Nov 12 20:47:39.323619 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:47:39.323663 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:47:39.323685 kernel: BIOS-provided physical RAM map:
Nov 12 20:47:39.323698 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 20:47:39.323709 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 20:47:39.323722 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 20:47:39.323736 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 12 20:47:39.323749 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 12 20:47:39.323761 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 12 20:47:39.323777 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 20:47:39.323798 kernel: NX (Execute Disable) protection: active
Nov 12 20:47:39.323810 kernel: APIC: Static calls initialized
Nov 12 20:47:39.323823 kernel: SMBIOS 2.8 present.
Nov 12 20:47:39.323836 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 12 20:47:39.323852 kernel: Hypervisor detected: KVM
Nov 12 20:47:39.323869 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:47:39.323882 kernel: kvm-clock: using sched offset of 3780314563 cycles
Nov 12 20:47:39.323901 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:47:39.323915 kernel: tsc: Detected 1999.997 MHz processor
Nov 12 20:47:39.323949 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:47:39.323963 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:47:39.323977 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 12 20:47:39.323990 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 20:47:39.324002 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:47:39.324019 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:47:39.324032 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 12 20:47:39.324045 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:47:39.324057 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:47:39.324069 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:47:39.324082 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 12 20:47:39.324095 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:47:39.324109 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:47:39.324122 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:47:39.324140 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:47:39.324154 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 12 20:47:39.324168 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 12 20:47:39.324182 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 12 20:47:39.324194 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 12 20:47:39.327664 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 12 20:47:39.327698 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 12 20:47:39.327730 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 12 20:47:39.327738 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 12 20:47:39.327746 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 12 20:47:39.327754 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 12 20:47:39.327777 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 12 20:47:39.327785 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Nov 12 20:47:39.327793 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Nov 12 20:47:39.327804 kernel: Zone ranges:
Nov 12 20:47:39.327812 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:47:39.327820 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 12 20:47:39.327836 kernel: Normal empty
Nov 12 20:47:39.327844 kernel: Movable zone start for each node
Nov 12 20:47:39.327852 kernel: Early memory node ranges
Nov 12 20:47:39.327859 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 20:47:39.327867 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 12 20:47:39.327874 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 12 20:47:39.327885 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:47:39.327896 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 20:47:39.327904 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 12 20:47:39.327911 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 20:47:39.327933 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:47:39.327942 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:47:39.327954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 20:47:39.327966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:47:39.327978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:47:39.327994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:47:39.328001 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:47:39.328009 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:47:39.328016 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:47:39.328024 kernel: TSC deadline timer available
Nov 12 20:47:39.328032 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 12 20:47:39.328039 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:47:39.328047 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 12 20:47:39.328054 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:47:39.328067 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:47:39.328077 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 12 20:47:39.328086 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 12 20:47:39.328098 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 12 20:47:39.328109 kernel: pcpu-alloc: [0] 0 1
Nov 12 20:47:39.328121 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 12 20:47:39.328131 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:47:39.328140 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:47:39.328151 kernel: random: crng init done
Nov 12 20:47:39.328159 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:47:39.328167 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:47:39.328175 kernel: Fallback order for Node 0: 0
Nov 12 20:47:39.328182 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Nov 12 20:47:39.328190 kernel: Policy zone: DMA32
Nov 12 20:47:39.328197 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:47:39.328206 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved)
Nov 12 20:47:39.328213 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 20:47:39.328223 kernel: Kernel/User page tables isolation: enabled
Nov 12 20:47:39.328231 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:47:39.328239 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:47:39.328247 kernel: Dynamic Preempt: voluntary
Nov 12 20:47:39.328254 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:47:39.328263 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:47:39.328271 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 20:47:39.328278 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:47:39.328286 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:47:39.328296 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:47:39.328304 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:47:39.328312 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 20:47:39.328319 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 12 20:47:39.328331 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
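The e820 map at the top of this boot is what the "Memory: 1971204K/2096612K available" accounting is derived from: only the regions marked "usable" reach the page allocator, while the reserved holes (legacy BIOS area, MMIO windows) are excluded. A minimal Python sketch of how one might total usable RAM from dmesg lines in exactly the format shown above (the regex and helper name are illustrative, not part of the log):

import re

# Matches entries like:
#   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(dmesg_text: str) -> int:
    """Sum the sizes of all 'usable' e820 regions (ranges are inclusive)."""
    total = 0
    for start, end, kind in E820_RE.findall(dmesg_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1
    return total

# For the two usable regions above this gives 0x9fc00 + 0x7fedb000 bytes,
# just under 2 GiB, consistent with the "Memory:" line later in this log.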
Nov 12 20:47:39.328339 kernel: Console: colour VGA+ 80x25
Nov 12 20:47:39.328346 kernel: printk: console [tty0] enabled
Nov 12 20:47:39.328354 kernel: printk: console [ttyS0] enabled
Nov 12 20:47:39.328362 kernel: ACPI: Core revision 20230628
Nov 12 20:47:39.328370 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 20:47:39.328380 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:47:39.328388 kernel: x2apic enabled
Nov 12 20:47:39.328399 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:47:39.328410 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:47:39.328422 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Nov 12 20:47:39.328433 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997)
Nov 12 20:47:39.328444 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 12 20:47:39.328457 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 12 20:47:39.328484 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:47:39.328497 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:47:39.328511 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:47:39.328528 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:47:39.328540 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 12 20:47:39.328552 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:47:39.328561 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:47:39.328569 kernel: MDS: Mitigation: Clear CPU buffers
Nov 12 20:47:39.328578 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:47:39.328596 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:47:39.328608 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:47:39.328622 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:47:39.328634 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:47:39.328650 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 12 20:47:39.328661 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:47:39.328673 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:47:39.328685 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:47:39.328703 kernel: landlock: Up and running.
Nov 12 20:47:39.328715 kernel: SELinux: Initializing.
Nov 12 20:47:39.328727 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:47:39.328739 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:47:39.328757 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 12 20:47:39.328769 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:47:39.328783 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:47:39.328794 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:47:39.328805 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 12 20:47:39.328822 kernel: signal: max sigframe size: 1776
Nov 12 20:47:39.328833 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:47:39.328846 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:47:39.328857 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 12 20:47:39.328869 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:47:39.328881 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:47:39.328903 kernel: .... node #0, CPUs: #1
Nov 12 20:47:39.328916 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 20:47:39.328954 kernel: smpboot: Max logical packages: 1
Nov 12 20:47:39.328974 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Nov 12 20:47:39.328986 kernel: devtmpfs: initialized
Nov 12 20:47:39.328999 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:47:39.329014 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:47:39.329028 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 20:47:39.329040 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:47:39.329052 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:47:39.329064 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:47:39.329077 kernel: audit: type=2000 audit(1731444458.001:1): state=initialized audit_enabled=0 res=1
Nov 12 20:47:39.329096 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:47:39.329108 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:47:39.329119 kernel: cpuidle: using governor menu
Nov 12 20:47:39.329132 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:47:39.329143 kernel: dca service started, version 1.12.1
Nov 12 20:47:39.329155 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:47:39.329167 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
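The BogoMIPS figures in these lines are pure arithmetic on the loops-per-jiffy value from the "Calibrating delay loop" line above. A sketch reproducing the kernel's printout, assuming CONFIG_HZ=1000 (the tick rate is not recorded in this log):

lpj = 1999997  # loops per jiffy, from the calibration line above
HZ = 1000      # assumption: a 1000 Hz tick, not stated in the log

def bogomips(loops: int) -> str:
    # Integer math mirroring the kernel's two-decimal printout.
    return f"{loops // (500000 // HZ)}.{(loops // (5000 // HZ)) % 100:02d}"

print(bogomips(lpj))      # 3999.99, the per-CPU value in the calibration line
print(bogomips(2 * lpj))  # 7999.98, the "Total of 2 processors activated" value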
Nov 12 20:47:39.329181 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:47:39.329194 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:47:39.329211 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:47:39.329223 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:47:39.329236 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:47:39.329249 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:47:39.329264 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:47:39.329278 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:47:39.329293 kernel: ACPI: Interpreter enabled
Nov 12 20:47:39.329308 kernel: ACPI: PM: (supports S0 S5)
Nov 12 20:47:39.329322 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:47:39.329340 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:47:39.329355 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:47:39.329369 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 12 20:47:39.329384 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:47:39.334744 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:47:39.335023 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 12 20:47:39.335196 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 12 20:47:39.335222 kernel: acpiphp: Slot [3] registered
Nov 12 20:47:39.335236 kernel: acpiphp: Slot [4] registered
Nov 12 20:47:39.335432 kernel: acpiphp: Slot [5] registered
Nov 12 20:47:39.335453 kernel: acpiphp: Slot [6] registered
Nov 12 20:47:39.335467 kernel: acpiphp: Slot [7] registered
Nov 12 20:47:39.335480 kernel: acpiphp: Slot [8] registered
Nov 12 20:47:39.335494 kernel: acpiphp: Slot [9] registered
Nov 12 20:47:39.335508 kernel: acpiphp: Slot [10] registered
Nov 12 20:47:39.335520 kernel: acpiphp: Slot [11] registered
Nov 12 20:47:39.335542 kernel: acpiphp: Slot [12] registered
Nov 12 20:47:39.335557 kernel: acpiphp: Slot [13] registered
Nov 12 20:47:39.335570 kernel: acpiphp: Slot [14] registered
Nov 12 20:47:39.335582 kernel: acpiphp: Slot [15] registered
Nov 12 20:47:39.335597 kernel: acpiphp: Slot [16] registered
Nov 12 20:47:39.335611 kernel: acpiphp: Slot [17] registered
Nov 12 20:47:39.335627 kernel: acpiphp: Slot [18] registered
Nov 12 20:47:39.335642 kernel: acpiphp: Slot [19] registered
Nov 12 20:47:39.335657 kernel: acpiphp: Slot [20] registered
Nov 12 20:47:39.335669 kernel: acpiphp: Slot [21] registered
Nov 12 20:47:39.335685 kernel: acpiphp: Slot [22] registered
Nov 12 20:47:39.335697 kernel: acpiphp: Slot [23] registered
Nov 12 20:47:39.335711 kernel: acpiphp: Slot [24] registered
Nov 12 20:47:39.335804 kernel: acpiphp: Slot [25] registered
Nov 12 20:47:39.335821 kernel: acpiphp: Slot [26] registered
Nov 12 20:47:39.335886 kernel: acpiphp: Slot [27] registered
Nov 12 20:47:39.335901 kernel: acpiphp: Slot [28] registered
Nov 12 20:47:39.335914 kernel: acpiphp: Slot [29] registered
Nov 12 20:47:39.336013 kernel: acpiphp: Slot [30] registered
Nov 12 20:47:39.336083 kernel: acpiphp: Slot [31] registered
Nov 12 20:47:39.336106 kernel: PCI host bridge to bus 0000:00
Nov 12 20:47:39.336684 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:47:39.336823 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:47:39.340185 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:47:39.340380 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 12 20:47:39.340518 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 12 20:47:39.340652 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:47:39.340871 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 12 20:47:39.341062 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 12 20:47:39.341250 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 12 20:47:39.341414 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 12 20:47:39.341563 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 12 20:47:39.341748 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 12 20:47:39.341908 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 12 20:47:39.343280 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 12 20:47:39.343447 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 12 20:47:39.343593 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 12 20:47:39.343773 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 12 20:47:39.345097 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 12 20:47:39.345301 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 12 20:47:39.345468 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 12 20:47:39.345606 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 12 20:47:39.345791 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 12 20:47:39.347272 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 12 20:47:39.347487 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 12 20:47:39.347640 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:47:39.347825 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:47:39.348026 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 12 20:47:39.348219 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 12 20:47:39.348378 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 12 20:47:39.348547 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:47:39.348704 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 12 20:47:39.348866 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 12 20:47:39.358262 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 12 20:47:39.358496 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 12 20:47:39.358647 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 12 20:47:39.358795 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 12 20:47:39.358978 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 12 20:47:39.359152 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:47:39.359310 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 12 20:47:39.359478 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 12 20:47:39.359627 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 12 20:47:39.359796 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:47:39.359974 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 12 20:47:39.360121 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 12 20:47:39.360266 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 12 20:47:39.360428 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 12 20:47:39.360590 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 12 20:47:39.360734 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 12 20:47:39.360753 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:47:39.360768 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:47:39.360783 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:47:39.360799 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:47:39.360814 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 12 20:47:39.360834 kernel: iommu: Default domain type: Translated
Nov 12 20:47:39.360849 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:47:39.360864 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:47:39.360879 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:47:39.360894 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 20:47:39.360909 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 12 20:47:39.361778 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 12 20:47:39.361956 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 12 20:47:39.362100 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:47:39.362127 kernel: vgaarb: loaded
Nov 12 20:47:39.362143 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 20:47:39.362158 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 20:47:39.362173 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:47:39.362188 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:47:39.362202 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:47:39.362215 kernel: pnp: PnP ACPI init
Nov 12 20:47:39.362228 kernel: pnp: PnP ACPI: found 4 devices
Nov 12 20:47:39.362239 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:47:39.362255 kernel: NET: Registered PF_INET protocol family
Nov 12 20:47:39.362267 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:47:39.362280 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 12 20:47:39.362292 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:47:39.362303 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:47:39.362313 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 12 20:47:39.362326 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 12 20:47:39.362338 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 20:47:39.362351 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 20:47:39.362366 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:47:39.362378 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:47:39.362523 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:47:39.362649 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:47:39.362776 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:47:39.362905 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 12 20:47:39.365193 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 12 20:47:39.365362 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 12 20:47:39.365524 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 12 20:47:39.365545 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 12 20:47:39.365701 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 42862 usecs
Nov 12 20:47:39.365719 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:47:39.365734 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:47:39.365750 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Nov 12 20:47:39.365765 kernel: Initialise system trusted keyrings
Nov 12 20:47:39.365780 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 12 20:47:39.365801 kernel: Key type asymmetric registered
Nov 12 20:47:39.365815 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:47:39.365830 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:47:39.365845 kernel: io scheduler mq-deadline registered
Nov 12 20:47:39.365859 kernel: io scheduler kyber registered
Nov 12 20:47:39.365874 kernel: io scheduler bfq registered
Nov 12 20:47:39.365888 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:47:39.365904 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 12 20:47:39.365938 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 12 20:47:39.365952 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 12 20:47:39.365972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:47:39.365987 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:47:39.366001 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:47:39.366016 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:47:39.366031 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:47:39.366312 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 12 20:47:39.366451 kernel: rtc_cmos 00:03: registered as rtc0
Nov 12 20:47:39.366588 kernel: rtc_cmos 00:03: setting system clock to 2024-11-12T20:47:38 UTC (1731444458)
Nov 12 20:47:39.366732 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 12 20:47:39.366750 kernel: intel_pstate: CPU model not supported
Nov 12 20:47:39.366765 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:47:39.366780 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:47:39.366795 kernel: Segment Routing with IPv6
Nov 12 20:47:39.366810 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:47:39.366824 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:47:39.366839 kernel: Key type dns_resolver registered
Nov 12 20:47:39.366859 kernel: IPI shorthand broadcast: enabled
Nov 12 20:47:39.366876 kernel: sched_clock: Marking stable (1530007461, 174795653)->(1867223966, -162420852)
Nov 12 20:47:39.366891 kernel: registered taskstats version 1
Nov 12 20:47:39.366906 kernel: Loading compiled-in X.509 certificates
Nov 12 20:47:39.368947 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:47:39.368967 kernel: Key type .fscrypt registered
Nov 12 20:47:39.368983 kernel: Key type fscrypt-provisioning registered
Nov 12 20:47:39.368999 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:47:39.369014 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:47:39.369036 kernel: ima: No architecture policies found
Nov 12 20:47:39.369050 kernel: clk: Disabling unused clocks
Nov 12 20:47:39.369064 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:47:39.369080 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:47:39.369117 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:47:39.369136 kernel: Run /init as init process
Nov 12 20:47:39.369151 kernel: with arguments:
Nov 12 20:47:39.369166 kernel: /init
Nov 12 20:47:39.369181 kernel: with environment:
Nov 12 20:47:39.369200 kernel: HOME=/
Nov 12 20:47:39.369214 kernel: TERM=linux
Nov 12 20:47:39.369229 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:47:39.369247 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:47:39.369267 systemd[1]: Detected virtualization kvm.
Nov 12 20:47:39.369284 systemd[1]: Detected architecture x86-64.
Nov 12 20:47:39.369299 systemd[1]: Running in initrd.
Nov 12 20:47:39.369315 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:47:39.369334 systemd[1]: Hostname set to <localhost>.
Nov 12 20:47:39.369351 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:47:39.369367 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:47:39.369383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:47:39.369400 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:47:39.369417 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:47:39.369433 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:47:39.369449 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:47:39.369470 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:47:39.369489 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:47:39.369506 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:47:39.369522 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:47:39.369538 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:47:39.369554 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:47:39.369575 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:47:39.369592 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:47:39.369612 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:47:39.369628 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:47:39.369645 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
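All of the 1af4 devices enumerated on bus 0000:00 earlier in this log are virtio devices; the PCI device ID identifies the virtio device type. A small Python lookup, using the conventional virtio-pci IDs (the table is an assumption drawn from the virtio spec, not from this log):

# Virtio PCI device IDs for vendor 0x1af4 (Red Hat / virtio).
VIRTIO_IDS = {
    0x1000: "network (virtio-net)",
    0x1001: "block (virtio-blk)",
    0x1002: "memory balloon",
    0x1004: "SCSI host (virtio-scsi)",
    0x1050: "GPU (modern ID 0x1040 + device type 16)",
}

# Device IDs in enumeration order from the log: 00:02.0 through 00:08.0.
for devid in (0x1050, 0x1000, 0x1000, 0x1004, 0x1001, 0x1001, 0x1002):
    print(f"1af4:{devid:04x} -> {VIRTIO_IDS[devid]}")

This is consistent with the class codes printed above: one display device, two NICs, a virtio-scsi HBA, two virtio-blk disks, and a balloon device.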
Nov 12 20:47:39.369723 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:47:39.369744 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:47:39.369760 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:47:39.369776 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:47:39.369793 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:47:39.369809 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:47:39.369826 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:47:39.369842 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:47:39.369858 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:47:39.369878 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:47:39.369894 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:47:39.369910 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:47:39.369999 systemd-journald[183]: Collecting audit messages is disabled.
Nov 12 20:47:39.370055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:47:39.370083 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:47:39.370112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:47:39.370131 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:47:39.370159 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:47:39.370188 systemd-journald[183]: Journal started
Nov 12 20:47:39.370223 systemd-journald[183]: Runtime Journal (/run/log/journal/961b14bbfd4e428284b0864e906debf8) is 4.9M, max 39.3M, 34.4M free.
Nov 12 20:47:39.372058 systemd-modules-load[184]: Inserted module 'overlay'
Nov 12 20:47:39.429996 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:47:39.430037 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:47:39.430073 kernel: Bridge firewalling registered
Nov 12 20:47:39.419564 systemd-modules-load[184]: Inserted module 'br_netfilter'
Nov 12 20:47:39.430133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:47:39.431769 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:47:39.433606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:47:39.454315 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:47:39.458393 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:47:39.462235 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:47:39.466345 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:47:39.496312 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:47:39.500657 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:47:39.503040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:47:39.504276 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:47:39.513386 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:47:39.518194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:47:39.538079 dracut-cmdline[217]: dracut-dracut-053
Nov 12 20:47:39.571395 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:47:39.627943 systemd-resolved[218]: Positive Trust Anchors:
Nov 12 20:47:39.627960 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:47:39.628018 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:47:39.632651 systemd-resolved[218]: Defaulting to hostname 'linux'.
Nov 12 20:47:39.634774 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:47:39.637832 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:47:39.793122 kernel: SCSI subsystem initialized
Nov 12 20:47:39.808059 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:47:39.846959 kernel: iscsi: registered transport (tcp)
Nov 12 20:47:39.889086 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:47:39.889293 kernel: QLogic iSCSI HBA Driver
Nov 12 20:47:39.999882 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:47:40.017233 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:47:40.058232 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:47:40.058340 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:47:40.060549 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:47:40.139067 kernel: raid6: avx2x4 gen() 19869 MB/s
Nov 12 20:47:40.156010 kernel: raid6: avx2x2 gen() 19152 MB/s
Nov 12 20:47:40.174335 kernel: raid6: avx2x1 gen() 13929 MB/s
Nov 12 20:47:40.174417 kernel: raid6: using algorithm avx2x4 gen() 19869 MB/s
Nov 12 20:47:40.193973 kernel: raid6: .... xor() 7129 MB/s, rmw enabled
Nov 12 20:47:40.194075 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:47:40.225150 kernel: xor: automatically using best checksumming function avx
Nov 12 20:47:40.467741 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:47:40.487840 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:47:40.495352 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
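The dracut hook above re-echoes the kernel command line, in which rootflags=rw and mount.usrflags=ro appear twice (once prepended by the boot stub, once among the baked-in arguments); for such duplicates the kernel and most tooling treat the last occurrence as authoritative. A Python sketch of that parse, assuming simple whitespace-separated key=value tokens (quoting is ignored for brevity; the function name is illustrative):

def parse_cmdline(cmdline: str) -> dict:
    """Parse a kernel command line; repeated keys keep the last value,
    bare flags without '=' map to True."""
    args = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        args[key] = value if sep else True
    return args

args = parse_cmdline(
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "root=LABEL=ROOT rootflags=rw mount.usrflags=ro flatcar.first_boot=detected"
)
assert args["root"] == "LABEL=ROOT" and args["rootflags"] == "rw"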
Nov 12 20:47:40.536262 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Nov 12 20:47:40.543170 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:47:40.559165 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:47:40.607506 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Nov 12 20:47:40.663468 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:47:40.674217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:47:40.744431 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:47:40.756355 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:47:40.793978 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:47:40.799250 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:47:40.800194 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:47:40.804096 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:47:40.812733 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:47:40.836416 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:47:40.898713 kernel: scsi host0: Virtio SCSI HBA
Nov 12 20:47:40.916981 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 12 20:47:40.965435 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 12 20:47:40.965692 kernel: ACPI: bus type USB registered
Nov 12 20:47:40.965716 kernel: usbcore: registered new interface driver usbfs
Nov 12 20:47:40.965736 kernel: usbcore: registered new interface driver hub
Nov 12 20:47:40.965756 kernel: usbcore: registered new device driver usb
Nov 12 20:47:40.965776 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:47:40.965795 kernel: GPT:9289727 != 125829119
Nov 12 20:47:40.965831 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:47:40.965848 kernel: GPT:9289727 != 125829119
Nov 12 20:47:40.965867 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:47:40.965886 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:47:40.965906 kernel: libata version 3.00 loaded.
Nov 12 20:47:40.965957 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 12 20:47:41.026783 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:47:41.026823 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Nov 12 20:47:41.027147 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 12 20:47:41.043731 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:47:41.043760 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:47:41.043777 kernel: scsi host1: ata_piix
Nov 12 20:47:41.044057 kernel: scsi host2: ata_piix
Nov 12 20:47:41.044248 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 12 20:47:41.044269 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 12 20:47:40.956916 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
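The GPT warnings above are the usual signature of a cloud image written to a larger disk: the primary header still points at the backup header from the original, smaller image rather than at the last LBA of the 60 GiB virtual disk. The numbers in the log make this concrete (a sketch; only the two LBA values and the sector size are taken from the log):

SECTOR = 512
alt_header_lba = 9289727          # where the image's backup GPT header sits
disk_sectors   = 125829120        # actual disk size, from the virtio_blk line
expected_lba   = disk_sectors - 1 # the backup header belongs on the last LBA

print(f"image size: {(alt_header_lba + 1) * SECTOR / 2**30:.1f} GiB")  # ~4.4 GiB
print(f"disk size:  {disk_sectors * SECTOR / 2**30:.1f} GiB")          # 60.0 GiB
print(f"backup header is {expected_lba - alt_header_lba} sectors early")

Rewriting the backup structures at the true end of the disk resolves the warning, which appears to be what disk-uuid.service does below ("Primary Header is updated ... Secondary Header is updated").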
Nov 12 20:47:41.132261 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459)
Nov 12 20:47:41.132300 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (457)
Nov 12 20:47:40.966582 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:47:40.967676 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:47:40.969014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:47:40.969305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:47:40.972256 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:47:40.981150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:47:41.135741 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:47:41.137850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:47:41.158297 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:47:41.171223 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:47:41.176508 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:47:41.177518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:47:41.194388 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:47:41.199882 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:47:41.217613 disk-uuid[531]: Primary Header is updated.
Nov 12 20:47:41.217613 disk-uuid[531]: Secondary Entries is updated.
Nov 12 20:47:41.217613 disk-uuid[531]: Secondary Header is updated.
Nov 12 20:47:41.233807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:47:41.241016 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:47:41.284966 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 12 20:47:41.297424 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 12 20:47:41.297861 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 12 20:47:41.298182 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 12 20:47:41.298374 kernel: hub 1-0:1.0: USB hub found
Nov 12 20:47:41.298595 kernel: hub 1-0:1.0: 2 ports detected
Nov 12 20:47:41.288007 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:47:42.255131 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:47:42.256826 disk-uuid[541]: The operation has completed successfully.
Nov 12 20:47:42.311216 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:47:42.311398 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:47:42.333421 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:47:42.349203 sh[562]: Success
Nov 12 20:47:42.369984 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:47:42.468135 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:47:42.482185 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:47:42.490645 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:47:42.547154 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:47:42.547293 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:47:42.547335 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:47:42.550121 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:47:42.551707 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:47:42.570524 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:47:42.573250 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:47:42.598821 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:47:42.602261 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:47:42.642046 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:47:42.642140 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:47:42.642160 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:47:42.660402 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:47:42.685751 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:47:42.687686 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:47:42.700323 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:47:42.711219 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:47:42.921572 ignition[650]: Ignition 2.19.0
Nov 12 20:47:42.921593 ignition[650]: Stage: fetch-offline
Nov 12 20:47:42.921691 ignition[650]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:47:42.921707 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:47:42.925335 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:47:42.921877 ignition[650]: parsed url from cmdline: ""
Nov 12 20:47:42.921887 ignition[650]: no config URL provided
Nov 12 20:47:42.921896 ignition[650]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:47:42.921913 ignition[650]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:47:42.921908 ignition[650]: failed to fetch config: resource requires networking
Nov 12 20:47:42.922383 ignition[650]: Ignition finished successfully
Nov 12 20:47:42.939358 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:47:42.947289 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:47:42.981555 systemd-networkd[752]: lo: Link UP
Nov 12 20:47:42.981570 systemd-networkd[752]: lo: Gained carrier
Nov 12 20:47:42.984998 systemd-networkd[752]: Enumeration completed
Nov 12 20:47:42.985163 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:47:42.986619 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 12 20:47:42.986623 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 12 20:47:42.987804 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:47:42.987809 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:47:42.989262 systemd-networkd[752]: eth0: Link UP
Nov 12 20:47:42.989267 systemd-networkd[752]: eth0: Gained carrier
Nov 12 20:47:42.989279 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 12 20:47:42.989733 systemd[1]: Reached target network.target - Network.
Nov 12 20:47:42.994048 systemd-networkd[752]: eth1: Link UP
Nov 12 20:47:42.994054 systemd-networkd[752]: eth1: Gained carrier
Nov 12 20:47:42.994074 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:47:42.997331 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:47:43.012428 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.16/20 acquired from 169.254.169.253
Nov 12 20:47:43.019068 systemd-networkd[752]: eth0: DHCPv4 address 143.198.78.43/20, gateway 143.198.64.1 acquired from 169.254.169.253
Nov 12 20:47:43.033047 ignition[754]: Ignition 2.19.0
Nov 12 20:47:43.033069 ignition[754]: Stage: fetch
Nov 12 20:47:43.033371 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:47:43.033388 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:47:43.033505 ignition[754]: parsed url from cmdline: ""
Nov 12 20:47:43.033509 ignition[754]: no config URL provided
Nov 12 20:47:43.033515 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:47:43.033524 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:47:43.033544 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 12 20:47:43.056271 ignition[754]: GET result: OK
Nov 12 20:47:43.057307 ignition[754]: parsing config with SHA512: e16bd05042a614ac93237c852787da5d8e6919bf3c2603a3a7e97e9baa359f411eedd62f83e75fb98c44ad061b08bc84222bd851db4545533ae6f448ad98b808
Nov 12 20:47:43.063138 unknown[754]: fetched base config from "system"
Nov 12 20:47:43.063153 unknown[754]: fetched base config from "system"
Nov 12 20:47:43.064010 ignition[754]: fetch: fetch complete
Nov 12 20:47:43.063160 unknown[754]: fetched user config from "digitalocean"
Nov 12 20:47:43.064016 ignition[754]: fetch: fetch passed
Nov 12 20:47:43.067274 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:47:43.064087 ignition[754]: Ignition finished successfully
Nov 12 20:47:43.078286 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:47:43.112629 ignition[761]: Ignition 2.19.0
Nov 12 20:47:43.112650 ignition[761]: Stage: kargs
Nov 12 20:47:43.112973 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:47:43.112991 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:47:43.117443 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:47:43.114550 ignition[761]: kargs: kargs passed
Nov 12 20:47:43.114660 ignition[761]: Ignition finished successfully
Nov 12 20:47:43.128305 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
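Ignition's fetch stage above pulls the user-provided config from the DigitalOcean metadata service at a link-local address, the same endpoint family the coreos-metadata agents hit later in this log. A standard-library-only Python sketch of those requests (the URLs are taken from the log; the timeout and error handling are assumptions):

import json
import urllib.request

def fetch(url: str) -> bytes:
    # The metadata service is link-local: unauthenticated, and reachable
    # only from inside the droplet itself.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read()

# What ignition[754] GETs above:
user_data = fetch("http://169.254.169.254/metadata/v1/user-data")
# What the coreos-metadata agents fetch later for hostname and network layout:
metadata = json.loads(fetch("http://169.254.169.254/metadata/v1.json"))
print(metadata.get("hostname"))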
Nov 12 20:47:43.156697 ignition[767]: Ignition 2.19.0
Nov 12 20:47:43.156723 ignition[767]: Stage: disks
Nov 12 20:47:43.157144 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:47:43.157164 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:47:43.162275 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:47:43.159009 ignition[767]: disks: disks passed
Nov 12 20:47:43.170356 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:47:43.159102 ignition[767]: Ignition finished successfully
Nov 12 20:47:43.171658 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:47:43.173373 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:47:43.174515 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:47:43.176007 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:47:43.184469 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:47:43.211840 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:47:43.217870 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:47:43.226193 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:47:43.397190 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:47:43.398725 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:47:43.400429 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:47:43.414317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:47:43.419370 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:47:43.425269 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Nov 12 20:47:43.438980 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (783)
Nov 12 20:47:43.443487 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 12 20:47:43.445652 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:47:43.449365 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:47:43.449449 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:47:43.451151 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:47:43.451247 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:47:43.463004 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:47:43.464889 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:47:43.477442 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:47:43.484593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:47:43.583032 coreos-metadata[785]: Nov 12 20:47:43.581 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 12 20:47:43.588035 coreos-metadata[786]: Nov 12 20:47:43.587 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 12 20:47:43.595624 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 20:47:43.598979 coreos-metadata[785]: Nov 12 20:47:43.598 INFO Fetch successful Nov 12 20:47:43.605948 coreos-metadata[786]: Nov 12 20:47:43.604 INFO Fetch successful Nov 12 20:47:43.609197 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Nov 12 20:47:43.610715 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Nov 12 20:47:43.615115 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Nov 12 20:47:43.616904 coreos-metadata[786]: Nov 12 20:47:43.616 INFO wrote hostname ci-4081.2.0-d-ef96bd2a01 to /sysroot/etc/hostname Nov 12 20:47:43.620243 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 12 20:47:43.624526 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 20:47:43.632314 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 20:47:43.824481 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 20:47:43.854200 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 20:47:43.858257 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 20:47:43.873020 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 20:47:43.878005 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:47:43.953230 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 20:47:43.971677 ignition[903]: INFO : Ignition 2.19.0 Nov 12 20:47:43.973604 ignition[903]: INFO : Stage: mount Nov 12 20:47:43.975914 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:47:43.975914 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:47:43.978509 ignition[903]: INFO : mount: mount passed Nov 12 20:47:43.978509 ignition[903]: INFO : Ignition finished successfully Nov 12 20:47:43.981903 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 20:47:43.992220 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 20:47:44.029296 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:47:44.050975 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (916) Nov 12 20:47:44.054071 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:47:44.054204 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:47:44.056396 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:47:44.066981 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:47:44.073798 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
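The hostname agent above fetches /metadata/v1.json and writes the droplet's name into /sysroot/etc/hostname. A rough manual equivalent, assuming jq is available and that the top-level hostname field follows DigitalOcean's documented metadata schema:

    curl -s http://169.254.169.254/metadata/v1.json | jq -r '.hostname'
    # should print ci-4081.2.0-d-ef96bd2a01, matching the value written above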
Nov 12 20:47:44.147040 ignition[933]: INFO : Ignition 2.19.0 Nov 12 20:47:44.147040 ignition[933]: INFO : Stage: files Nov 12 20:47:44.147040 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:47:44.147040 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:47:44.152382 ignition[933]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:47:44.154191 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:47:44.154191 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:47:44.159380 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:47:44.161908 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:47:44.164339 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:47:44.162538 unknown[933]: wrote ssh authorized keys file for user: core Nov 12 20:47:44.166546 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:47:44.166546 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:47:44.210164 systemd-networkd[752]: eth0: Gained IPv6LL Nov 12 20:47:44.222772 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:47:44.309964 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:47:44.309964 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:47:44.309964 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:47:44.309964 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:47:44.309964 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:47:44.309964 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:47:44.309964 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:47:44.309964 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:47:44.309964 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:47:44.309964 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:47:44.322716 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:47:44.322716 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:47:44.322716 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:47:44.322716 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:47:44.322716 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 20:47:44.827208 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 20:47:44.915589 systemd-networkd[752]: eth1: Gained IPv6LL Nov 12 20:47:45.228210 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:47:45.228210 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 20:47:45.232899 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:47:45.232899 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:47:45.232899 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 20:47:45.232899 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:47:45.232899 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:47:45.232899 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:47:45.232899 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:47:45.232899 ignition[933]: INFO : files: files passed Nov 12 20:47:45.232899 ignition[933]: INFO : Ignition finished successfully Nov 12 20:47:45.233348 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:47:45.249389 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:47:45.257203 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 20:47:45.263337 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:47:45.263535 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:47:45.280738 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:47:45.280738 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:47:45.284420 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:47:45.285075 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:47:45.287336 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:47:45.301403 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:47:45.349234 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
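The files-stage operations logged above (the "core" user and its SSH keys, remote GETs for the helm tarball and the kubernetes sysext image, the /etc/extensions link, and the prepare-helm.service preset) map one-to-one onto sections of an Ignition config. The excerpt below is a hand-written reconstruction from the logged operations, not the actual user-data; the spec version, the placeholder SSH key, and the omission of the unit's contents are assumptions:

    cat > /tmp/example.ign <<'EOF'
    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": [ "ssh-ed25519 AAAA... user@example" ] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [ { "name": "prepare-helm.service", "enabled": true } ]
      }
    }
    EOF
    # if the ignition-validate tool is installed, the excerpt can be checked with:
    # ignition-validate /tmp/example.ign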
Nov 12 20:47:45.349411 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:47:45.352076 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:47:45.352947 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:47:45.354519 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:47:45.362378 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:47:45.389171 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:47:45.399277 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:47:45.418981 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:47:45.421750 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:47:45.422917 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:47:45.425132 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:47:45.425363 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:47:45.428097 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:47:45.429482 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:47:45.430627 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:47:45.434327 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:47:45.435739 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:47:45.440508 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:47:45.442008 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:47:45.443377 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:47:45.445412 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:47:45.447810 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:47:45.450840 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:47:45.451153 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:47:45.452877 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:47:45.454172 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:47:45.455400 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:47:45.456066 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:47:45.461187 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:47:45.461480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:47:45.463212 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:47:45.463429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:47:45.465866 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:47:45.466087 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:47:45.467826 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 12 20:47:45.468056 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Nov 12 20:47:45.478594 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:47:45.481582 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:47:45.483594 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:47:45.483982 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:47:45.489530 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:47:45.489717 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:47:45.505541 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:47:45.507107 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:47:45.527709 ignition[985]: INFO : Ignition 2.19.0 Nov 12 20:47:45.529563 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:47:45.535432 ignition[985]: INFO : Stage: umount Nov 12 20:47:45.535432 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:47:45.535432 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:47:45.535432 ignition[985]: INFO : umount: umount passed Nov 12 20:47:45.535432 ignition[985]: INFO : Ignition finished successfully Nov 12 20:47:45.538685 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:47:45.538895 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:47:45.540577 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:47:45.540738 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:47:45.559149 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:47:45.559315 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:47:45.560183 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:47:45.560262 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:47:45.560999 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 12 20:47:45.561084 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 12 20:47:45.561805 systemd[1]: Stopped target network.target - Network. Nov 12 20:47:45.568188 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:47:45.568342 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:47:45.572462 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:47:45.573171 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:47:45.573290 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:47:45.577322 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:47:45.580211 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:47:45.580958 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:47:45.581048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:47:45.581859 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:47:45.581955 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:47:45.584156 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:47:45.584258 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:47:45.588265 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:47:45.588368 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Nov 12 20:47:45.589074 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:47:45.589142 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:47:45.595294 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:47:45.596681 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:47:45.605221 systemd-networkd[752]: eth1: DHCPv6 lease lost Nov 12 20:47:45.609439 systemd-networkd[752]: eth0: DHCPv6 lease lost Nov 12 20:47:45.611648 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:47:45.611846 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:47:45.615351 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:47:45.615536 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:47:45.617854 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:47:45.618346 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:47:45.625380 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:47:45.629393 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:47:45.629521 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:47:45.631111 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:47:45.631216 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:47:45.637483 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:47:45.637590 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:47:45.642518 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:47:45.642640 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:47:45.643662 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:47:45.662591 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:47:45.662897 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:47:45.667021 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:47:45.667264 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:47:45.670684 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:47:45.670816 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:47:45.671872 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:47:45.671971 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:47:45.674421 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:47:45.674563 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:47:45.677415 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:47:45.677528 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:47:45.679291 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:47:45.679402 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:47:45.687424 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:47:45.688418 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Nov 12 20:47:45.688542 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:47:45.690303 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 20:47:45.690391 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:47:45.692026 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:47:45.692113 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:47:45.695017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:47:45.695094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:45.707552 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:47:45.707719 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:47:45.710719 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:47:45.719625 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:47:45.746562 systemd[1]: Switching root. Nov 12 20:47:45.832221 systemd-journald[183]: Journal stopped Nov 12 20:47:47.769631 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Nov 12 20:47:47.769729 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 20:47:47.769755 kernel: SELinux: policy capability open_perms=1 Nov 12 20:47:47.769770 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 20:47:47.769787 kernel: SELinux: policy capability always_check_network=0 Nov 12 20:47:47.769805 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 20:47:47.769829 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 20:47:47.769844 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 20:47:47.769860 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 20:47:47.769882 kernel: audit: type=1403 audit(1731444466.140:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 20:47:47.769903 systemd[1]: Successfully loaded SELinux policy in 53.543ms. Nov 12 20:47:47.770242 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.563ms. Nov 12 20:47:47.770264 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:47:47.770524 systemd[1]: Detected virtualization kvm. Nov 12 20:47:47.770554 systemd[1]: Detected architecture x86-64. Nov 12 20:47:47.770571 systemd[1]: Detected first boot. Nov 12 20:47:47.770589 systemd[1]: Hostname set to <ci-4081.2.0-d-ef96bd2a01>. Nov 12 20:47:47.770607 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:47:47.770623 zram_generator::config[1028]: No configuration found. Nov 12 20:47:47.770644 systemd[1]: Populated /etc with preset unit settings. Nov 12 20:47:47.770661 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 20:47:47.770685 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 20:47:47.770707 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 20:47:47.770729 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
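"Initializing machine ID from VM UUID" above means systemd seeded /etc/machine-id on first boot from the DMI product UUID the hypervisor exposes to the guest. A sketch for checking that relationship afterwards (reading product_uuid usually requires root; the machine ID should be the same 128-bit value, lowercased with the dashes removed):

    cat /sys/class/dmi/id/product_uuid   # UUID provided by the hypervisor
    cat /etc/machine-id                  # machine ID systemd derived from it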
Nov 12 20:47:47.770746 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 20:47:47.770763 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 20:47:47.770782 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 20:47:47.770799 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 20:47:47.770825 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 20:47:47.770844 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 20:47:47.770863 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 20:47:47.770889 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:47:47.770952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:47:47.770971 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 20:47:47.770989 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 20:47:47.771007 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 20:47:47.771025 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:47:47.771041 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 20:47:47.771058 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:47:47.771076 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 20:47:47.771101 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 20:47:47.771120 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 20:47:47.771138 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 20:47:47.771157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:47:47.771175 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:47:47.771194 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:47:47.771217 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:47:47.771235 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 20:47:47.771251 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 20:47:47.771269 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:47:47.771287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:47:47.771305 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:47:47.771321 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 20:47:47.771338 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 20:47:47.771356 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 20:47:47.771377 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 20:47:47.771397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:47.771415 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Nov 12 20:47:47.771443 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 20:47:47.771462 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 20:47:47.771481 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:47:47.771498 systemd[1]: Reached target machines.target - Containers. Nov 12 20:47:47.771514 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 20:47:47.771531 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:47:47.771553 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:47:47.771571 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 20:47:47.771588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:47:47.771606 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:47:47.771624 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:47:47.771643 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 20:47:47.771662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:47:47.771681 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 20:47:47.771713 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 20:47:47.771731 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 20:47:47.771752 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 20:47:47.771774 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 20:47:47.771794 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:47:47.771813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:47:47.771834 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 20:47:47.771857 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 20:47:47.771875 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:47:47.771897 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 20:47:47.771917 systemd[1]: Stopped verity-setup.service. Nov 12 20:47:47.771967 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:47.771980 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 20:47:47.771992 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:47:47.772004 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:47:47.772016 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 20:47:47.772031 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:47:47.772042 kernel: ACPI: bus type drm_connector registered Nov 12 20:47:47.772054 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:47:47.772066 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
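The modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop services being started above are all instances of a single template unit whose job is simply to run modprobe on its instance name. A quick way to confirm that on any systemd host:

    systemctl cat modprobe@loop.service   # shows the ExecStart line invoking modprobe with the instance name
    # which for this instance is roughly equivalent to:
    # modprobe loop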
Nov 12 20:47:47.772081 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 20:47:47.772092 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:47:47.772105 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:47:47.772118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:47:47.772130 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:47:47.772141 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:47:47.772153 kernel: fuse: init (API version 7.39) Nov 12 20:47:47.772163 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:47:47.772301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:47:47.772318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:47:47.772386 systemd-journald[1104]: Collecting audit messages is disabled. Nov 12 20:47:47.772412 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:47:47.772431 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:47:47.772449 systemd-journald[1104]: Journal started Nov 12 20:47:47.772500 systemd-journald[1104]: Runtime Journal (/run/log/journal/961b14bbfd4e428284b0864e906debf8) is 4.9M, max 39.3M, 34.4M free. Nov 12 20:47:47.224067 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:47:47.262110 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 20:47:47.262858 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 20:47:47.775740 kernel: loop: module loaded Nov 12 20:47:47.775814 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:47:47.780110 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:47:47.780384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:47:47.782316 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:47:47.783937 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:47:47.785360 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 20:47:47.809208 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:47:47.818106 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:47:47.829081 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:47:47.831091 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:47:47.831159 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:47:47.834803 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:47:47.852295 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:47:47.857804 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:47:47.858856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:47:47.869207 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:47:47.872168 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
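The journald startup above reports a 4.9M runtime journal under /run/log/journal with a 39.3M cap. Both figures can be re-checked at any time with standard queries; a sketch:

    journalctl --disk-usage              # space used by runtime and persistent journals
    journalctl -b -u systemd-journald    # this boot's journald messages, including the size line above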
Nov 12 20:47:47.872996 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:47:47.877170 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:47:47.878140 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:47:47.882301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:47:47.888509 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:47:47.894267 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:47:47.900164 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:47:47.903367 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:47:47.904455 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:47:47.973113 systemd-journald[1104]: Time spent on flushing to /var/log/journal/961b14bbfd4e428284b0864e906debf8 is 150.413ms for 988 entries. Nov 12 20:47:47.973113 systemd-journald[1104]: System Journal (/var/log/journal/961b14bbfd4e428284b0864e906debf8) is 8.0M, max 195.6M, 187.6M free. Nov 12 20:47:48.155298 systemd-journald[1104]: Received client request to flush runtime journal. Nov 12 20:47:48.155428 kernel: loop0: detected capacity change from 0 to 142488 Nov 12 20:47:48.155465 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:47:47.975091 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:47:47.992341 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:47:48.037047 udevadm[1154]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 20:47:48.041687 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:47:48.065023 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:47:48.066243 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:47:48.078444 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:47:48.126517 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Nov 12 20:47:48.126540 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Nov 12 20:47:48.162789 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:47:48.179200 kernel: loop1: detected capacity change from 0 to 8 Nov 12 20:47:48.169439 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:47:48.191373 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:47:48.198474 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:47:48.200625 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 20:47:48.223797 kernel: loop2: detected capacity change from 0 to 140768 Nov 12 20:47:48.277442 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Nov 12 20:47:48.288887 kernel: loop3: detected capacity change from 0 to 211296 Nov 12 20:47:48.304256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:47:48.365341 kernel: loop4: detected capacity change from 0 to 142488 Nov 12 20:47:48.402679 kernel: loop5: detected capacity change from 0 to 8 Nov 12 20:47:48.412972 kernel: loop6: detected capacity change from 0 to 140768 Nov 12 20:47:48.421598 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Nov 12 20:47:48.422618 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Nov 12 20:47:48.442374 kernel: loop7: detected capacity change from 0 to 211296 Nov 12 20:47:48.454672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:47:48.467420 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Nov 12 20:47:48.468977 (sd-merge)[1175]: Merged extensions into '/usr'. Nov 12 20:47:48.488171 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:47:48.488201 systemd[1]: Reloading... Nov 12 20:47:48.575964 zram_generator::config[1198]: No configuration found. Nov 12 20:47:49.004797 ldconfig[1143]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:47:49.087043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:47:49.150609 systemd[1]: Reloading finished in 661 ms. Nov 12 20:47:49.205191 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 20:47:49.206754 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 20:47:49.218444 systemd[1]: Starting ensure-sysext.service... Nov 12 20:47:49.225341 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:47:49.253829 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:47:49.253851 systemd[1]: Reloading... Nov 12 20:47:49.292028 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:47:49.294599 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 20:47:49.298273 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:47:49.298730 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Nov 12 20:47:49.298821 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Nov 12 20:47:49.307531 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:47:49.307551 systemd-tmpfiles[1246]: Skipping /boot Nov 12 20:47:49.351721 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:47:49.351742 systemd-tmpfiles[1246]: Skipping /boot Nov 12 20:47:49.427961 zram_generator::config[1273]: No configuration found. Nov 12 20:47:49.634129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:47:49.706600 systemd[1]: Reloading finished in 452 ms. 
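The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean extension images onto /usr, which is why the daemon reload that follows sees units such as docker.socket. The merge state can be inspected after boot; a sketch:

    systemd-sysext status    # which hierarchies currently have extensions merged
    ls -l /etc/extensions    # the kubernetes.raw link that Ignition wrote earlier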
Nov 12 20:47:49.725934 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 20:47:49.736991 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:47:49.753374 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:47:49.764358 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 20:47:49.769282 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:47:49.785330 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:47:49.800323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:47:49.813301 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:47:49.833411 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:47:49.838132 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:49.838428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:47:49.848010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:47:49.859873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:47:49.873345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:47:49.874246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:47:49.874419 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:49.876816 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:47:49.879339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:47:49.887100 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:47:49.897584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:49.899343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:47:49.908381 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:47:49.911173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:47:49.926108 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 20:47:49.927487 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:49.930170 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:47:49.937003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:47:49.937250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:47:49.943749 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:47:49.944902 systemd-udevd[1329]: Using default interface naming scheme 'v255'. 
Nov 12 20:47:49.945913 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:47:49.951354 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:47:49.954371 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:47:49.963909 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:47:49.966196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:47:49.967224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:47:49.968997 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:47:49.978893 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:49.979284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:47:49.988577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:47:49.996079 augenrules[1354]: No rules Nov 12 20:47:49.998194 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:47:50.006178 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:47:50.007093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:47:50.007171 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:47:50.007219 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:47:50.007242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:50.007606 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:47:50.010325 systemd[1]: Finished ensure-sysext.service. Nov 12 20:47:50.011389 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:47:50.030290 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 20:47:50.042500 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:47:50.046067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:47:50.049849 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:47:50.052044 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:47:50.053023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:47:50.054237 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:47:50.054447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:47:50.068378 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:47:50.070383 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 12 20:47:50.200745 systemd-resolved[1322]: Positive Trust Anchors: Nov 12 20:47:50.203118 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:47:50.203350 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:47:50.209878 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 20:47:50.211503 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 20:47:50.219850 systemd-resolved[1322]: Using system hostname 'ci-4081.2.0-d-ef96bd2a01'. Nov 12 20:47:50.223476 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:47:50.226085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:47:50.237853 systemd-networkd[1378]: lo: Link UP Nov 12 20:47:50.237869 systemd-networkd[1378]: lo: Gained carrier Nov 12 20:47:50.242780 systemd-networkd[1378]: Enumeration completed Nov 12 20:47:50.242917 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:47:50.244059 systemd[1]: Reached target network.target - Network. Nov 12 20:47:50.255354 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 20:47:50.283975 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1371) Nov 12 20:47:50.308965 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1371) Nov 12 20:47:50.311160 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 12 20:47:50.311780 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:50.311945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:47:50.322270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:47:50.326992 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:47:50.337282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:47:50.338235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:47:50.338291 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:47:50.338312 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:50.349288 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 12 20:47:50.352036 systemd-networkd[1378]: eth0: Configuring with /run/systemd/network/10-d6:3d:43:39:f5:bd.network. 
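eth0 above is matched by a generated unit named for its MAC address, /run/systemd/network/10-d6:3d:43:39:f5:bd.network. The commands below read back what networkd and resolved actually applied; the commented [Match]/[Network] body is only a guess at the file's minimal shape, not its real contents:

    cat /run/systemd/network/10-d6:3d:43:39:f5:bd.network
    # plausibly something like:
    #   [Match]
    #   MACAddress=d6:3d:43:39:f5:bd
    #   [Network]
    #   DHCP=yes
    networkctl status eth0    # addresses, gateway, and the .network file in effect
    resolvectl status         # per-link DNS and the DNSSEC trust anchors listed above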
Nov 12 20:47:50.354744 systemd-networkd[1378]: eth0: Link UP Nov 12 20:47:50.354757 systemd-networkd[1378]: eth0: Gained carrier Nov 12 20:47:50.363334 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Nov 12 20:47:50.367763 kernel: ISO 9660 Extensions: RRIP_1991A Nov 12 20:47:50.369406 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 12 20:47:50.374616 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:47:50.374993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:47:50.377949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381) Nov 12 20:47:50.385418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:47:50.385715 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:47:50.405515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:47:50.406567 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:47:50.420278 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:47:50.420409 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:47:50.422103 systemd-networkd[1378]: eth1: Configuring with /run/systemd/network/10-6e:38:23:85:a3:8c.network. Nov 12 20:47:50.424265 systemd-networkd[1378]: eth1: Link UP Nov 12 20:47:50.424279 systemd-networkd[1378]: eth1: Gained carrier Nov 12 20:47:50.424649 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Nov 12 20:47:50.428528 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Nov 12 20:47:50.430377 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Nov 12 20:47:50.453800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 20:47:50.464579 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:47:50.500649 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 20:47:50.501999 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 12 20:47:50.512132 kernel: ACPI: button: Power Button [PWRF] Nov 12 20:47:50.514675 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 12 20:47:50.592976 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 12 20:47:50.602520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 12 20:47:50.620956 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:47:50.652018 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 12 20:47:50.652123 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 12 20:47:50.663289 kernel: Console: switching to colour dummy device 80x25 Nov 12 20:47:50.663400 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 12 20:47:50.663421 kernel: [drm] features: -context_init Nov 12 20:47:50.668974 kernel: [drm] number of scanouts: 1 Nov 12 20:47:50.669092 kernel: [drm] number of cap sets: 0 Nov 12 20:47:50.673123 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Nov 12 20:47:50.686272 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 12 20:47:50.686405 kernel: Console: switching to colour frame buffer device 128x48 Nov 12 20:47:50.701033 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 12 20:47:50.706753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:47:50.708594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:50.725462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:47:50.734914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:47:50.736362 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:50.748311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:47:50.872446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:50.944193 kernel: EDAC MC: Ver: 3.0.0 Nov 12 20:47:50.981992 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:47:50.987424 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 20:47:51.017033 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:47:51.056223 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:47:51.057485 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:47:51.058597 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:47:51.059142 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 20:47:51.059481 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:47:51.060069 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:47:51.060654 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:47:51.061092 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 20:47:51.061495 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 20:47:51.061695 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:47:51.062008 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:47:51.063704 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:47:51.076029 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:47:51.100874 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Nov 12 20:47:51.107311 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:47:51.121431 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:47:51.122394 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:47:51.123781 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:47:51.127222 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:47:51.127272 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:47:51.132967 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:47:51.137269 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:47:51.154196 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 12 20:47:51.178382 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:47:51.184464 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:47:51.198288 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 20:47:51.200716 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 20:47:51.207285 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:47:51.224072 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 20:47:51.227891 jq[1442]: false Nov 12 20:47:51.236264 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:47:51.242392 coreos-metadata[1438]: Nov 12 20:47:51.242 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 12 20:47:51.248210 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 20:47:51.256370 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 20:47:51.265013 coreos-metadata[1438]: Nov 12 20:47:51.258 INFO Fetch successful Nov 12 20:47:51.259158 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 20:47:51.259791 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:47:51.269220 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 20:47:51.273437 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 20:47:51.280064 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 20:47:51.290976 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 20:47:51.292078 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 20:47:51.313637 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 20:47:51.313916 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 20:47:51.338546 dbus-daemon[1439]: [system] SELinux support is enabled Nov 12 20:47:51.342371 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
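The coreos-metadata fetch above hits the DigitalOcean link-local metadata endpoint. A rough manual equivalent, for reference (jq is optional pretty-printing):

    # Same endpoint the Flatcar Metadata Agent fetches above.
    curl -s http://169.254.169.254/metadata/v1.json | jq .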
Nov 12 20:47:51.365732 extend-filesystems[1443]: Found loop4 Nov 12 20:47:51.365732 extend-filesystems[1443]: Found loop5 Nov 12 20:47:51.365732 extend-filesystems[1443]: Found loop6 Nov 12 20:47:51.365732 extend-filesystems[1443]: Found loop7 Nov 12 20:47:51.365732 extend-filesystems[1443]: Found vda Nov 12 20:47:51.365732 extend-filesystems[1443]: Found vda1 Nov 12 20:47:51.365732 extend-filesystems[1443]: Found vda2 Nov 12 20:47:51.365732 extend-filesystems[1443]: Found vda3 Nov 12 20:47:51.365732 extend-filesystems[1443]: Found usr Nov 12 20:47:51.365732 extend-filesystems[1443]: Found vda4 Nov 12 20:47:51.365732 extend-filesystems[1443]: Found vda6 Nov 12 20:47:51.365732 extend-filesystems[1443]: Found vda7 Nov 12 20:47:51.365732 extend-filesystems[1443]: Found vda9 Nov 12 20:47:51.365732 extend-filesystems[1443]: Checking size of /dev/vda9 Nov 12 20:47:51.388464 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 20:47:51.515338 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 12 20:47:51.515519 extend-filesystems[1443]: Resized partition /dev/vda9 Nov 12 20:47:51.521845 tar[1453]: linux-amd64/helm Nov 12 20:47:51.528740 jq[1451]: true Nov 12 20:47:51.528869 update_engine[1450]: I20241112 20:47:51.451216 1450 main.cc:92] Flatcar Update Engine starting Nov 12 20:47:51.528869 update_engine[1450]: I20241112 20:47:51.491308 1450 update_check_scheduler.cc:74] Next update check in 9m46s Nov 12 20:47:51.390857 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:47:51.531255 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024) Nov 12 20:47:51.393126 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:47:51.546382 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1371) Nov 12 20:47:51.409976 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:47:51.410588 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 12 20:47:51.551133 jq[1473]: true Nov 12 20:47:51.410622 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 20:47:51.460363 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 20:47:51.460667 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 20:47:51.483796 systemd[1]: Started update-engine.service - Update Engine. Nov 12 20:47:51.502166 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:47:51.508595 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 12 20:47:51.511210 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:47:51.698594 systemd-logind[1449]: New seat seat0. 
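At this point extend-filesystems has already grown partition /dev/vda9 and kicked off an online ext4 resize (the kernel line shows the filesystem growing from 553472 to 15121403 blocks); the resize2fs run completes a little further down. The hand-run equivalent, as a sketch:

    # Grow a mounted ext4 filesystem to fill its (already enlarged) partition.
    # With no explicit size argument, resize2fs uses the whole device.
    resize2fs /dev/vda9
    df -h /   # verify the root filesystem now reports the new size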
Nov 12 20:47:51.706494 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 20:47:51.706535 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:47:51.707118 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 20:47:51.762317 systemd-networkd[1378]: eth0: Gained IPv6LL Nov 12 20:47:51.762947 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Nov 12 20:47:51.792708 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:47:51.798148 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:47:51.813464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:47:51.824562 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:47:51.908239 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:47:51.912500 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 20:47:51.928201 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 12 20:47:51.935430 systemd[1]: Starting sshkeys.service... Nov 12 20:47:51.977048 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:47:51.987096 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:47:52.008712 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 12 20:47:52.032702 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 12 20:47:52.058978 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 20:47:52.058978 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 12 20:47:52.058978 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 12 20:47:52.075767 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Nov 12 20:47:52.075767 extend-filesystems[1443]: Found vdb Nov 12 20:47:52.059607 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:47:52.064698 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:47:52.236279 coreos-metadata[1524]: Nov 12 20:47:52.236 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 12 20:47:52.255060 containerd[1455]: time="2024-11-12T20:47:52.254889751Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:47:52.260422 coreos-metadata[1524]: Nov 12 20:47:52.260 INFO Fetch successful Nov 12 20:47:52.287356 unknown[1524]: wrote ssh authorized keys file for user: core Nov 12 20:47:52.340031 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:47:52.356743 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:47:52.356956 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 12 20:47:52.367455 systemd[1]: Finished sshkeys.service. Nov 12 20:47:52.395421 containerd[1455]: time="2024-11-12T20:47:52.395331352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:52.400322 containerd[1455]: time="2024-11-12T20:47:52.400128647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:52.400322 containerd[1455]: time="2024-11-12T20:47:52.400313692Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:47:52.400471 containerd[1455]: time="2024-11-12T20:47:52.400340371Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:47:52.403048 containerd[1455]: time="2024-11-12T20:47:52.401037611Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:47:52.403048 containerd[1455]: time="2024-11-12T20:47:52.402257325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:52.403048 containerd[1455]: time="2024-11-12T20:47:52.402459970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:52.403048 containerd[1455]: time="2024-11-12T20:47:52.402498715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:52.403119 systemd-networkd[1378]: eth1: Gained IPv6LL Nov 12 20:47:52.403739 containerd[1455]: time="2024-11-12T20:47:52.403692610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:52.403835 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Nov 12 20:47:52.405737 containerd[1455]: time="2024-11-12T20:47:52.403918034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:52.405737 containerd[1455]: time="2024-11-12T20:47:52.404002011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:52.405737 containerd[1455]: time="2024-11-12T20:47:52.404025958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:52.405737 containerd[1455]: time="2024-11-12T20:47:52.404191613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:52.405737 containerd[1455]: time="2024-11-12T20:47:52.404529353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:52.405737 containerd[1455]: time="2024-11-12T20:47:52.404800860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:52.405737 containerd[1455]: time="2024-11-12T20:47:52.404829112Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:47:52.405737 containerd[1455]: time="2024-11-12T20:47:52.405082632Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 12 20:47:52.405737 containerd[1455]: time="2024-11-12T20:47:52.405175201Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:47:52.419031 containerd[1455]: time="2024-11-12T20:47:52.418966712Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:47:52.419946 containerd[1455]: time="2024-11-12T20:47:52.419448622Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:47:52.420505 containerd[1455]: time="2024-11-12T20:47:52.419904757Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:47:52.420771 containerd[1455]: time="2024-11-12T20:47:52.420697970Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.421123494Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.421383341Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.421780113Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.421961111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.421980547Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.421994553Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.422027195Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.422043573Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.422056590Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.422072552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.422087952Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.422102007Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.422114294Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 20:47:52.423457 containerd[1455]: time="2024-11-12T20:47:52.422126492Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422150614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422167885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422183174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422197458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422210817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422250591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422266230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422280580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422295853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422309647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422321737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422334549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422347350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422363043Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:47:52.423988 containerd[1455]: time="2024-11-12T20:47:52.422412927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.424473 containerd[1455]: time="2024-11-12T20:47:52.422437066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.424473 containerd[1455]: time="2024-11-12T20:47:52.422453730Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:47:52.424473 containerd[1455]: time="2024-11-12T20:47:52.422547202Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:47:52.424473 containerd[1455]: time="2024-11-12T20:47:52.422572791Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:47:52.424473 containerd[1455]: time="2024-11-12T20:47:52.422585247Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:47:52.424473 containerd[1455]: time="2024-11-12T20:47:52.422597971Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:47:52.424473 containerd[1455]: time="2024-11-12T20:47:52.422607839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.424473 containerd[1455]: time="2024-11-12T20:47:52.422621188Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:47:52.424473 containerd[1455]: time="2024-11-12T20:47:52.422632540Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:47:52.424473 containerd[1455]: time="2024-11-12T20:47:52.422644122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 20:47:52.429080 containerd[1455]: time="2024-11-12T20:47:52.428246916Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:47:52.430025 containerd[1455]: time="2024-11-12T20:47:52.428705794Z" level=info msg="Connect containerd service" Nov 12 20:47:52.430576 containerd[1455]: time="2024-11-12T20:47:52.430288752Z" level=info msg="using legacy CRI server" Nov 12 20:47:52.430576 containerd[1455]: time="2024-11-12T20:47:52.430494487Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:47:52.434161 containerd[1455]: time="2024-11-12T20:47:52.433475010Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:47:52.440093 containerd[1455]: time="2024-11-12T20:47:52.440022302Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:47:52.441649 containerd[1455]: time="2024-11-12T20:47:52.441538485Z" level=info msg="Start subscribing containerd event" Nov 12 20:47:52.441884 containerd[1455]: time="2024-11-12T20:47:52.441857038Z" level=info msg="Start recovering state" Nov 12 20:47:52.443186 containerd[1455]: time="2024-11-12T20:47:52.442074230Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:47:52.443186 containerd[1455]: time="2024-11-12T20:47:52.442132147Z" level=info msg="Start event monitor" Nov 12 20:47:52.443186 containerd[1455]: time="2024-11-12T20:47:52.442164337Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:47:52.443186 containerd[1455]: time="2024-11-12T20:47:52.442227432Z" level=info msg="Start snapshots syncer" Nov 12 20:47:52.443186 containerd[1455]: time="2024-11-12T20:47:52.442251104Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:47:52.443186 containerd[1455]: time="2024-11-12T20:47:52.442263713Z" level=info msg="Start streaming server" Nov 12 20:47:52.444092 containerd[1455]: time="2024-11-12T20:47:52.444054139Z" level=info msg="containerd successfully booted in 0.200805s" Nov 12 20:47:52.444266 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:47:52.472628 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:47:52.487518 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:47:52.531759 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:47:52.532624 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:47:52.547574 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:47:52.606253 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:47:52.618134 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 20:47:52.626472 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:47:52.627304 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:47:53.005107 tar[1453]: linux-amd64/LICENSE Nov 12 20:47:53.006369 tar[1453]: linux-amd64/README.md Nov 12 20:47:53.026597 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:47:53.569205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
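containerd reports booting in about 0.2s and kubelet is then started against it. Quick post-start checks against the socket logged above, as a sketch (ctr ships with containerd; these commands are not part of the log):

    ctr --address /run/containerd/containerd.sock version
    # Mirrors the plugin-loading list containerd logged during startup.
    ctr --address /run/containerd/containerd.sock plugins ls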
Nov 12 20:47:53.572899 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:47:53.577169 systemd[1]: Startup finished in 1.821s (kernel) + 7.239s (initrd) + 7.488s (userspace) = 16.549s. Nov 12 20:47:53.581773 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:47:54.877346 kubelet[1561]: E1112 20:47:54.877182 1561 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:47:54.881246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:47:54.881509 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:47:54.882308 systemd[1]: kubelet.service: Consumed 1.701s CPU time. Nov 12 20:47:55.231149 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:47:55.239659 systemd[1]: Started sshd@0-143.198.78.43:22-139.178.68.195:43852.service - OpenSSH per-connection server daemon (139.178.68.195:43852). Nov 12 20:47:55.378010 sshd[1574]: Accepted publickey for core from 139.178.68.195 port 43852 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:47:55.386254 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:55.415142 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:47:55.423557 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:47:55.431020 systemd-logind[1449]: New session 1 of user core. Nov 12 20:47:55.453083 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:47:55.462821 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:47:55.477616 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:47:55.716003 systemd[1578]: Queued start job for default target default.target. Nov 12 20:47:55.737729 systemd[1578]: Created slice app.slice - User Application Slice. Nov 12 20:47:55.738578 systemd[1578]: Reached target paths.target - Paths. Nov 12 20:47:55.738617 systemd[1578]: Reached target timers.target - Timers. Nov 12 20:47:55.741659 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:47:55.790434 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:47:55.790651 systemd[1578]: Reached target sockets.target - Sockets. Nov 12 20:47:55.790675 systemd[1578]: Reached target basic.target - Basic System. Nov 12 20:47:55.790756 systemd[1578]: Reached target default.target - Main User Target. Nov 12 20:47:55.790802 systemd[1578]: Startup finished in 301ms. Nov 12 20:47:55.791307 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:47:55.796158 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:47:55.905332 systemd[1]: Started sshd@1-143.198.78.43:22-139.178.68.195:43854.service - OpenSSH per-connection server daemon (139.178.68.195:43854). 
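The kubelet failure above is expected on a node that has not been joined yet: /var/lib/kubelet/config.yaml does not exist, so kubelet exits and systemd's restart logic retries until provisioning writes the file. Purely to illustrate the shape of that file, here is a minimal sketch of the kind of KubeletConfiguration kubeadm normally generates; the field values are assumptions, not recovered from this log.

    # Sketch only: a minimal KubeletConfiguration of the kind normally
    # written to /var/lib/kubelet/config.yaml during node provisioning.
    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches SystemdCgroup:true in the containerd config above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF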
Nov 12 20:47:56.006375 sshd[1589]: Accepted publickey for core from 139.178.68.195 port 43854 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:47:56.010337 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:56.034007 systemd-logind[1449]: New session 2 of user core. Nov 12 20:47:56.045398 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:47:56.128583 sshd[1589]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:56.138157 systemd[1]: sshd@1-143.198.78.43:22-139.178.68.195:43854.service: Deactivated successfully. Nov 12 20:47:56.140774 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:47:56.146046 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:47:56.153653 systemd[1]: Started sshd@2-143.198.78.43:22-139.178.68.195:43866.service - OpenSSH per-connection server daemon (139.178.68.195:43866). Nov 12 20:47:56.160378 systemd-logind[1449]: Removed session 2. Nov 12 20:47:56.236484 sshd[1596]: Accepted publickey for core from 139.178.68.195 port 43866 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:47:56.239059 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:56.256868 systemd-logind[1449]: New session 3 of user core. Nov 12 20:47:56.266656 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:47:56.343589 sshd[1596]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:56.362080 systemd[1]: sshd@2-143.198.78.43:22-139.178.68.195:43866.service: Deactivated successfully. Nov 12 20:47:56.365191 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:47:56.369280 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:47:56.375857 systemd[1]: Started sshd@3-143.198.78.43:22-139.178.68.195:43880.service - OpenSSH per-connection server daemon (139.178.68.195:43880). Nov 12 20:47:56.378503 systemd-logind[1449]: Removed session 3. Nov 12 20:47:56.440245 sshd[1603]: Accepted publickey for core from 139.178.68.195 port 43880 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:47:56.442707 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:56.451336 systemd-logind[1449]: New session 4 of user core. Nov 12 20:47:56.460349 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:47:56.579395 sshd[1603]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:56.593068 systemd[1]: sshd@3-143.198.78.43:22-139.178.68.195:43880.service: Deactivated successfully. Nov 12 20:47:56.595628 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:47:56.598252 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:47:56.617575 systemd[1]: Started sshd@4-143.198.78.43:22-139.178.68.195:43882.service - OpenSSH per-connection server daemon (139.178.68.195:43882). Nov 12 20:47:56.634252 systemd-logind[1449]: Removed session 4. Nov 12 20:47:56.698972 sshd[1610]: Accepted publickey for core from 139.178.68.195 port 43882 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:47:56.703820 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:56.714792 systemd-logind[1449]: New session 5 of user core. Nov 12 20:47:56.722311 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 12 20:47:56.819404 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:47:56.820405 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:47:56.866017 sudo[1613]: pam_unix(sudo:session): session closed for user root Nov 12 20:47:56.873971 sshd[1610]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:56.891889 systemd[1]: sshd@4-143.198.78.43:22-139.178.68.195:43882.service: Deactivated successfully. Nov 12 20:47:56.895108 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:47:56.897895 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:47:56.910796 systemd[1]: Started sshd@5-143.198.78.43:22-139.178.68.195:43898.service - OpenSSH per-connection server daemon (139.178.68.195:43898). Nov 12 20:47:56.914259 systemd-logind[1449]: Removed session 5. Nov 12 20:47:57.001381 sshd[1618]: Accepted publickey for core from 139.178.68.195 port 43898 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:47:57.005409 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:57.023231 systemd-logind[1449]: New session 6 of user core. Nov 12 20:47:57.027366 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:47:57.113609 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:47:57.114152 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:47:57.121802 sudo[1622]: pam_unix(sudo:session): session closed for user root Nov 12 20:47:57.133758 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:47:57.138320 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:47:57.170358 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:47:57.178388 auditctl[1625]: No rules Nov 12 20:47:57.178986 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:47:57.179313 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:47:57.187563 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:47:57.267022 augenrules[1643]: No rules Nov 12 20:47:57.268658 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:47:57.272402 sudo[1621]: pam_unix(sudo:session): session closed for user root Nov 12 20:47:57.282035 sshd[1618]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:57.298646 systemd[1]: sshd@5-143.198.78.43:22-139.178.68.195:43898.service: Deactivated successfully. Nov 12 20:47:57.306323 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:47:57.309206 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:47:57.315548 systemd[1]: Started sshd@6-143.198.78.43:22-139.178.68.195:43914.service - OpenSSH per-connection server daemon (139.178.68.195:43914). Nov 12 20:47:57.318213 systemd-logind[1449]: Removed session 6. Nov 12 20:47:57.377046 sshd[1651]: Accepted publickey for core from 139.178.68.195 port 43914 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:47:57.379065 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:57.391526 systemd-logind[1449]: New session 7 of user core. 
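Session 6 above removes two audit rule fragments and restarts audit-rules, after which both auditctl and augenrules report "No rules". The same cycle done by hand, as a sketch:

    # Remove the rule fragments, recompile /etc/audit/rules.d/*.rules,
    # load the result, and list the (now empty) kernel rule set.
    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load
    auditctl -l   # prints "No rules" when the set is empty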
Nov 12 20:47:57.395291 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:47:57.473913 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:47:57.474782 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:47:58.430846 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:47:58.437118 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:47:59.503859 dockerd[1671]: time="2024-11-12T20:47:59.502430029Z" level=info msg="Starting up" Nov 12 20:47:59.758291 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport235428160-merged.mount: Deactivated successfully. Nov 12 20:47:59.837750 dockerd[1671]: time="2024-11-12T20:47:59.834130399Z" level=info msg="Loading containers: start." Nov 12 20:48:00.208722 kernel: Initializing XFRM netlink socket Nov 12 20:48:00.317741 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Nov 12 20:48:01.601879 systemd-resolved[1322]: Clock change detected. Flushing caches. Nov 12 20:48:01.603090 systemd-timesyncd[1367]: Contacted time server 208.113.130.146:123 (2.flatcar.pool.ntp.org). Nov 12 20:48:01.604645 systemd-timesyncd[1367]: Initial clock synchronization to Tue 2024-11-12 20:48:01.601495 UTC. Nov 12 20:48:01.720914 systemd-networkd[1378]: docker0: Link UP Nov 12 20:48:01.795863 dockerd[1671]: time="2024-11-12T20:48:01.788189962Z" level=info msg="Loading containers: done." Nov 12 20:48:01.847559 dockerd[1671]: time="2024-11-12T20:48:01.846966014Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:48:01.847559 dockerd[1671]: time="2024-11-12T20:48:01.847123973Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:48:01.847559 dockerd[1671]: time="2024-11-12T20:48:01.847280690Z" level=info msg="Daemon has completed initialization" Nov 12 20:48:01.971528 dockerd[1671]: time="2024-11-12T20:48:01.969931496Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:48:01.970950 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:48:04.031108 containerd[1455]: time="2024-11-12T20:48:04.030123336Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:48:04.838246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254239740.mount: Deactivated successfully. Nov 12 20:48:06.200739 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:48:06.209287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:06.439244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
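dockerd comes up on the overlay2 storage driver (with a warning that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR) and begins serving on /run/docker.sock. Sanity checks one might run afterwards (not part of the log):

    docker info --format '{{.Driver}}'              # expect: overlay2
    docker version --format '{{.Server.Version}}'   # expect: 26.1.0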
Nov 12 20:48:06.443100 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:48:06.566277 kubelet[1883]: E1112 20:48:06.565560 1883 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:48:06.573912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:48:06.574075 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:48:07.666905 containerd[1455]: time="2024-11-12T20:48:07.666800488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:07.668954 containerd[1455]: time="2024-11-12T20:48:07.668875968Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 20:48:07.670195 systemd-resolved[1322]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 12 20:48:07.671254 containerd[1455]: time="2024-11-12T20:48:07.671195135Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:07.675554 containerd[1455]: time="2024-11-12T20:48:07.675454665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:07.677469 containerd[1455]: time="2024-11-12T20:48:07.677280140Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 3.647084255s" Nov 12 20:48:07.677718 containerd[1455]: time="2024-11-12T20:48:07.677687905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:48:07.726587 containerd[1455]: time="2024-11-12T20:48:07.726524382Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:48:10.602221 containerd[1455]: time="2024-11-12T20:48:10.601094165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:10.603929 containerd[1455]: time="2024-11-12T20:48:10.603800366Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 20:48:10.607403 containerd[1455]: time="2024-11-12T20:48:10.607313886Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:10.615066 containerd[1455]: time="2024-11-12T20:48:10.614960919Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:10.620471 containerd[1455]: time="2024-11-12T20:48:10.620390427Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 2.893788546s" Nov 12 20:48:10.620988 containerd[1455]: time="2024-11-12T20:48:10.620810750Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:48:10.663238 containerd[1455]: time="2024-11-12T20:48:10.663172468Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 20:48:10.742171 systemd-resolved[1322]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 12 20:48:12.499038 containerd[1455]: time="2024-11-12T20:48:12.498938392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:12.501755 containerd[1455]: time="2024-11-12T20:48:12.501642286Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 20:48:12.505889 containerd[1455]: time="2024-11-12T20:48:12.503110551Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:12.513114 containerd[1455]: time="2024-11-12T20:48:12.513041477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:12.516090 containerd[1455]: time="2024-11-12T20:48:12.515757539Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.850022325s" Nov 12 20:48:12.516090 containerd[1455]: time="2024-11-12T20:48:12.515854142Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 20:48:12.574920 containerd[1455]: time="2024-11-12T20:48:12.574829206Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 20:48:13.815247 systemd-resolved[1322]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Nov 12 20:48:14.288138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425442653.mount: Deactivated successfully. 
Nov 12 20:48:15.099945 containerd[1455]: time="2024-11-12T20:48:15.098778507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:15.102771 containerd[1455]: time="2024-11-12T20:48:15.102676641Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816" Nov 12 20:48:15.106354 containerd[1455]: time="2024-11-12T20:48:15.106050123Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:15.111748 containerd[1455]: time="2024-11-12T20:48:15.111667095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:15.112897 containerd[1455]: time="2024-11-12T20:48:15.112767500Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 2.537847814s" Nov 12 20:48:15.112897 containerd[1455]: time="2024-11-12T20:48:15.112820795Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 20:48:15.181999 containerd[1455]: time="2024-11-12T20:48:15.181876080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:48:15.808464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156266238.mount: Deactivated successfully. Nov 12 20:48:16.702167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:48:16.712479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:16.950386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:16.962832 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:48:17.087403 kubelet[1978]: E1112 20:48:17.087120 1978 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:48:17.091630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:48:17.091991 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 20:48:17.561540 containerd[1455]: time="2024-11-12T20:48:17.561427910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:17.563374 containerd[1455]: time="2024-11-12T20:48:17.563271382Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:48:17.564706 containerd[1455]: time="2024-11-12T20:48:17.564611174Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:17.569049 containerd[1455]: time="2024-11-12T20:48:17.568951700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:17.570991 containerd[1455]: time="2024-11-12T20:48:17.570921130Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.388980063s" Nov 12 20:48:17.570991 containerd[1455]: time="2024-11-12T20:48:17.570989858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:48:17.617109 containerd[1455]: time="2024-11-12T20:48:17.617033862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:48:18.205051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676701474.mount: Deactivated successfully. 
Nov 12 20:48:18.237813 containerd[1455]: time="2024-11-12T20:48:18.236341838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:18.243063 containerd[1455]: time="2024-11-12T20:48:18.241521478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 20:48:18.250869 containerd[1455]: time="2024-11-12T20:48:18.250762122Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:18.258898 containerd[1455]: time="2024-11-12T20:48:18.258805857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 641.712899ms" Nov 12 20:48:18.258898 containerd[1455]: time="2024-11-12T20:48:18.258892248Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:48:18.259207 containerd[1455]: time="2024-11-12T20:48:18.259055072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:18.300932 containerd[1455]: time="2024-11-12T20:48:18.300863053Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 20:48:19.005891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870017493.mount: Deactivated successfully. Nov 12 20:48:23.534273 containerd[1455]: time="2024-11-12T20:48:23.533689089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:23.538099 containerd[1455]: time="2024-11-12T20:48:23.538008955Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 20:48:23.542902 containerd[1455]: time="2024-11-12T20:48:23.540467184Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:23.551811 containerd[1455]: time="2024-11-12T20:48:23.551735927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:23.554680 containerd[1455]: time="2024-11-12T20:48:23.554603523Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.253681335s" Nov 12 20:48:23.554680 containerd[1455]: time="2024-11-12T20:48:23.554670573Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:48:27.200770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Nov 12 20:48:27.244450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:27.560223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:27.572511 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:48:27.684345 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:27.690041 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:48:27.690876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:27.720584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:27.775515 systemd[1]: Reloading requested from client PID 2126 ('systemctl') (unit session-7.scope)... Nov 12 20:48:27.775537 systemd[1]: Reloading... Nov 12 20:48:27.990566 zram_generator::config[2168]: No configuration found. Nov 12 20:48:28.262159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:48:28.390079 systemd[1]: Reloading finished in 614 ms. Nov 12 20:48:28.503127 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:48:28.503289 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:48:28.503951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:28.527741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:28.872436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:28.879351 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:48:28.970524 kubelet[2220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:48:28.972887 kubelet[2220]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:48:28.972887 kubelet[2220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
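The kubelet that comes up after this reload runs with deprecated command-line flags, and KUBELET_EXTRA_ARGS is referenced but unset. One conventional way to supply it is a systemd drop-in; the flag value below is an assumption for illustration, not taken from this log.

    # Sketch only: provide KUBELET_EXTRA_ARGS via a systemd drop-in.
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/20-extra-args.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=143.198.78.43"
    EOF
    systemctl daemon-reload && systemctl restart kubelet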
Nov 12 20:48:28.972887 kubelet[2220]: I1112 20:48:28.971414 2220 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:48:29.375552 kubelet[2220]: I1112 20:48:29.375491 2220 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:48:29.376882 kubelet[2220]: I1112 20:48:29.375790 2220 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:48:29.376882 kubelet[2220]: I1112 20:48:29.376227 2220 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:48:29.415243 kubelet[2220]: I1112 20:48:29.415188 2220 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:48:29.421629 kubelet[2220]: E1112 20:48:29.421500 2220 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.78.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:29.497449 kubelet[2220]: I1112 20:48:29.496520 2220 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:48:29.507639 kubelet[2220]: I1112 20:48:29.499370 2220 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:48:29.511310 kubelet[2220]: I1112 20:48:29.510170 2220 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:48:29.511310 kubelet[2220]: I1112 20:48:29.510254 2220 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:48:29.511310 kubelet[2220]: I1112 20:48:29.510273 2220 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:48:29.511310 kubelet[2220]: I1112 20:48:29.510496 2220 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:48:29.511310 kubelet[2220]: I1112 20:48:29.510710 2220 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:48:29.511310 
kubelet[2220]: I1112 20:48:29.510742 2220 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:48:29.511310 kubelet[2220]: I1112 20:48:29.510784 2220 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:48:29.511926 kubelet[2220]: I1112 20:48:29.510811 2220 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:48:29.515876 kubelet[2220]: W1112 20:48:29.514456 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://143.198.78.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-d-ef96bd2a01&limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:29.515876 kubelet[2220]: E1112 20:48:29.514563 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.78.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-d-ef96bd2a01&limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:29.515876 kubelet[2220]: W1112 20:48:29.515030 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://143.198.78.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:29.515876 kubelet[2220]: E1112 20:48:29.515098 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.78.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:29.516490 kubelet[2220]: I1112 20:48:29.516456 2220 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:48:29.523562 kubelet[2220]: I1112 20:48:29.521906 2220 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:48:29.523562 kubelet[2220]: W1112 20:48:29.522050 2220 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
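The nodeConfig dump just above includes the kubelet's hard-eviction thresholds: evict when imagefs.available < 15%, memory.available < 100Mi, nodefs.available < 10%, or nodefs.inodesFree < 5%. A sketch that decodes a trimmed copy of that JSON; the struct below is shaped to match the logged field names, not necessarily the kubelet's internal Go types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed mirror of the HardEvictionThresholds block logged above.
type threshold struct {
	Signal   string
	Operator string
	Value    struct {
		Quantity   *string // e.g. "100Mi"; null when a percentage is used
		Percentage float64
	}
}

func main() {
	raw := `[
	 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
	 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]`

	var ths []threshold
	if err := json.Unmarshal([]byte(raw), &ths); err != nil {
		panic(err)
	}
	for _, t := range ths {
		if t.Value.Quantity != nil {
			fmt.Printf("evict when %s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("evict when %s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```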
Nov 12 20:48:29.523562 kubelet[2220]: I1112 20:48:29.522988 2220 server.go:1256] "Started kubelet" Nov 12 20:48:29.530895 kubelet[2220]: I1112 20:48:29.529995 2220 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:48:29.531723 kubelet[2220]: I1112 20:48:29.531687 2220 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:48:29.538906 kubelet[2220]: I1112 20:48:29.538818 2220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:48:29.543797 kubelet[2220]: I1112 20:48:29.542293 2220 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:48:29.543797 kubelet[2220]: I1112 20:48:29.542651 2220 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:48:29.556518 kubelet[2220]: I1112 20:48:29.556453 2220 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:48:29.576096 kubelet[2220]: I1112 20:48:29.575952 2220 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:48:29.578899 kubelet[2220]: E1112 20:48:29.578270 2220 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.78.43:6443/api/v1/namespaces/default/events\": dial tcp 143.198.78.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.0-d-ef96bd2a01.1807539b9a6a606c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-d-ef96bd2a01,UID:ci-4081.2.0-d-ef96bd2a01,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-d-ef96bd2a01,},FirstTimestamp:2024-11-12 20:48:29.522944108 +0000 UTC m=+0.637164373,LastTimestamp:2024-11-12 20:48:29.522944108 +0000 UTC m=+0.637164373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-d-ef96bd2a01,}" Nov 12 20:48:29.580161 kubelet[2220]: I1112 20:48:29.579806 2220 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:48:29.589025 kubelet[2220]: E1112 20:48:29.586444 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.78.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-d-ef96bd2a01?timeout=10s\": dial tcp 143.198.78.43:6443: connect: connection refused" interval="200ms" Nov 12 20:48:29.589025 kubelet[2220]: I1112 20:48:29.587124 2220 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:48:29.589025 kubelet[2220]: I1112 20:48:29.587274 2220 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:48:29.609007 kubelet[2220]: I1112 20:48:29.598768 2220 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:48:29.610751 kubelet[2220]: W1112 20:48:29.610663 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://143.198.78.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:29.614209 kubelet[2220]: E1112 20:48:29.612086 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get "https://143.198.78.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:29.628039 kubelet[2220]: E1112 20:48:29.626504 2220 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:48:29.629975 kubelet[2220]: I1112 20:48:29.629816 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:48:29.638912 kubelet[2220]: I1112 20:48:29.638604 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:48:29.638912 kubelet[2220]: I1112 20:48:29.638651 2220 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:48:29.638912 kubelet[2220]: I1112 20:48:29.638687 2220 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:48:29.638912 kubelet[2220]: E1112 20:48:29.638778 2220 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:48:29.643336 kubelet[2220]: W1112 20:48:29.643244 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://143.198.78.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:29.643881 kubelet[2220]: E1112 20:48:29.643355 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.78.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:29.646241 kubelet[2220]: I1112 20:48:29.646205 2220 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:48:29.649661 kubelet[2220]: I1112 20:48:29.648990 2220 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:48:29.649661 kubelet[2220]: I1112 20:48:29.649045 2220 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:48:29.656420 kubelet[2220]: I1112 20:48:29.656374 2220 policy_none.go:49] "None policy: Start" Nov 12 20:48:29.658319 kubelet[2220]: I1112 20:48:29.658284 2220 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:48:29.659246 kubelet[2220]: I1112 20:48:29.658805 2220 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.659377 kubelet[2220]: E1112 20:48:29.659346 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.78.43:6443/api/v1/nodes\": dial tcp 143.198.78.43:6443: connect: connection refused" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.659461 kubelet[2220]: I1112 20:48:29.659449 2220 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:48:29.670631 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:48:29.693138 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:48:29.710375 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
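The three "Created slice" entries set up the cgroup hierarchy for pod QoS classes under the systemd cgroup driver; the per-pod slices created shortly after embed each pod's UID with dashes mapped to underscores. A sketch of that naming rule as inferred from the slice names in this log (the handling of the guaranteed class is an assumption, since no guaranteed pod appears here):

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice reconstructs the systemd slice names seen in the log: each QoS
// class gets a kubepods-<qos>.slice parent, and each pod a child slice named
// after its UID with "-" escaped to "_".
func podSlice(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "guaranteed" { // assumed: guaranteed pods sit directly under kubepods.slice
		return "kubepods-pod" + escaped + ".slice"
	}
	return "kubepods-" + qos + "-pod" + escaped + ".slice"
}

func main() {
	fmt.Println(podSlice("burstable", "a892e6424b55606724004bc7715eda8c"))
	fmt.Println(podSlice("besteffort", "26f99706-9926-41b8-aa7b-9c93ddfeb49f"))
}
```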
Nov 12 20:48:29.723640 kubelet[2220]: I1112 20:48:29.723576 2220 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:48:29.724885 kubelet[2220]: I1112 20:48:29.724722 2220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:48:29.728328 kubelet[2220]: E1112 20:48:29.728184 2220 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.0-d-ef96bd2a01\" not found" Nov 12 20:48:29.740065 kubelet[2220]: I1112 20:48:29.739434 2220 topology_manager.go:215] "Topology Admit Handler" podUID="a892e6424b55606724004bc7715eda8c" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.740935 kubelet[2220]: I1112 20:48:29.740827 2220 topology_manager.go:215] "Topology Admit Handler" podUID="3566c0374af167c1f84542f163c09c9c" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.742986 kubelet[2220]: I1112 20:48:29.742949 2220 topology_manager.go:215] "Topology Admit Handler" podUID="b61c0321fafc79ffe0e55f909b115e97" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.760101 systemd[1]: Created slice kubepods-burstable-poda892e6424b55606724004bc7715eda8c.slice - libcontainer container kubepods-burstable-poda892e6424b55606724004bc7715eda8c.slice. Nov 12 20:48:29.771406 systemd[1]: Created slice kubepods-burstable-pod3566c0374af167c1f84542f163c09c9c.slice - libcontainer container kubepods-burstable-pod3566c0374af167c1f84542f163c09c9c.slice. Nov 12 20:48:29.784903 kubelet[2220]: I1112 20:48:29.784358 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3566c0374af167c1f84542f163c09c9c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" (UID: \"3566c0374af167c1f84542f163c09c9c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.784903 kubelet[2220]: I1112 20:48:29.784500 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3566c0374af167c1f84542f163c09c9c-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" (UID: \"3566c0374af167c1f84542f163c09c9c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.784903 kubelet[2220]: I1112 20:48:29.784650 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a892e6424b55606724004bc7715eda8c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-d-ef96bd2a01\" (UID: \"a892e6424b55606724004bc7715eda8c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.784903 kubelet[2220]: I1112 20:48:29.784711 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3566c0374af167c1f84542f163c09c9c-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" (UID: \"3566c0374af167c1f84542f163c09c9c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.785324 kubelet[2220]: I1112 20:48:29.784921 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/3566c0374af167c1f84542f163c09c9c-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" (UID: \"3566c0374af167c1f84542f163c09c9c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.785324 kubelet[2220]: I1112 20:48:29.784969 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3566c0374af167c1f84542f163c09c9c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" (UID: \"3566c0374af167c1f84542f163c09c9c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.785324 kubelet[2220]: I1112 20:48:29.785002 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b61c0321fafc79ffe0e55f909b115e97-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-d-ef96bd2a01\" (UID: \"b61c0321fafc79ffe0e55f909b115e97\") " pod="kube-system/kube-scheduler-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.785324 kubelet[2220]: I1112 20:48:29.785046 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a892e6424b55606724004bc7715eda8c-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-d-ef96bd2a01\" (UID: \"a892e6424b55606724004bc7715eda8c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.785324 kubelet[2220]: I1112 20:48:29.785088 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a892e6424b55606724004bc7715eda8c-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-d-ef96bd2a01\" (UID: \"a892e6424b55606724004bc7715eda8c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.788188 kubelet[2220]: E1112 20:48:29.788135 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.78.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-d-ef96bd2a01?timeout=10s\": dial tcp 143.198.78.43:6443: connect: connection refused" interval="400ms" Nov 12 20:48:29.790034 systemd[1]: Created slice kubepods-burstable-podb61c0321fafc79ffe0e55f909b115e97.slice - libcontainer container kubepods-burstable-podb61c0321fafc79ffe0e55f909b115e97.slice. 
Nov 12 20:48:29.862261 kubelet[2220]: I1112 20:48:29.861541 2220 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:29.862261 kubelet[2220]: E1112 20:48:29.862107 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.78.43:6443/api/v1/nodes\": dial tcp 143.198.78.43:6443: connect: connection refused" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:30.084524 kubelet[2220]: E1112 20:48:30.070219 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:30.084524 kubelet[2220]: E1112 20:48:30.084233 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:30.088075 containerd[1455]: time="2024-11-12T20:48:30.086351429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-d-ef96bd2a01,Uid:a892e6424b55606724004bc7715eda8c,Namespace:kube-system,Attempt:0,}" Nov 12 20:48:30.092974 containerd[1455]: time="2024-11-12T20:48:30.091532626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-d-ef96bd2a01,Uid:3566c0374af167c1f84542f163c09c9c,Namespace:kube-system,Attempt:0,}" Nov 12 20:48:30.094667 kubelet[2220]: E1112 20:48:30.093638 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:30.096422 containerd[1455]: time="2024-11-12T20:48:30.095722041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-d-ef96bd2a01,Uid:b61c0321fafc79ffe0e55f909b115e97,Namespace:kube-system,Attempt:0,}" Nov 12 20:48:30.196747 kubelet[2220]: E1112 20:48:30.196658 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.78.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-d-ef96bd2a01?timeout=10s\": dial tcp 143.198.78.43:6443: connect: connection refused" interval="800ms" Nov 12 20:48:30.268552 kubelet[2220]: I1112 20:48:30.267897 2220 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:30.268552 kubelet[2220]: E1112 20:48:30.268383 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.78.43:6443/api/v1/nodes\": dial tcp 143.198.78.43:6443: connect: connection refused" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:30.725318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953583189.mount: Deactivated successfully. 
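The recurring dns.go:153 "Nameserver limits exceeded" error means the node's resolv.conf lists more nameservers than the classic resolver limit of three (glibc's MAXNS), so the kubelet truncates the list before applying it; note the applied line even carries a duplicate entry. A sketch of that clamping, assuming the three-entry limit:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Classic glibc resolvers honor at most three nameserver entries, which is
// why the kubelet clamps the list and logs the warning seen above.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s); applying: %s\n",
			len(servers)-maxNameservers, strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Printf("applying: %s\n", strings.Join(servers, " "))
	}
}
```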
Nov 12 20:48:30.737493 containerd[1455]: time="2024-11-12T20:48:30.737336185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:48:30.745045 containerd[1455]: time="2024-11-12T20:48:30.744946870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:48:30.748876 containerd[1455]: time="2024-11-12T20:48:30.747217503Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:48:30.748876 containerd[1455]: time="2024-11-12T20:48:30.748612443Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:48:30.750545 containerd[1455]: time="2024-11-12T20:48:30.750457795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:48:30.759314 containerd[1455]: time="2024-11-12T20:48:30.758929331Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:48:30.761971 containerd[1455]: time="2024-11-12T20:48:30.761886037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:48:30.762280 containerd[1455]: time="2024-11-12T20:48:30.762088897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:48:30.763066 containerd[1455]: time="2024-11-12T20:48:30.763006475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 667.167056ms" Nov 12 20:48:30.767547 containerd[1455]: time="2024-11-12T20:48:30.767469876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 680.979909ms" Nov 12 20:48:30.774643 containerd[1455]: time="2024-11-12T20:48:30.774568941Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 682.882626ms" Nov 12 20:48:30.851803 kubelet[2220]: W1112 20:48:30.851478 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://143.198.78.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-d-ef96bd2a01&limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection 
refused Nov 12 20:48:30.851803 kubelet[2220]: E1112 20:48:30.851582 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.78.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-d-ef96bd2a01&limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:30.925875 kubelet[2220]: W1112 20:48:30.916365 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://143.198.78.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:30.925875 kubelet[2220]: E1112 20:48:30.916478 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.78.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:31.011416 kubelet[2220]: E1112 20:48:31.005321 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.78.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-d-ef96bd2a01?timeout=10s\": dial tcp 143.198.78.43:6443: connect: connection refused" interval="1.6s" Nov 12 20:48:31.079042 containerd[1455]: time="2024-11-12T20:48:31.078553982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:31.079042 containerd[1455]: time="2024-11-12T20:48:31.078675681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:31.079042 containerd[1455]: time="2024-11-12T20:48:31.078700789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:31.079042 containerd[1455]: time="2024-11-12T20:48:31.078872378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:31.085007 kubelet[2220]: I1112 20:48:31.082605 2220 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:31.085007 kubelet[2220]: E1112 20:48:31.083442 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.78.43:6443/api/v1/nodes\": dial tcp 143.198.78.43:6443: connect: connection refused" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:31.090545 containerd[1455]: time="2024-11-12T20:48:31.090276776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:31.090545 containerd[1455]: time="2024-11-12T20:48:31.090404672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:31.091379 containerd[1455]: time="2024-11-12T20:48:31.090687233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:31.091379 containerd[1455]: time="2024-11-12T20:48:31.090998753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:31.101994 containerd[1455]: time="2024-11-12T20:48:31.101780674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:31.102283 containerd[1455]: time="2024-11-12T20:48:31.102235927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:31.102415 containerd[1455]: time="2024-11-12T20:48:31.102387237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:31.103588 containerd[1455]: time="2024-11-12T20:48:31.103500349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:31.143216 systemd[1]: Started cri-containerd-5feb1240d5c72ab70b181807812969562fdf868f4890ff496259ca7032eb2768.scope - libcontainer container 5feb1240d5c72ab70b181807812969562fdf868f4890ff496259ca7032eb2768. Nov 12 20:48:31.172642 systemd[1]: Started cri-containerd-e95164131576eef6ec09628ca9787f23d26ef29814f4d6e61f375e2fa6105084.scope - libcontainer container e95164131576eef6ec09628ca9787f23d26ef29814f4d6e61f375e2fa6105084. Nov 12 20:48:31.200223 kubelet[2220]: W1112 20:48:31.200141 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://143.198.78.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:31.200223 kubelet[2220]: E1112 20:48:31.200219 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.78.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:31.202359 kubelet[2220]: W1112 20:48:31.202308 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://143.198.78.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:31.202359 kubelet[2220]: E1112 20:48:31.202357 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.78.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:31.203625 systemd[1]: Started cri-containerd-ac863f7e17096cb6f74c96c4218771b39912bf4318661a7c0b6c88a8e520debc.scope - libcontainer container ac863f7e17096cb6f74c96c4218771b39912bf4318661a7c0b6c88a8e520debc. 
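All of the client-go reflector failures here and above share one cause: every request targets https://143.198.78.43:6443, but the kube-apiserver is itself one of the static pods this kubelet is in the middle of starting, so nothing is listening yet. The watches simply keep retrying until the socket opens. A hypothetical probe loop illustrating that wait-until-listening pattern (waitForAPIServer is an illustrative name, not kubelet code):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer dials the endpoint until a TCP connection is accepted,
// mirroring the "connection refused" retry churn visible in the log.
func waitForAPIServer(addr string, interval time.Duration) {
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is accepting connections")
			return
		}
		fmt.Printf("still down: %v; retrying in %s\n", err, interval)
		time.Sleep(interval)
	}
}

func main() {
	waitForAPIServer("143.198.78.43:6443", time.Second)
}
```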
Nov 12 20:48:31.303822 containerd[1455]: time="2024-11-12T20:48:31.298232428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-d-ef96bd2a01,Uid:3566c0374af167c1f84542f163c09c9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e95164131576eef6ec09628ca9787f23d26ef29814f4d6e61f375e2fa6105084\"" Nov 12 20:48:31.304029 kubelet[2220]: E1112 20:48:31.303355 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:31.315753 containerd[1455]: time="2024-11-12T20:48:31.315370411Z" level=info msg="CreateContainer within sandbox \"e95164131576eef6ec09628ca9787f23d26ef29814f4d6e61f375e2fa6105084\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:48:31.370419 containerd[1455]: time="2024-11-12T20:48:31.369912709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-d-ef96bd2a01,Uid:a892e6424b55606724004bc7715eda8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac863f7e17096cb6f74c96c4218771b39912bf4318661a7c0b6c88a8e520debc\"" Nov 12 20:48:31.373166 kubelet[2220]: E1112 20:48:31.373025 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:31.377891 containerd[1455]: time="2024-11-12T20:48:31.377785821Z" level=info msg="CreateContainer within sandbox \"ac863f7e17096cb6f74c96c4218771b39912bf4318661a7c0b6c88a8e520debc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:48:31.388652 containerd[1455]: time="2024-11-12T20:48:31.388592289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-d-ef96bd2a01,Uid:b61c0321fafc79ffe0e55f909b115e97,Namespace:kube-system,Attempt:0,} returns sandbox id \"5feb1240d5c72ab70b181807812969562fdf868f4890ff496259ca7032eb2768\"" Nov 12 20:48:31.391601 kubelet[2220]: E1112 20:48:31.391559 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:31.401501 containerd[1455]: time="2024-11-12T20:48:31.401412981Z" level=info msg="CreateContainer within sandbox \"5feb1240d5c72ab70b181807812969562fdf868f4890ff496259ca7032eb2768\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:48:31.403144 containerd[1455]: time="2024-11-12T20:48:31.402258229Z" level=info msg="CreateContainer within sandbox \"e95164131576eef6ec09628ca9787f23d26ef29814f4d6e61f375e2fa6105084\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3e7e2cc49098e8153c3ee85372dbf7a88b3dd3f7f0e6c9bf9b9c3decf224cbd0\"" Nov 12 20:48:31.406686 containerd[1455]: time="2024-11-12T20:48:31.406598650Z" level=info msg="StartContainer for \"3e7e2cc49098e8153c3ee85372dbf7a88b3dd3f7f0e6c9bf9b9c3decf224cbd0\"" Nov 12 20:48:31.420738 containerd[1455]: time="2024-11-12T20:48:31.420641243Z" level=info msg="CreateContainer within sandbox \"ac863f7e17096cb6f74c96c4218771b39912bf4318661a7c0b6c88a8e520debc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e60b7edb070f5cabde05c9e11ad282a19dcd4dd1e079b00bebb795f3e011658d\"" Nov 12 20:48:31.421612 containerd[1455]: time="2024-11-12T20:48:31.421554138Z" level=info msg="StartContainer for 
\"e60b7edb070f5cabde05c9e11ad282a19dcd4dd1e079b00bebb795f3e011658d\"" Nov 12 20:48:31.442791 containerd[1455]: time="2024-11-12T20:48:31.442526946Z" level=info msg="CreateContainer within sandbox \"5feb1240d5c72ab70b181807812969562fdf868f4890ff496259ca7032eb2768\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5b029404d96ec53f47f6eb5c1e5a3f204915f33ec3636ab6f8b9996ed3480c60\"" Nov 12 20:48:31.446576 containerd[1455]: time="2024-11-12T20:48:31.446530708Z" level=info msg="StartContainer for \"5b029404d96ec53f47f6eb5c1e5a3f204915f33ec3636ab6f8b9996ed3480c60\"" Nov 12 20:48:31.482501 systemd[1]: Started cri-containerd-3e7e2cc49098e8153c3ee85372dbf7a88b3dd3f7f0e6c9bf9b9c3decf224cbd0.scope - libcontainer container 3e7e2cc49098e8153c3ee85372dbf7a88b3dd3f7f0e6c9bf9b9c3decf224cbd0. Nov 12 20:48:31.513228 kubelet[2220]: E1112 20:48:31.512647 2220 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.78.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.78.43:6443: connect: connection refused Nov 12 20:48:31.518181 systemd[1]: Started cri-containerd-e60b7edb070f5cabde05c9e11ad282a19dcd4dd1e079b00bebb795f3e011658d.scope - libcontainer container e60b7edb070f5cabde05c9e11ad282a19dcd4dd1e079b00bebb795f3e011658d. Nov 12 20:48:31.531734 systemd[1]: Started cri-containerd-5b029404d96ec53f47f6eb5c1e5a3f204915f33ec3636ab6f8b9996ed3480c60.scope - libcontainer container 5b029404d96ec53f47f6eb5c1e5a3f204915f33ec3636ab6f8b9996ed3480c60. Nov 12 20:48:31.681244 containerd[1455]: time="2024-11-12T20:48:31.677458243Z" level=info msg="StartContainer for \"e60b7edb070f5cabde05c9e11ad282a19dcd4dd1e079b00bebb795f3e011658d\" returns successfully" Nov 12 20:48:31.681244 containerd[1455]: time="2024-11-12T20:48:31.677711955Z" level=info msg="StartContainer for \"3e7e2cc49098e8153c3ee85372dbf7a88b3dd3f7f0e6c9bf9b9c3decf224cbd0\" returns successfully" Nov 12 20:48:31.681244 containerd[1455]: time="2024-11-12T20:48:31.677808910Z" level=info msg="StartContainer for \"5b029404d96ec53f47f6eb5c1e5a3f204915f33ec3636ab6f8b9996ed3480c60\" returns successfully" Nov 12 20:48:31.708072 kubelet[2220]: E1112 20:48:31.702781 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:31.711889 kubelet[2220]: E1112 20:48:31.709128 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:32.685459 kubelet[2220]: I1112 20:48:32.685367 2220 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:32.704800 kubelet[2220]: E1112 20:48:32.704759 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:32.708820 kubelet[2220]: E1112 20:48:32.708768 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:32.715961 kubelet[2220]: E1112 20:48:32.715909 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:33.719143 kubelet[2220]: E1112 20:48:33.719082 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:33.719143 kubelet[2220]: E1112 20:48:33.719139 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:33.720538 kubelet[2220]: E1112 20:48:33.720346 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:34.476543 kubelet[2220]: I1112 20:48:34.476457 2220 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:34.518577 kubelet[2220]: I1112 20:48:34.518503 2220 apiserver.go:52] "Watching apiserver" Nov 12 20:48:34.581591 kubelet[2220]: I1112 20:48:34.581521 2220 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:48:34.592404 kubelet[2220]: E1112 20:48:34.592333 2220 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.2.0-d-ef96bd2a01.1807539b9a6a606c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-d-ef96bd2a01,UID:ci-4081.2.0-d-ef96bd2a01,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-d-ef96bd2a01,},FirstTimestamp:2024-11-12 20:48:29.522944108 +0000 UTC m=+0.637164373,LastTimestamp:2024-11-12 20:48:29.522944108 +0000 UTC m=+0.637164373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-d-ef96bd2a01,}" Nov 12 20:48:34.602069 kubelet[2220]: E1112 20:48:34.602018 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Nov 12 20:48:35.089368 kubelet[2220]: E1112 20:48:35.089305 2220 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:35.090093 kubelet[2220]: E1112 20:48:35.090056 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:37.707051 systemd[1]: Reloading requested from client PID 2496 ('systemctl') (unit session-7.scope)... Nov 12 20:48:37.707566 systemd[1]: Reloading... Nov 12 20:48:37.934893 zram_generator::config[2535]: No configuration found. Nov 12 20:48:38.226543 update_engine[1450]: I20241112 20:48:38.225355 1450 update_attempter.cc:509] Updating boot flags... Nov 12 20:48:38.256017 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
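Stepping back, the entries between 20:48:30 and 20:48:32 above show the complete CRI start sequence for each control-plane static pod: RunPodSandbox returns a sandbox id (the pause container), CreateContainer returns a container id inside that sandbox, and StartContainer reports success. A hypothetical, trimmed-down sketch of that three-step flow; runtimeService and fakeRuntime are illustrative stand-ins, not the real CRI API:

```go
package main

import "fmt"

// Minimal stand-in for the sandbox -> create -> start sequence in the log.
type runtimeService interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, pod string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}
func (f *fakeRuntime) CreateContainer(sb, pod string) (string, error) {
	return pod + "-ctr", nil
}
func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Printf("StartContainer for %q returns successfully\n", id)
	return nil
}

func main() {
	var rt runtimeService = &fakeRuntime{}
	for _, pod := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
		sb, _ := rt.RunPodSandbox(pod)
		ctr, _ := rt.CreateContainer(sb, pod)
		_ = rt.StartContainer(ctr)
	}
}
```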
Nov 12 20:48:38.324947 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2582) Nov 12 20:48:38.396925 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2581) Nov 12 20:48:38.483147 systemd[1]: Reloading finished in 774 ms. Nov 12 20:48:38.503994 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2581) Nov 12 20:48:38.649638 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:38.654775 kubelet[2220]: I1112 20:48:38.654721 2220 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:48:38.681358 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:48:38.681688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:38.681777 systemd[1]: kubelet.service: Consumed 1.141s CPU time, 108.9M memory peak, 0B memory swap peak. Nov 12 20:48:38.692121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:38.866280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:38.880494 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:48:39.030303 kubelet[2601]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:48:39.033915 kubelet[2601]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:48:39.033915 kubelet[2601]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:48:39.033915 kubelet[2601]: I1112 20:48:39.033141 2601 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:48:39.044463 kubelet[2601]: I1112 20:48:39.044395 2601 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:48:39.044463 kubelet[2601]: I1112 20:48:39.044446 2601 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:48:39.046058 kubelet[2601]: I1112 20:48:39.045971 2601 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:48:39.052984 kubelet[2601]: I1112 20:48:39.052930 2601 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:48:39.066061 kubelet[2601]: I1112 20:48:39.065627 2601 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:48:39.101679 kubelet[2601]: I1112 20:48:39.100748 2601 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:48:39.104387 kubelet[2601]: I1112 20:48:39.104294 2601 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:48:39.105814 kubelet[2601]: I1112 20:48:39.105759 2601 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:48:39.106732 kubelet[2601]: I1112 20:48:39.106177 2601 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:48:39.106732 kubelet[2601]: I1112 20:48:39.106218 2601 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:48:39.106732 kubelet[2601]: I1112 20:48:39.106305 2601 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:48:39.106732 kubelet[2601]: I1112 20:48:39.106502 2601 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:48:39.106732 kubelet[2601]: I1112 20:48:39.106539 2601 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:48:39.106732 kubelet[2601]: I1112 20:48:39.106597 2601 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:48:39.106732 kubelet[2601]: I1112 20:48:39.106626 2601 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:48:39.111507 kubelet[2601]: I1112 20:48:39.111469 2601 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:48:39.113132 kubelet[2601]: I1112 20:48:39.112074 2601 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:48:39.113132 kubelet[2601]: I1112 20:48:39.112762 2601 server.go:1256] "Started kubelet" Nov 12 20:48:39.122004 kubelet[2601]: I1112 20:48:39.119927 2601 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:48:39.144205 kubelet[2601]: I1112 20:48:39.143049 2601 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:48:39.146431 kubelet[2601]: I1112 20:48:39.146389 2601 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:48:39.150732 kubelet[2601]: I1112 20:48:39.150680 2601 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Nov 12 20:48:39.151924 kubelet[2601]: I1112 20:48:39.151387 2601 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:48:39.156788 kubelet[2601]: I1112 20:48:39.156742 2601 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:48:39.162605 kubelet[2601]: I1112 20:48:39.159115 2601 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:48:39.162605 kubelet[2601]: I1112 20:48:39.159331 2601 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:48:39.179000 kubelet[2601]: I1112 20:48:39.178949 2601 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:48:39.179216 kubelet[2601]: I1112 20:48:39.179099 2601 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:48:39.192467 kubelet[2601]: E1112 20:48:39.192378 2601 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:48:39.193950 kubelet[2601]: I1112 20:48:39.193340 2601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:48:39.195906 kubelet[2601]: I1112 20:48:39.195099 2601 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:48:39.197347 kubelet[2601]: I1112 20:48:39.197310 2601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:48:39.197541 kubelet[2601]: I1112 20:48:39.197528 2601 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:48:39.197693 kubelet[2601]: I1112 20:48:39.197676 2601 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:48:39.198288 kubelet[2601]: E1112 20:48:39.197877 2601 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:48:39.259113 kubelet[2601]: E1112 20:48:39.259074 2601 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Nov 12 20:48:39.265038 kubelet[2601]: I1112 20:48:39.264991 2601 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.298460 kubelet[2601]: E1112 20:48:39.298275 2601 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:48:39.312958 kubelet[2601]: I1112 20:48:39.311534 2601 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.312958 kubelet[2601]: I1112 20:48:39.311690 2601 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.375710 kubelet[2601]: I1112 20:48:39.375534 2601 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:48:39.376662 kubelet[2601]: I1112 20:48:39.376600 2601 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:48:39.377383 kubelet[2601]: I1112 20:48:39.376665 2601 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:48:39.377570 kubelet[2601]: I1112 20:48:39.377509 2601 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:48:39.377570 kubelet[2601]: I1112 20:48:39.377546 2601 state_mem.go:96] "Updated CPUSet assignments" 
assignments={} Nov 12 20:48:39.377570 kubelet[2601]: I1112 20:48:39.377558 2601 policy_none.go:49] "None policy: Start" Nov 12 20:48:39.382227 kubelet[2601]: I1112 20:48:39.381677 2601 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:48:39.382227 kubelet[2601]: I1112 20:48:39.381732 2601 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:48:39.383828 kubelet[2601]: I1112 20:48:39.382387 2601 state_mem.go:75] "Updated machine memory state" Nov 12 20:48:39.400536 kubelet[2601]: I1112 20:48:39.395403 2601 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:48:39.400536 kubelet[2601]: I1112 20:48:39.396698 2601 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:48:39.500422 kubelet[2601]: I1112 20:48:39.499377 2601 topology_manager.go:215] "Topology Admit Handler" podUID="a892e6424b55606724004bc7715eda8c" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.500422 kubelet[2601]: I1112 20:48:39.499521 2601 topology_manager.go:215] "Topology Admit Handler" podUID="3566c0374af167c1f84542f163c09c9c" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.500422 kubelet[2601]: I1112 20:48:39.499563 2601 topology_manager.go:215] "Topology Admit Handler" podUID="b61c0321fafc79ffe0e55f909b115e97" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.516712 kubelet[2601]: W1112 20:48:39.515100 2601 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:48:39.516712 kubelet[2601]: W1112 20:48:39.515482 2601 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:48:39.520395 kubelet[2601]: W1112 20:48:39.518085 2601 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:48:39.662419 kubelet[2601]: I1112 20:48:39.661359 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3566c0374af167c1f84542f163c09c9c-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" (UID: \"3566c0374af167c1f84542f163c09c9c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.662419 kubelet[2601]: I1112 20:48:39.661431 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a892e6424b55606724004bc7715eda8c-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-d-ef96bd2a01\" (UID: \"a892e6424b55606724004bc7715eda8c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.662419 kubelet[2601]: I1112 20:48:39.661462 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a892e6424b55606724004bc7715eda8c-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-d-ef96bd2a01\" (UID: \"a892e6424b55606724004bc7715eda8c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.662419 kubelet[2601]: I1112 20:48:39.661507 2601 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a892e6424b55606724004bc7715eda8c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-d-ef96bd2a01\" (UID: \"a892e6424b55606724004bc7715eda8c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.662419 kubelet[2601]: I1112 20:48:39.661541 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3566c0374af167c1f84542f163c09c9c-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" (UID: \"3566c0374af167c1f84542f163c09c9c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.662752 kubelet[2601]: I1112 20:48:39.661578 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3566c0374af167c1f84542f163c09c9c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" (UID: \"3566c0374af167c1f84542f163c09c9c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.662752 kubelet[2601]: I1112 20:48:39.661610 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3566c0374af167c1f84542f163c09c9c-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" (UID: \"3566c0374af167c1f84542f163c09c9c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.662752 kubelet[2601]: I1112 20:48:39.661656 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3566c0374af167c1f84542f163c09c9c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-d-ef96bd2a01\" (UID: \"3566c0374af167c1f84542f163c09c9c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.662752 kubelet[2601]: I1112 20:48:39.661691 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b61c0321fafc79ffe0e55f909b115e97-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-d-ef96bd2a01\" (UID: \"b61c0321fafc79ffe0e55f909b115e97\") " pod="kube-system/kube-scheduler-ci-4081.2.0-d-ef96bd2a01" Nov 12 20:48:39.818663 kubelet[2601]: E1112 20:48:39.817971 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:39.822059 kubelet[2601]: E1112 20:48:39.822003 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:39.822425 kubelet[2601]: E1112 20:48:39.822362 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:48:40.108299 kubelet[2601]: I1112 20:48:40.108227 2601 apiserver.go:52] "Watching apiserver" Nov 12 20:48:40.159723 kubelet[2601]: I1112 20:48:40.159654 2601 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:48:40.307467 
Nov 12 20:48:40.307467 kubelet[2601]: E1112 20:48:40.307407 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:40.309905 kubelet[2601]: E1112 20:48:40.308497 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:40.313899 kubelet[2601]: E1112 20:48:40.312226 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:40.377536 kubelet[2601]: I1112 20:48:40.376493 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.0-d-ef96bd2a01" podStartSLOduration=1.376418712 podStartE2EDuration="1.376418712s" podCreationTimestamp="2024-11-12 20:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:48:40.376327707 +0000 UTC m=+1.477316666" watchObservedRunningTime="2024-11-12 20:48:40.376418712 +0000 UTC m=+1.477407672"
Nov 12 20:48:40.432236 kubelet[2601]: I1112 20:48:40.431789 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.0-d-ef96bd2a01" podStartSLOduration=1.431727857 podStartE2EDuration="1.431727857s" podCreationTimestamp="2024-11-12 20:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:48:40.408515061 +0000 UTC m=+1.509504022" watchObservedRunningTime="2024-11-12 20:48:40.431727857 +0000 UTC m=+1.532716813"
Nov 12 20:48:41.310888 kubelet[2601]: E1112 20:48:41.308635 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:43.121024 kubelet[2601]: E1112 20:48:43.120149 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:43.165934 kubelet[2601]: I1112 20:48:43.165883 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.0-d-ef96bd2a01" podStartSLOduration=4.165788253 podStartE2EDuration="4.165788253s" podCreationTimestamp="2024-11-12 20:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:48:40.432188789 +0000 UTC m=+1.533177741" watchObservedRunningTime="2024-11-12 20:48:43.165788253 +0000 UTC m=+4.266777219"
Nov 12 20:48:43.316944 kubelet[2601]: E1112 20:48:43.314905 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:44.324993 kubelet[2601]: E1112 20:48:44.324770 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:46.263010 sudo[1654]: pam_unix(sudo:session): session closed for user root
Nov 12 20:48:46.272728 sshd[1651]: pam_unix(sshd:session): session closed for user core
Nov 12 20:48:46.285911 systemd[1]: sshd@6-143.198.78.43:22-139.178.68.195:43914.service: Deactivated successfully.
Nov 12 20:48:46.291719 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 20:48:46.292440 systemd[1]: session-7.scope: Consumed 6.972s CPU time, 188.7M memory peak, 0B memory swap peak.
Nov 12 20:48:46.294524 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit.
Nov 12 20:48:46.300823 systemd-logind[1449]: Removed session 7.
Nov 12 20:48:49.266205 kubelet[2601]: E1112 20:48:49.266143 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:49.339953 kubelet[2601]: E1112 20:48:49.338646 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:49.513706 kubelet[2601]: E1112 20:48:49.510837 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:50.343322 kubelet[2601]: E1112 20:48:50.342928 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:51.453795 kubelet[2601]: I1112 20:48:51.453750 2601 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 20:48:51.456514 containerd[1455]: time="2024-11-12T20:48:51.456316007Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 12 20:48:51.457311 kubelet[2601]: I1112 20:48:51.456781 2601 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 20:48:52.227322 kubelet[2601]: I1112 20:48:52.227252 2601 topology_manager.go:215] "Topology Admit Handler" podUID="26f99706-9926-41b8-aa7b-9c93ddfeb49f" podNamespace="kube-system" podName="kube-proxy-lcbfq"
Nov 12 20:48:52.246406 systemd[1]: Created slice kubepods-besteffort-pod26f99706_9926_41b8_aa7b_9c93ddfeb49f.slice - libcontainer container kubepods-besteffort-pod26f99706_9926_41b8_aa7b_9c93ddfeb49f.slice.
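The kuberuntime_manager and kubelet_network entries above record the node's pod CIDR (192.168.0.0/24) being pushed to containerd over CRI, after which containerd waits for a CNI plugin to drop its config. A hedged Go sketch of that call using the published cri-api types; the containerd socket path is assumed from a stock install, and this illustrates the request shape rather than reproducing kubelet code:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Assumed socket path for a default containerd install.
        conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Send the node's pod CIDR to the runtime, the operation behind
        // "Updating runtime config through cri with podcidr" above.
        client := runtimeapi.NewRuntimeServiceClient(conn)
        _, err = client.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("runtime pod CIDR updated")
    }

With no CNI config template set, containerd only records the CIDR and waits for a network add-on (here, Calico via the tigera-operator below) to install the actual CNI configuration.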
Nov 12 20:48:52.358990 kubelet[2601]: I1112 20:48:52.358927 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26f99706-9926-41b8-aa7b-9c93ddfeb49f-lib-modules\") pod \"kube-proxy-lcbfq\" (UID: \"26f99706-9926-41b8-aa7b-9c93ddfeb49f\") " pod="kube-system/kube-proxy-lcbfq"
Nov 12 20:48:52.358990 kubelet[2601]: I1112 20:48:52.359011 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrwkr\" (UniqueName: \"kubernetes.io/projected/26f99706-9926-41b8-aa7b-9c93ddfeb49f-kube-api-access-mrwkr\") pod \"kube-proxy-lcbfq\" (UID: \"26f99706-9926-41b8-aa7b-9c93ddfeb49f\") " pod="kube-system/kube-proxy-lcbfq"
Nov 12 20:48:52.359233 kubelet[2601]: I1112 20:48:52.359050 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26f99706-9926-41b8-aa7b-9c93ddfeb49f-xtables-lock\") pod \"kube-proxy-lcbfq\" (UID: \"26f99706-9926-41b8-aa7b-9c93ddfeb49f\") " pod="kube-system/kube-proxy-lcbfq"
Nov 12 20:48:52.359233 kubelet[2601]: I1112 20:48:52.359102 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/26f99706-9926-41b8-aa7b-9c93ddfeb49f-kube-proxy\") pod \"kube-proxy-lcbfq\" (UID: \"26f99706-9926-41b8-aa7b-9c93ddfeb49f\") " pod="kube-system/kube-proxy-lcbfq"
Nov 12 20:48:52.562136 kubelet[2601]: I1112 20:48:52.561929 2601 topology_manager.go:215] "Topology Admit Handler" podUID="5439df44-aace-462e-94a2-7a04a1897855" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-krhmq"
Nov 12 20:48:52.563270 kubelet[2601]: E1112 20:48:52.562972 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:52.568093 containerd[1455]: time="2024-11-12T20:48:52.567050510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcbfq,Uid:26f99706-9926-41b8-aa7b-9c93ddfeb49f,Namespace:kube-system,Attempt:0,}"
Nov 12 20:48:52.585462 systemd[1]: Created slice kubepods-besteffort-pod5439df44_aace_462e_94a2_7a04a1897855.slice - libcontainer container kubepods-besteffort-pod5439df44_aace_462e_94a2_7a04a1897855.slice.
Nov 12 20:48:52.636994 containerd[1455]: time="2024-11-12T20:48:52.636456125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:48:52.636994 containerd[1455]: time="2024-11-12T20:48:52.636560283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:48:52.636994 containerd[1455]: time="2024-11-12T20:48:52.636578649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:48:52.636994 containerd[1455]: time="2024-11-12T20:48:52.636721986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:48:52.661230 kubelet[2601]: I1112 20:48:52.661173 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zw7f\" (UniqueName: \"kubernetes.io/projected/5439df44-aace-462e-94a2-7a04a1897855-kube-api-access-2zw7f\") pod \"tigera-operator-56b74f76df-krhmq\" (UID: \"5439df44-aace-462e-94a2-7a04a1897855\") " pod="tigera-operator/tigera-operator-56b74f76df-krhmq"
Nov 12 20:48:52.661414 kubelet[2601]: I1112 20:48:52.661260 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5439df44-aace-462e-94a2-7a04a1897855-var-lib-calico\") pod \"tigera-operator-56b74f76df-krhmq\" (UID: \"5439df44-aace-462e-94a2-7a04a1897855\") " pod="tigera-operator/tigera-operator-56b74f76df-krhmq"
Nov 12 20:48:52.674217 systemd[1]: Started cri-containerd-d0a1c1ddb56c55e6c3073cf3a72f19512d0802250c33d8c85f09d73c368af4bb.scope - libcontainer container d0a1c1ddb56c55e6c3073cf3a72f19512d0802250c33d8c85f09d73c368af4bb.
Nov 12 20:48:52.726719 containerd[1455]: time="2024-11-12T20:48:52.726656434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcbfq,Uid:26f99706-9926-41b8-aa7b-9c93ddfeb49f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0a1c1ddb56c55e6c3073cf3a72f19512d0802250c33d8c85f09d73c368af4bb\""
Nov 12 20:48:52.729378 kubelet[2601]: E1112 20:48:52.729219 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:52.740929 containerd[1455]: time="2024-11-12T20:48:52.740799993Z" level=info msg="CreateContainer within sandbox \"d0a1c1ddb56c55e6c3073cf3a72f19512d0802250c33d8c85f09d73c368af4bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 20:48:52.785158 containerd[1455]: time="2024-11-12T20:48:52.785084062Z" level=info msg="CreateContainer within sandbox \"d0a1c1ddb56c55e6c3073cf3a72f19512d0802250c33d8c85f09d73c368af4bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"11262f3412e1c98bd871b57bee4fe5d76e8ecfa8b8f69a24d8e92cebf8c4aee4\""
Nov 12 20:48:52.788924 containerd[1455]: time="2024-11-12T20:48:52.788507401Z" level=info msg="StartContainer for \"11262f3412e1c98bd871b57bee4fe5d76e8ecfa8b8f69a24d8e92cebf8c4aee4\""
Nov 12 20:48:52.841427 systemd[1]: Started cri-containerd-11262f3412e1c98bd871b57bee4fe5d76e8ecfa8b8f69a24d8e92cebf8c4aee4.scope - libcontainer container 11262f3412e1c98bd871b57bee4fe5d76e8ecfa8b8f69a24d8e92cebf8c4aee4.
Nov 12 20:48:52.899216 containerd[1455]: time="2024-11-12T20:48:52.899101168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-krhmq,Uid:5439df44-aace-462e-94a2-7a04a1897855,Namespace:tigera-operator,Attempt:0,}"
Nov 12 20:48:52.902344 containerd[1455]: time="2024-11-12T20:48:52.902184174Z" level=info msg="StartContainer for \"11262f3412e1c98bd871b57bee4fe5d76e8ecfa8b8f69a24d8e92cebf8c4aee4\" returns successfully"
Nov 12 20:48:52.961117 containerd[1455]: time="2024-11-12T20:48:52.960593917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:48:52.961117 containerd[1455]: time="2024-11-12T20:48:52.960709469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:48:52.961117 containerd[1455]: time="2024-11-12T20:48:52.960736164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:48:52.961117 containerd[1455]: time="2024-11-12T20:48:52.960933236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:48:53.013026 systemd[1]: Started cri-containerd-817e2dcb28ca635b6560e641c93c38487b83d2ff9b0a91ae57bd87f74e4babdb.scope - libcontainer container 817e2dcb28ca635b6560e641c93c38487b83d2ff9b0a91ae57bd87f74e4babdb.
Nov 12 20:48:53.121061 containerd[1455]: time="2024-11-12T20:48:53.120400896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-krhmq,Uid:5439df44-aace-462e-94a2-7a04a1897855,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"817e2dcb28ca635b6560e641c93c38487b83d2ff9b0a91ae57bd87f74e4babdb\""
Nov 12 20:48:53.124553 containerd[1455]: time="2024-11-12T20:48:53.124497098Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\""
Nov 12 20:48:53.365958 kubelet[2601]: E1112 20:48:53.364486 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:48:53.387575 kubelet[2601]: I1112 20:48:53.386735 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lcbfq" podStartSLOduration=1.386670358 podStartE2EDuration="1.386670358s" podCreationTimestamp="2024-11-12 20:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:48:53.386316352 +0000 UTC m=+14.487305319" watchObservedRunningTime="2024-11-12 20:48:53.386670358 +0000 UTC m=+14.487659318"
Nov 12 20:48:53.487186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314623830.mount: Deactivated successfully.
Nov 12 20:48:55.801029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1952288820.mount: Deactivated successfully.
Nov 12 20:48:57.354312 containerd[1455]: time="2024-11-12T20:48:57.352768577Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:57.355093 containerd[1455]: time="2024-11-12T20:48:57.355039146Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763359"
Nov 12 20:48:57.355296 containerd[1455]: time="2024-11-12T20:48:57.355186209Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:57.358137 containerd[1455]: time="2024-11-12T20:48:57.358056143Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:57.359613 containerd[1455]: time="2024-11-12T20:48:57.359512822Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 4.234736501s"
Nov 12 20:48:57.359830 containerd[1455]: time="2024-11-12T20:48:57.359801705Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\""
Nov 12 20:48:57.479342 containerd[1455]: time="2024-11-12T20:48:57.479285153Z" level=info msg="CreateContainer within sandbox \"817e2dcb28ca635b6560e641c93c38487b83d2ff9b0a91ae57bd87f74e4babdb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 12 20:48:57.516986 containerd[1455]: time="2024-11-12T20:48:57.516882299Z" level=info msg="CreateContainer within sandbox \"817e2dcb28ca635b6560e641c93c38487b83d2ff9b0a91ae57bd87f74e4babdb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0b2d10ff1e5bc3762c5fc2c8c22a65de85a9c3d7b69d681423fc8b0d469626f4\""
Nov 12 20:48:57.518213 containerd[1455]: time="2024-11-12T20:48:57.518154217Z" level=info msg="StartContainer for \"0b2d10ff1e5bc3762c5fc2c8c22a65de85a9c3d7b69d681423fc8b0d469626f4\""
Nov 12 20:48:57.577003 systemd[1]: Started cri-containerd-0b2d10ff1e5bc3762c5fc2c8c22a65de85a9c3d7b69d681423fc8b0d469626f4.scope - libcontainer container 0b2d10ff1e5bc3762c5fc2c8c22a65de85a9c3d7b69d681423fc8b0d469626f4.
Nov 12 20:48:57.632221 containerd[1455]: time="2024-11-12T20:48:57.631271747Z" level=info msg="StartContainer for \"0b2d10ff1e5bc3762c5fc2c8c22a65de85a9c3d7b69d681423fc8b0d469626f4\" returns successfully"
Nov 12 20:48:58.509814 kubelet[2601]: I1112 20:48:58.509687 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-krhmq" podStartSLOduration=2.271423317 podStartE2EDuration="6.509631447s" podCreationTimestamp="2024-11-12 20:48:52 +0000 UTC" firstStartedPulling="2024-11-12 20:48:53.122978497 +0000 UTC m=+14.223967456" lastFinishedPulling="2024-11-12 20:48:57.361186651 +0000 UTC m=+18.462175586" observedRunningTime="2024-11-12 20:48:58.509542222 +0000 UTC m=+19.610531180" watchObservedRunningTime="2024-11-12 20:48:58.509631447 +0000 UTC m=+19.610620408"
Nov 12 20:49:01.470604 kubelet[2601]: I1112 20:49:01.470531 2601 topology_manager.go:215] "Topology Admit Handler" podUID="941058f4-ba40-4f67-9d6a-d6d602b479fc" podNamespace="calico-system" podName="calico-typha-6d6fcff85c-cgqwd"
Nov 12 20:49:01.501711 systemd[1]: Created slice kubepods-besteffort-pod941058f4_ba40_4f67_9d6a_d6d602b479fc.slice - libcontainer container kubepods-besteffort-pod941058f4_ba40_4f67_9d6a_d6d602b479fc.slice.
Nov 12 20:49:01.594919 kubelet[2601]: I1112 20:49:01.592639 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8xvc\" (UniqueName: \"kubernetes.io/projected/941058f4-ba40-4f67-9d6a-d6d602b479fc-kube-api-access-s8xvc\") pod \"calico-typha-6d6fcff85c-cgqwd\" (UID: \"941058f4-ba40-4f67-9d6a-d6d602b479fc\") " pod="calico-system/calico-typha-6d6fcff85c-cgqwd"
Nov 12 20:49:01.594919 kubelet[2601]: I1112 20:49:01.592727 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/941058f4-ba40-4f67-9d6a-d6d602b479fc-tigera-ca-bundle\") pod \"calico-typha-6d6fcff85c-cgqwd\" (UID: \"941058f4-ba40-4f67-9d6a-d6d602b479fc\") " pod="calico-system/calico-typha-6d6fcff85c-cgqwd"
Nov 12 20:49:01.594919 kubelet[2601]: I1112 20:49:01.592761 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/941058f4-ba40-4f67-9d6a-d6d602b479fc-typha-certs\") pod \"calico-typha-6d6fcff85c-cgqwd\" (UID: \"941058f4-ba40-4f67-9d6a-d6d602b479fc\") " pod="calico-system/calico-typha-6d6fcff85c-cgqwd"
Nov 12 20:49:01.817655 kubelet[2601]: E1112 20:49:01.815698 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:01.828327 containerd[1455]: time="2024-11-12T20:49:01.825241649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d6fcff85c-cgqwd,Uid:941058f4-ba40-4f67-9d6a-d6d602b479fc,Namespace:calico-system,Attempt:0,}"
Nov 12 20:49:01.955500 kubelet[2601]: I1112 20:49:01.954109 2601 topology_manager.go:215] "Topology Admit Handler" podUID="473c4e8c-8197-416d-85b3-2caf7b39c20f" podNamespace="calico-system" podName="calico-node-px675"
Nov 12 20:49:01.963028 containerd[1455]: time="2024-11-12T20:49:01.959621109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:01.963028 containerd[1455]: time="2024-11-12T20:49:01.959752345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:01.963028 containerd[1455]: time="2024-11-12T20:49:01.959775781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:01.963028 containerd[1455]: time="2024-11-12T20:49:01.960808051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:02.009535 systemd[1]: Created slice kubepods-besteffort-pod473c4e8c_8197_416d_85b3_2caf7b39c20f.slice - libcontainer container kubepods-besteffort-pod473c4e8c_8197_416d_85b3_2caf7b39c20f.slice.
Nov 12 20:49:02.063446 systemd[1]: Started cri-containerd-304bf7fa8855748b838db2f319335528079f97960fbe964e57523bc36df0cbfa.scope - libcontainer container 304bf7fa8855748b838db2f319335528079f97960fbe964e57523bc36df0cbfa.
Nov 12 20:49:02.110008 kubelet[2601]: I1112 20:49:02.108240 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/473c4e8c-8197-416d-85b3-2caf7b39c20f-policysync\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675"
Nov 12 20:49:02.110008 kubelet[2601]: I1112 20:49:02.108325 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/473c4e8c-8197-416d-85b3-2caf7b39c20f-cni-log-dir\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675"
Nov 12 20:49:02.110008 kubelet[2601]: I1112 20:49:02.108370 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/473c4e8c-8197-416d-85b3-2caf7b39c20f-var-run-calico\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675"
Nov 12 20:49:02.110008 kubelet[2601]: I1112 20:49:02.108404 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/473c4e8c-8197-416d-85b3-2caf7b39c20f-xtables-lock\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675"
Nov 12 20:49:02.110008 kubelet[2601]: I1112 20:49:02.108439 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/473c4e8c-8197-416d-85b3-2caf7b39c20f-var-lib-calico\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675"
Nov 12 20:49:02.110404 kubelet[2601]: I1112 20:49:02.108492 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkwrj\" (UniqueName: \"kubernetes.io/projected/473c4e8c-8197-416d-85b3-2caf7b39c20f-kube-api-access-rkwrj\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/473c4e8c-8197-416d-85b3-2caf7b39c20f-cni-bin-dir\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675" Nov 12 20:49:02.110404 kubelet[2601]: I1112 20:49:02.108556 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/473c4e8c-8197-416d-85b3-2caf7b39c20f-node-certs\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675" Nov 12 20:49:02.110404 kubelet[2601]: I1112 20:49:02.108591 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/473c4e8c-8197-416d-85b3-2caf7b39c20f-flexvol-driver-host\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675" Nov 12 20:49:02.110404 kubelet[2601]: I1112 20:49:02.108626 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/473c4e8c-8197-416d-85b3-2caf7b39c20f-cni-net-dir\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675" Nov 12 20:49:02.110620 kubelet[2601]: I1112 20:49:02.108662 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/473c4e8c-8197-416d-85b3-2caf7b39c20f-lib-modules\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675" Nov 12 20:49:02.110620 kubelet[2601]: I1112 20:49:02.108693 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/473c4e8c-8197-416d-85b3-2caf7b39c20f-tigera-ca-bundle\") pod \"calico-node-px675\" (UID: \"473c4e8c-8197-416d-85b3-2caf7b39c20f\") " pod="calico-system/calico-node-px675" Nov 12 20:49:02.210989 kubelet[2601]: I1112 20:49:02.208180 2601 topology_manager.go:215] "Topology Admit Handler" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" podNamespace="calico-system" podName="csi-node-driver-8h2nz" Nov 12 20:49:02.212369 kubelet[2601]: E1112 20:49:02.212041 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8h2nz" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" Nov 12 20:49:02.245044 kubelet[2601]: E1112 20:49:02.244805 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.245044 kubelet[2601]: W1112 20:49:02.244962 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.248316 kubelet[2601]: E1112 20:49:02.247905 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:02.255753 kubelet[2601]: E1112 20:49:02.255531 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.255753 kubelet[2601]: W1112 20:49:02.255588 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.255753 kubelet[2601]: E1112 20:49:02.255624 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.260497 kubelet[2601]: E1112 20:49:02.260104 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.260497 kubelet[2601]: W1112 20:49:02.260285 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.261367 kubelet[2601]: E1112 20:49:02.261092 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.264743 kubelet[2601]: E1112 20:49:02.264427 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.264743 kubelet[2601]: W1112 20:49:02.264459 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.268103 kubelet[2601]: E1112 20:49:02.267456 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.268103 kubelet[2601]: E1112 20:49:02.267684 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.268103 kubelet[2601]: W1112 20:49:02.267702 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.268750 kubelet[2601]: E1112 20:49:02.268225 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.270623 kubelet[2601]: E1112 20:49:02.270128 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.270623 kubelet[2601]: W1112 20:49:02.270161 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.270623 kubelet[2601]: E1112 20:49:02.270211 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:02.272369 kubelet[2601]: E1112 20:49:02.272067 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.272369 kubelet[2601]: W1112 20:49:02.272096 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.272369 kubelet[2601]: E1112 20:49:02.272236 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.275493 kubelet[2601]: E1112 20:49:02.274387 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.275493 kubelet[2601]: W1112 20:49:02.274423 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.275493 kubelet[2601]: E1112 20:49:02.274455 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.276305 kubelet[2601]: E1112 20:49:02.276032 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.276305 kubelet[2601]: W1112 20:49:02.276063 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.276305 kubelet[2601]: E1112 20:49:02.276095 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.277651 kubelet[2601]: E1112 20:49:02.277590 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.277651 kubelet[2601]: W1112 20:49:02.277618 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.278096 kubelet[2601]: E1112 20:49:02.277922 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.278786 kubelet[2601]: E1112 20:49:02.278418 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.278786 kubelet[2601]: W1112 20:49:02.278672 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.278786 kubelet[2601]: E1112 20:49:02.278709 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:02.280508 kubelet[2601]: E1112 20:49:02.280218 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.280508 kubelet[2601]: W1112 20:49:02.280254 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.280508 kubelet[2601]: E1112 20:49:02.280285 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.281384 kubelet[2601]: E1112 20:49:02.281164 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.281384 kubelet[2601]: W1112 20:49:02.281191 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.281384 kubelet[2601]: E1112 20:49:02.281222 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.282079 kubelet[2601]: E1112 20:49:02.281902 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.282079 kubelet[2601]: W1112 20:49:02.281924 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.282079 kubelet[2601]: E1112 20:49:02.281950 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.282470 kubelet[2601]: E1112 20:49:02.282258 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.282470 kubelet[2601]: W1112 20:49:02.282270 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.282470 kubelet[2601]: E1112 20:49:02.282289 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.284739 kubelet[2601]: E1112 20:49:02.284407 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.284739 kubelet[2601]: W1112 20:49:02.284441 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.284739 kubelet[2601]: E1112 20:49:02.284481 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:02.286512 kubelet[2601]: E1112 20:49:02.286012 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.286512 kubelet[2601]: W1112 20:49:02.286043 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.286512 kubelet[2601]: E1112 20:49:02.286079 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.288041 kubelet[2601]: E1112 20:49:02.287126 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.288041 kubelet[2601]: W1112 20:49:02.287206 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.288041 kubelet[2601]: E1112 20:49:02.287904 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.288658 kubelet[2601]: E1112 20:49:02.288633 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.289020 kubelet[2601]: W1112 20:49:02.288867 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.289020 kubelet[2601]: E1112 20:49:02.288908 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.289584 kubelet[2601]: E1112 20:49:02.289469 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.289584 kubelet[2601]: W1112 20:49:02.289489 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.289584 kubelet[2601]: E1112 20:49:02.289514 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.290561 kubelet[2601]: E1112 20:49:02.290236 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.290561 kubelet[2601]: W1112 20:49:02.290256 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.290561 kubelet[2601]: E1112 20:49:02.290282 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 12 20:49:02.291811 kubelet[2601]: E1112 20:49:02.291605 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.291811 kubelet[2601]: W1112 20:49:02.291627 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.291811 kubelet[2601]: E1112 20:49:02.291656 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:49:02.292522 kubelet[2601]: E1112 20:49:02.292369 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.292522 kubelet[2601]: W1112 20:49:02.292390 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.292522 kubelet[2601]: E1112 20:49:02.292418 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:49:02.319514 kubelet[2601]: E1112 20:49:02.317559 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.323236 kubelet[2601]: W1112 20:49:02.319460 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.323236 kubelet[2601]: E1112 20:49:02.321050 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:49:02.323236 kubelet[2601]: I1112 20:49:02.322927 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cfc23e74-373d-4561-98be-87343fb2a0fb-kubelet-dir\") pod \"csi-node-driver-8h2nz\" (UID: \"cfc23e74-373d-4561-98be-87343fb2a0fb\") " pod="calico-system/csi-node-driver-8h2nz"
Nov 12 20:49:02.324398 kubelet[2601]: E1112 20:49:02.324056 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.324398 kubelet[2601]: W1112 20:49:02.324113 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.324398 kubelet[2601]: E1112 20:49:02.324192 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:49:02.326138 kubelet[2601]: E1112 20:49:02.326079 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.326595 kubelet[2601]: W1112 20:49:02.326212 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.326983 kubelet[2601]: E1112 20:49:02.326730 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:49:02.328649 kubelet[2601]: E1112 20:49:02.327556 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.328649 kubelet[2601]: W1112 20:49:02.327588 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.330201 kubelet[2601]: E1112 20:49:02.328984 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:49:02.330201 kubelet[2601]: I1112 20:49:02.329862 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cfc23e74-373d-4561-98be-87343fb2a0fb-registration-dir\") pod \"csi-node-driver-8h2nz\" (UID: \"cfc23e74-373d-4561-98be-87343fb2a0fb\") " pod="calico-system/csi-node-driver-8h2nz"
Nov 12 20:49:02.333090 kubelet[2601]: E1112 20:49:02.332341 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:02.334480 containerd[1455]: time="2024-11-12T20:49:02.334421747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-px675,Uid:473c4e8c-8197-416d-85b3-2caf7b39c20f,Namespace:calico-system,Attempt:0,}"
Nov 12 20:49:02.335834 kubelet[2601]: E1112 20:49:02.334812 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.335834 kubelet[2601]: W1112 20:49:02.334895 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.335834 kubelet[2601]: E1112 20:49:02.334937 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:49:02.335834 kubelet[2601]: I1112 20:49:02.335001 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cfc23e74-373d-4561-98be-87343fb2a0fb-varrun\") pod \"csi-node-driver-8h2nz\" (UID: \"cfc23e74-373d-4561-98be-87343fb2a0fb\") " pod="calico-system/csi-node-driver-8h2nz"
Nov 12 20:49:02.338992 kubelet[2601]: E1112 20:49:02.338683 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.338992 kubelet[2601]: W1112 20:49:02.338733 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.338992 kubelet[2601]: E1112 20:49:02.338772 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:49:02.338992 kubelet[2601]: I1112 20:49:02.338920 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cfc23e74-373d-4561-98be-87343fb2a0fb-socket-dir\") pod \"csi-node-driver-8h2nz\" (UID: \"cfc23e74-373d-4561-98be-87343fb2a0fb\") " pod="calico-system/csi-node-driver-8h2nz"
Nov 12 20:49:02.342335 kubelet[2601]: E1112 20:49:02.341604 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.343210 kubelet[2601]: W1112 20:49:02.342043 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.343472 kubelet[2601]: E1112 20:49:02.343259 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:49:02.347775 kubelet[2601]: E1112 20:49:02.344456 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.347775 kubelet[2601]: W1112 20:49:02.347542 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.347775 kubelet[2601]: E1112 20:49:02.347676 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:49:02.350809 kubelet[2601]: E1112 20:49:02.350635 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:49:02.350809 kubelet[2601]: W1112 20:49:02.350671 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:49:02.351693 kubelet[2601]: E1112 20:49:02.351048 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Nov 12 20:49:02.356816 kubelet[2601]: E1112 20:49:02.355969 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.356816 kubelet[2601]: W1112 20:49:02.355998 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.356816 kubelet[2601]: E1112 20:49:02.356702 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.356816 kubelet[2601]: I1112 20:49:02.356777 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qmb\" (UniqueName: \"kubernetes.io/projected/cfc23e74-373d-4561-98be-87343fb2a0fb-kube-api-access-57qmb\") pod \"csi-node-driver-8h2nz\" (UID: \"cfc23e74-373d-4561-98be-87343fb2a0fb\") " pod="calico-system/csi-node-driver-8h2nz" Nov 12 20:49:02.357588 kubelet[2601]: E1112 20:49:02.357561 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.357588 kubelet[2601]: W1112 20:49:02.357586 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.357810 kubelet[2601]: E1112 20:49:02.357771 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.363997 kubelet[2601]: E1112 20:49:02.360811 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.363997 kubelet[2601]: W1112 20:49:02.360981 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.363997 kubelet[2601]: E1112 20:49:02.361037 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.363997 kubelet[2601]: E1112 20:49:02.362979 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.363997 kubelet[2601]: W1112 20:49:02.363007 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.363997 kubelet[2601]: E1112 20:49:02.363058 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:02.367641 kubelet[2601]: E1112 20:49:02.367591 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.367641 kubelet[2601]: W1112 20:49:02.367630 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.368325 kubelet[2601]: E1112 20:49:02.367668 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.370303 kubelet[2601]: E1112 20:49:02.370110 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.370303 kubelet[2601]: W1112 20:49:02.370145 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.370303 kubelet[2601]: E1112 20:49:02.370186 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.431661 containerd[1455]: time="2024-11-12T20:49:02.430627962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:02.431661 containerd[1455]: time="2024-11-12T20:49:02.430874325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:02.431661 containerd[1455]: time="2024-11-12T20:49:02.430903900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:02.431661 containerd[1455]: time="2024-11-12T20:49:02.431055644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:02.472926 kubelet[2601]: E1112 20:49:02.472445 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.472926 kubelet[2601]: W1112 20:49:02.472503 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.472926 kubelet[2601]: E1112 20:49:02.472541 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.479322 kubelet[2601]: E1112 20:49:02.478212 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.479322 kubelet[2601]: W1112 20:49:02.478697 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.479322 kubelet[2601]: E1112 20:49:02.478776 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:02.480126 kubelet[2601]: E1112 20:49:02.479816 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.480126 kubelet[2601]: W1112 20:49:02.479889 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.480126 kubelet[2601]: E1112 20:49:02.479920 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.481689 kubelet[2601]: E1112 20:49:02.481472 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.481689 kubelet[2601]: W1112 20:49:02.481518 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.481689 kubelet[2601]: E1112 20:49:02.481556 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.482410 kubelet[2601]: E1112 20:49:02.482217 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.482410 kubelet[2601]: W1112 20:49:02.482258 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.484332 kubelet[2601]: E1112 20:49:02.483920 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.484332 kubelet[2601]: E1112 20:49:02.483837 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.484332 kubelet[2601]: W1112 20:49:02.484100 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.484332 kubelet[2601]: E1112 20:49:02.484173 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.486647 kubelet[2601]: E1112 20:49:02.486311 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.486647 kubelet[2601]: W1112 20:49:02.486339 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.485423 systemd[1]: Started cri-containerd-764e70facb1f32c1f567b338a3cc425072b9c41f11c795897bd2c8e61bfb0b79.scope - libcontainer container 764e70facb1f32c1f567b338a3cc425072b9c41f11c795897bd2c8e61bfb0b79. 
Error: unexpected end of JSON input" Nov 12 20:49:02.677047 kubelet[2601]: E1112 20:49:02.676999 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:02.677047 kubelet[2601]: W1112 20:49:02.677033 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:02.677047 kubelet[2601]: E1112 20:49:02.677070 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:02.802715 containerd[1455]: time="2024-11-12T20:49:02.801795270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d6fcff85c-cgqwd,Uid:941058f4-ba40-4f67-9d6a-d6d602b479fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"304bf7fa8855748b838db2f319335528079f97960fbe964e57523bc36df0cbfa\"" Nov 12 20:49:02.807959 kubelet[2601]: E1112 20:49:02.807918 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:02.815996 containerd[1455]: time="2024-11-12T20:49:02.815593590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:49:02.839736 containerd[1455]: time="2024-11-12T20:49:02.839640311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-px675,Uid:473c4e8c-8197-416d-85b3-2caf7b39c20f,Namespace:calico-system,Attempt:0,} returns sandbox id \"764e70facb1f32c1f567b338a3cc425072b9c41f11c795897bd2c8e61bfb0b79\"" Nov 12 20:49:02.842997 kubelet[2601]: E1112 20:49:02.842949 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:04.200022 kubelet[2601]: E1112 20:49:04.199083 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8h2nz" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" Nov 12 20:49:06.035129 containerd[1455]: time="2024-11-12T20:49:06.034197502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:06.038942 containerd[1455]: time="2024-11-12T20:49:06.038586472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:49:06.041528 containerd[1455]: time="2024-11-12T20:49:06.041345731Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:06.043987 containerd[1455]: time="2024-11-12T20:49:06.043936932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:06.047201 containerd[1455]: time="2024-11-12T20:49:06.046880860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id 
\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 3.231184905s" Nov 12 20:49:06.047201 containerd[1455]: time="2024-11-12T20:49:06.046981136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:49:06.050039 containerd[1455]: time="2024-11-12T20:49:06.049565731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:49:06.089910 containerd[1455]: time="2024-11-12T20:49:06.089074121Z" level=info msg="CreateContainer within sandbox \"304bf7fa8855748b838db2f319335528079f97960fbe964e57523bc36df0cbfa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:49:06.114912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017121955.mount: Deactivated successfully. Nov 12 20:49:06.146306 containerd[1455]: time="2024-11-12T20:49:06.146119857Z" level=info msg="CreateContainer within sandbox \"304bf7fa8855748b838db2f319335528079f97960fbe964e57523bc36df0cbfa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fa1385a09d7590057b918c36eb7f6414e922f7bb81cb2d0c767bc217dc1f7eae\"" Nov 12 20:49:06.151687 containerd[1455]: time="2024-11-12T20:49:06.150042216Z" level=info msg="StartContainer for \"fa1385a09d7590057b918c36eb7f6414e922f7bb81cb2d0c767bc217dc1f7eae\"" Nov 12 20:49:06.198812 kubelet[2601]: E1112 20:49:06.198752 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8h2nz" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" Nov 12 20:49:06.231651 systemd[1]: Started cri-containerd-fa1385a09d7590057b918c36eb7f6414e922f7bb81cb2d0c767bc217dc1f7eae.scope - libcontainer container fa1385a09d7590057b918c36eb7f6414e922f7bb81cb2d0c767bc217dc1f7eae. Nov 12 20:49:06.361187 containerd[1455]: time="2024-11-12T20:49:06.360911038Z" level=info msg="StartContainer for \"fa1385a09d7590057b918c36eb7f6414e922f7bb81cb2d0c767bc217dc1f7eae\" returns successfully" Nov 12 20:49:06.550905 kubelet[2601]: E1112 20:49:06.549643 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:06.584576 kubelet[2601]: E1112 20:49:06.582297 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.584576 kubelet[2601]: W1112 20:49:06.582335 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.584576 kubelet[2601]: E1112 20:49:06.582394 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:06.584576 kubelet[2601]: E1112 20:49:06.583179 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.584576 kubelet[2601]: W1112 20:49:06.583207 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.584576 kubelet[2601]: E1112 20:49:06.583233 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.584576 kubelet[2601]: E1112 20:49:06.584240 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.584576 kubelet[2601]: W1112 20:49:06.584265 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.584576 kubelet[2601]: E1112 20:49:06.584299 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.585021 kubelet[2601]: E1112 20:49:06.584936 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.585021 kubelet[2601]: W1112 20:49:06.584956 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.585021 kubelet[2601]: E1112 20:49:06.584983 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.589286 kubelet[2601]: E1112 20:49:06.585652 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.589286 kubelet[2601]: W1112 20:49:06.585698 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.589286 kubelet[2601]: E1112 20:49:06.585723 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.589286 kubelet[2601]: E1112 20:49:06.586222 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.589286 kubelet[2601]: W1112 20:49:06.586259 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.589286 kubelet[2601]: E1112 20:49:06.586282 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:06.589286 kubelet[2601]: E1112 20:49:06.586664 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.589286 kubelet[2601]: W1112 20:49:06.586678 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.589286 kubelet[2601]: E1112 20:49:06.586699 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.590694 kubelet[2601]: E1112 20:49:06.590650 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.590694 kubelet[2601]: W1112 20:49:06.590688 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.590919 kubelet[2601]: E1112 20:49:06.590728 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.592150 kubelet[2601]: E1112 20:49:06.592106 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.592150 kubelet[2601]: W1112 20:49:06.592139 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.592323 kubelet[2601]: E1112 20:49:06.592174 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.593619 kubelet[2601]: E1112 20:49:06.593585 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.593619 kubelet[2601]: W1112 20:49:06.593617 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.593782 kubelet[2601]: E1112 20:49:06.593650 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.594981 kubelet[2601]: E1112 20:49:06.594941 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.594981 kubelet[2601]: W1112 20:49:06.594975 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.595272 kubelet[2601]: E1112 20:49:06.595010 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:06.597566 kubelet[2601]: E1112 20:49:06.597186 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.597566 kubelet[2601]: W1112 20:49:06.597466 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.597566 kubelet[2601]: E1112 20:49:06.597512 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.598484 kubelet[2601]: E1112 20:49:06.598235 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.598484 kubelet[2601]: W1112 20:49:06.598259 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.598484 kubelet[2601]: E1112 20:49:06.598286 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.598696 kubelet[2601]: E1112 20:49:06.598619 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.598696 kubelet[2601]: W1112 20:49:06.598634 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.598696 kubelet[2601]: E1112 20:49:06.598665 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.600208 kubelet[2601]: E1112 20:49:06.598924 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.600208 kubelet[2601]: W1112 20:49:06.598933 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.600208 kubelet[2601]: E1112 20:49:06.598983 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.600208 kubelet[2601]: E1112 20:49:06.599240 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.600208 kubelet[2601]: W1112 20:49:06.599248 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.600208 kubelet[2601]: E1112 20:49:06.599260 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:06.600208 kubelet[2601]: E1112 20:49:06.599521 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.600208 kubelet[2601]: W1112 20:49:06.599531 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.600208 kubelet[2601]: E1112 20:49:06.599543 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.600208 kubelet[2601]: E1112 20:49:06.599795 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.600703 kubelet[2601]: W1112 20:49:06.599805 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.600703 kubelet[2601]: E1112 20:49:06.599816 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.600703 kubelet[2601]: E1112 20:49:06.600352 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.600703 kubelet[2601]: W1112 20:49:06.600365 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.600703 kubelet[2601]: E1112 20:49:06.600386 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.601047 kubelet[2601]: E1112 20:49:06.600739 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.601047 kubelet[2601]: W1112 20:49:06.600750 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.601047 kubelet[2601]: E1112 20:49:06.600771 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.603531 kubelet[2601]: E1112 20:49:06.601070 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.603531 kubelet[2601]: W1112 20:49:06.601079 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.603531 kubelet[2601]: E1112 20:49:06.601126 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:06.603531 kubelet[2601]: E1112 20:49:06.601367 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.603531 kubelet[2601]: W1112 20:49:06.601377 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.603531 kubelet[2601]: E1112 20:49:06.601390 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.603531 kubelet[2601]: E1112 20:49:06.601802 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.603531 kubelet[2601]: W1112 20:49:06.601811 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.603531 kubelet[2601]: E1112 20:49:06.601823 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.603531 kubelet[2601]: E1112 20:49:06.602071 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.605243 kubelet[2601]: W1112 20:49:06.602078 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.605243 kubelet[2601]: E1112 20:49:06.602095 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.605243 kubelet[2601]: E1112 20:49:06.602230 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.605243 kubelet[2601]: W1112 20:49:06.602241 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.605243 kubelet[2601]: E1112 20:49:06.602251 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.605243 kubelet[2601]: E1112 20:49:06.602380 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.605243 kubelet[2601]: W1112 20:49:06.602386 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.605243 kubelet[2601]: E1112 20:49:06.602396 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:06.605243 kubelet[2601]: E1112 20:49:06.603041 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.605243 kubelet[2601]: W1112 20:49:06.603052 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.605781 kubelet[2601]: E1112 20:49:06.603066 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.605781 kubelet[2601]: E1112 20:49:06.604111 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.605781 kubelet[2601]: W1112 20:49:06.604132 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.605781 kubelet[2601]: E1112 20:49:06.604157 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.605781 kubelet[2601]: E1112 20:49:06.604437 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.605781 kubelet[2601]: W1112 20:49:06.604447 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.605781 kubelet[2601]: E1112 20:49:06.604537 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.605781 kubelet[2601]: E1112 20:49:06.604805 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.605781 kubelet[2601]: W1112 20:49:06.604822 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.605781 kubelet[2601]: E1112 20:49:06.604853 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.606317 kubelet[2601]: E1112 20:49:06.605090 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.606317 kubelet[2601]: W1112 20:49:06.605098 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.606317 kubelet[2601]: E1112 20:49:06.605108 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:06.606317 kubelet[2601]: E1112 20:49:06.605280 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.606317 kubelet[2601]: W1112 20:49:06.605287 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.606317 kubelet[2601]: E1112 20:49:06.605311 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:06.612003 kubelet[2601]: E1112 20:49:06.610124 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:06.612003 kubelet[2601]: W1112 20:49:06.610165 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:06.612003 kubelet[2601]: E1112 20:49:06.610201 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.556277 kubelet[2601]: I1112 20:49:07.555620 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:07.562357 kubelet[2601]: E1112 20:49:07.561011 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:07.620234 kubelet[2601]: E1112 20:49:07.619877 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.620234 kubelet[2601]: W1112 20:49:07.619914 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.620234 kubelet[2601]: E1112 20:49:07.619948 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.624788 kubelet[2601]: E1112 20:49:07.623733 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.624788 kubelet[2601]: W1112 20:49:07.623773 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.625273 kubelet[2601]: E1112 20:49:07.624739 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:07.632251 kubelet[2601]: E1112 20:49:07.631490 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.632251 kubelet[2601]: W1112 20:49:07.631535 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.632251 kubelet[2601]: E1112 20:49:07.631585 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.633499 kubelet[2601]: E1112 20:49:07.632959 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.633499 kubelet[2601]: W1112 20:49:07.632995 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.633499 kubelet[2601]: E1112 20:49:07.633032 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.634634 kubelet[2601]: E1112 20:49:07.634342 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.634634 kubelet[2601]: W1112 20:49:07.634366 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.634634 kubelet[2601]: E1112 20:49:07.634393 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.635344 kubelet[2601]: E1112 20:49:07.635095 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.635344 kubelet[2601]: W1112 20:49:07.635114 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.635344 kubelet[2601]: E1112 20:49:07.635135 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.636346 kubelet[2601]: E1112 20:49:07.635981 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.636346 kubelet[2601]: W1112 20:49:07.635997 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.636346 kubelet[2601]: E1112 20:49:07.636019 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:07.637460 kubelet[2601]: E1112 20:49:07.637264 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.637460 kubelet[2601]: W1112 20:49:07.637281 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.637460 kubelet[2601]: E1112 20:49:07.637302 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.638037 kubelet[2601]: E1112 20:49:07.637691 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.638037 kubelet[2601]: W1112 20:49:07.637707 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.638037 kubelet[2601]: E1112 20:49:07.637729 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.639625 kubelet[2601]: E1112 20:49:07.639204 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.639625 kubelet[2601]: W1112 20:49:07.639225 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.639625 kubelet[2601]: E1112 20:49:07.639246 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.642249 kubelet[2601]: E1112 20:49:07.641877 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.642249 kubelet[2601]: W1112 20:49:07.641909 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.642249 kubelet[2601]: E1112 20:49:07.641943 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.643822 kubelet[2601]: E1112 20:49:07.643441 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.643822 kubelet[2601]: W1112 20:49:07.643470 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.643822 kubelet[2601]: E1112 20:49:07.643508 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:07.645396 kubelet[2601]: E1112 20:49:07.645128 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.645396 kubelet[2601]: W1112 20:49:07.645157 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.645396 kubelet[2601]: E1112 20:49:07.645188 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.651040 kubelet[2601]: E1112 20:49:07.650226 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.651040 kubelet[2601]: W1112 20:49:07.650927 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.651040 kubelet[2601]: E1112 20:49:07.650987 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.652724 kubelet[2601]: E1112 20:49:07.652686 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.652724 kubelet[2601]: W1112 20:49:07.652718 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.653238 kubelet[2601]: E1112 20:49:07.652755 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.654550 kubelet[2601]: E1112 20:49:07.654159 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.654550 kubelet[2601]: W1112 20:49:07.654188 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.654550 kubelet[2601]: E1112 20:49:07.654225 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.654751 kubelet[2601]: E1112 20:49:07.654688 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.654751 kubelet[2601]: W1112 20:49:07.654705 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.654751 kubelet[2601]: E1112 20:49:07.654733 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:07.655399 kubelet[2601]: E1112 20:49:07.655010 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.655399 kubelet[2601]: W1112 20:49:07.655022 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.655399 kubelet[2601]: E1112 20:49:07.655044 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.655571 kubelet[2601]: E1112 20:49:07.655519 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.655571 kubelet[2601]: W1112 20:49:07.655533 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.655571 kubelet[2601]: E1112 20:49:07.655554 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.669751 kubelet[2601]: E1112 20:49:07.661889 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.669751 kubelet[2601]: W1112 20:49:07.661982 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.669751 kubelet[2601]: E1112 20:49:07.662106 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.669751 kubelet[2601]: E1112 20:49:07.663416 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.669751 kubelet[2601]: W1112 20:49:07.663439 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.669751 kubelet[2601]: E1112 20:49:07.663472 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.669751 kubelet[2601]: E1112 20:49:07.665212 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.669751 kubelet[2601]: W1112 20:49:07.665236 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.669751 kubelet[2601]: E1112 20:49:07.665271 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:07.669751 kubelet[2601]: E1112 20:49:07.666909 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.670661 kubelet[2601]: W1112 20:49:07.666931 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.670661 kubelet[2601]: E1112 20:49:07.667391 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.670661 kubelet[2601]: E1112 20:49:07.668203 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.670661 kubelet[2601]: W1112 20:49:07.668222 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.670661 kubelet[2601]: E1112 20:49:07.668249 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.670661 kubelet[2601]: E1112 20:49:07.668566 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.670661 kubelet[2601]: W1112 20:49:07.668578 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.670661 kubelet[2601]: E1112 20:49:07.668598 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.670661 kubelet[2601]: E1112 20:49:07.670346 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.670661 kubelet[2601]: W1112 20:49:07.670418 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.672101 kubelet[2601]: E1112 20:49:07.670804 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.672101 kubelet[2601]: E1112 20:49:07.671958 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.672101 kubelet[2601]: W1112 20:49:07.671978 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.673076 kubelet[2601]: E1112 20:49:07.672255 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:07.673571 kubelet[2601]: E1112 20:49:07.673364 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.673571 kubelet[2601]: W1112 20:49:07.673387 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.673571 kubelet[2601]: E1112 20:49:07.673424 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.674032 kubelet[2601]: E1112 20:49:07.673749 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.674032 kubelet[2601]: W1112 20:49:07.673769 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.674032 kubelet[2601]: E1112 20:49:07.673987 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.674270 kubelet[2601]: E1112 20:49:07.674064 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.674270 kubelet[2601]: W1112 20:49:07.674075 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.674270 kubelet[2601]: E1112 20:49:07.674099 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.675943 kubelet[2601]: E1112 20:49:07.675338 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.675943 kubelet[2601]: W1112 20:49:07.675361 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.675943 kubelet[2601]: E1112 20:49:07.675384 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.675943 kubelet[2601]: E1112 20:49:07.675679 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.675943 kubelet[2601]: W1112 20:49:07.675691 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.675943 kubelet[2601]: E1112 20:49:07.675709 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:07.676991 kubelet[2601]: E1112 20:49:07.676608 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:07.676991 kubelet[2601]: W1112 20:49:07.676628 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:07.676991 kubelet[2601]: E1112 20:49:07.676649 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:07.871443 containerd[1455]: time="2024-11-12T20:49:07.869672581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:07.926746 containerd[1455]: time="2024-11-12T20:49:07.926636372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:49:07.929559 containerd[1455]: time="2024-11-12T20:49:07.929368669Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:07.938675 containerd[1455]: time="2024-11-12T20:49:07.938610962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:07.941317 containerd[1455]: time="2024-11-12T20:49:07.941233796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.891596157s" Nov 12 20:49:07.941568 containerd[1455]: time="2024-11-12T20:49:07.941538067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:49:07.949003 containerd[1455]: time="2024-11-12T20:49:07.948642971Z" level=info msg="CreateContainer within sandbox \"764e70facb1f32c1f567b338a3cc425072b9c41f11c795897bd2c8e61bfb0b79\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:49:08.019383 containerd[1455]: time="2024-11-12T20:49:08.019274498Z" level=info msg="CreateContainer within sandbox \"764e70facb1f32c1f567b338a3cc425072b9c41f11c795897bd2c8e61bfb0b79\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0d0b7e38cb7e4cc75ece53287a5b98207bbc3bfd1aa3c430a41c6d0c4e202511\"" Nov 12 20:49:08.021141 containerd[1455]: time="2024-11-12T20:49:08.020656214Z" level=info msg="StartContainer for \"0d0b7e38cb7e4cc75ece53287a5b98207bbc3bfd1aa3c430a41c6d0c4e202511\"" Nov 12 20:49:08.089121 systemd[1]: run-containerd-runc-k8s.io-0d0b7e38cb7e4cc75ece53287a5b98207bbc3bfd1aa3c430a41c6d0c4e202511-runc.1ipHA3.mount: Deactivated successfully. 
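[annotation] The error burst above comes from kubelet's FlexVolume prober: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, runs each <vendor>~<driver>/<driver> binary with the subcommand init, and expects a JSON status object on stdout. The nodeagent~uds/uds binary does not exist yet, so the call yields empty output and the JSON unmarshal fails; Calico's pod2daemon-flexvol container, pulled just above, is what installs it. A minimal sketch of a conforming driver, assuming the documented FlexVolume call convention:

```go
// Minimal FlexVolume driver sketch: kubelet invokes the binary with the
// subcommand as argv[1] and unmarshals a JSON status object from stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape kubelet's driver-call.go expects.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Printing nothing here is exactly what produced the
		// "unexpected end of JSON input" errors in the log above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```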
Nov 12 20:49:08.102199 systemd[1]: Started cri-containerd-0d0b7e38cb7e4cc75ece53287a5b98207bbc3bfd1aa3c430a41c6d0c4e202511.scope - libcontainer container 0d0b7e38cb7e4cc75ece53287a5b98207bbc3bfd1aa3c430a41c6d0c4e202511. Nov 12 20:49:08.204358 kubelet[2601]: E1112 20:49:08.199154 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8h2nz" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" Nov 12 20:49:08.266458 containerd[1455]: time="2024-11-12T20:49:08.265241777Z" level=info msg="StartContainer for \"0d0b7e38cb7e4cc75ece53287a5b98207bbc3bfd1aa3c430a41c6d0c4e202511\" returns successfully" Nov 12 20:49:08.277224 systemd[1]: cri-containerd-0d0b7e38cb7e4cc75ece53287a5b98207bbc3bfd1aa3c430a41c6d0c4e202511.scope: Deactivated successfully. Nov 12 20:49:08.355265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d0b7e38cb7e4cc75ece53287a5b98207bbc3bfd1aa3c430a41c6d0c4e202511-rootfs.mount: Deactivated successfully. Nov 12 20:49:08.400937 containerd[1455]: time="2024-11-12T20:49:08.369281438Z" level=info msg="shim disconnected" id=0d0b7e38cb7e4cc75ece53287a5b98207bbc3bfd1aa3c430a41c6d0c4e202511 namespace=k8s.io Nov 12 20:49:08.401576 containerd[1455]: time="2024-11-12T20:49:08.401263889Z" level=warning msg="cleaning up after shim disconnected" id=0d0b7e38cb7e4cc75ece53287a5b98207bbc3bfd1aa3c430a41c6d0c4e202511 namespace=k8s.io Nov 12 20:49:08.401576 containerd[1455]: time="2024-11-12T20:49:08.401301358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:49:08.571859 kubelet[2601]: E1112 20:49:08.571674 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:08.580939 containerd[1455]: time="2024-11-12T20:49:08.577448968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:49:08.640897 kubelet[2601]: I1112 20:49:08.640345 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6d6fcff85c-cgqwd" podStartSLOduration=4.40492571 podStartE2EDuration="7.64028185s" podCreationTimestamp="2024-11-12 20:49:01 +0000 UTC" firstStartedPulling="2024-11-12 20:49:02.81322125 +0000 UTC m=+23.914210206" lastFinishedPulling="2024-11-12 20:49:06.048577414 +0000 UTC m=+27.149566346" observedRunningTime="2024-11-12 20:49:06.591923928 +0000 UTC m=+27.692912883" watchObservedRunningTime="2024-11-12 20:49:08.64028185 +0000 UTC m=+29.741270818" Nov 12 20:49:10.199138 kubelet[2601]: E1112 20:49:10.198604 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8h2nz" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" Nov 12 20:49:10.761728 kubelet[2601]: I1112 20:49:10.761661 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:10.762898 kubelet[2601]: E1112 20:49:10.762833 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:11.616197 kubelet[2601]: E1112 20:49:11.612973 2601 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:12.200066 kubelet[2601]: E1112 20:49:12.199723 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8h2nz" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" Nov 12 20:49:14.200799 kubelet[2601]: E1112 20:49:14.200318 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8h2nz" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" Nov 12 20:49:15.538923 containerd[1455]: time="2024-11-12T20:49:15.537667549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:15.540635 containerd[1455]: time="2024-11-12T20:49:15.540071901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:49:15.543892 containerd[1455]: time="2024-11-12T20:49:15.542796781Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:15.551897 containerd[1455]: time="2024-11-12T20:49:15.551069687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:15.552327 containerd[1455]: time="2024-11-12T20:49:15.552236149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 6.97468654s" Nov 12 20:49:15.552327 containerd[1455]: time="2024-11-12T20:49:15.552322905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:49:15.577877 containerd[1455]: time="2024-11-12T20:49:15.575309525Z" level=info msg="CreateContainer within sandbox \"764e70facb1f32c1f567b338a3cc425072b9c41f11c795897bd2c8e61bfb0b79\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:49:15.607699 containerd[1455]: time="2024-11-12T20:49:15.607563243Z" level=info msg="CreateContainer within sandbox \"764e70facb1f32c1f567b338a3cc425072b9c41f11c795897bd2c8e61bfb0b79\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0673e07b0dddd60b932574327d1bdf0038b1342984fcc520bc182ec65e5dddaa\"" Nov 12 20:49:15.610928 containerd[1455]: time="2024-11-12T20:49:15.609468800Z" level=info msg="StartContainer for \"0673e07b0dddd60b932574327d1bdf0038b1342984fcc520bc182ec65e5dddaa\"" Nov 12 20:49:15.748516 systemd[1]: Started cri-containerd-0673e07b0dddd60b932574327d1bdf0038b1342984fcc520bc182ec65e5dddaa.scope - libcontainer container 
0673e07b0dddd60b932574327d1bdf0038b1342984fcc520bc182ec65e5dddaa. Nov 12 20:49:15.818455 containerd[1455]: time="2024-11-12T20:49:15.817475547Z" level=info msg="StartContainer for \"0673e07b0dddd60b932574327d1bdf0038b1342984fcc520bc182ec65e5dddaa\" returns successfully" Nov 12 20:49:16.199747 kubelet[2601]: E1112 20:49:16.199046 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8h2nz" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" Nov 12 20:49:16.648801 kubelet[2601]: E1112 20:49:16.648706 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:16.730572 systemd[1]: cri-containerd-0673e07b0dddd60b932574327d1bdf0038b1342984fcc520bc182ec65e5dddaa.scope: Deactivated successfully. Nov 12 20:49:16.783608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0673e07b0dddd60b932574327d1bdf0038b1342984fcc520bc182ec65e5dddaa-rootfs.mount: Deactivated successfully. Nov 12 20:49:16.793103 containerd[1455]: time="2024-11-12T20:49:16.793010739Z" level=info msg="shim disconnected" id=0673e07b0dddd60b932574327d1bdf0038b1342984fcc520bc182ec65e5dddaa namespace=k8s.io Nov 12 20:49:16.796991 containerd[1455]: time="2024-11-12T20:49:16.795373845Z" level=warning msg="cleaning up after shim disconnected" id=0673e07b0dddd60b932574327d1bdf0038b1342984fcc520bc182ec65e5dddaa namespace=k8s.io Nov 12 20:49:16.796991 containerd[1455]: time="2024-11-12T20:49:16.795430748Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:49:16.841775 kubelet[2601]: I1112 20:49:16.841744 2601 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 20:49:16.887245 kubelet[2601]: I1112 20:49:16.886817 2601 topology_manager.go:215] "Topology Admit Handler" podUID="4008f795-fc1f-4445-8e95-1bef6a854734" podNamespace="kube-system" podName="coredns-76f75df574-s65ff" Nov 12 20:49:16.890638 kubelet[2601]: I1112 20:49:16.890560 2601 topology_manager.go:215] "Topology Admit Handler" podUID="7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c" podNamespace="kube-system" podName="coredns-76f75df574-92hnz" Nov 12 20:49:16.900299 kubelet[2601]: I1112 20:49:16.900007 2601 topology_manager.go:215] "Topology Admit Handler" podUID="070859a9-c4e7-4be8-9722-144e7da7cafe" podNamespace="calico-apiserver" podName="calico-apiserver-b9cd6c9fd-lpw7d" Nov 12 20:49:16.910260 systemd[1]: Created slice kubepods-burstable-pod4008f795_fc1f_4445_8e95_1bef6a854734.slice - libcontainer container kubepods-burstable-pod4008f795_fc1f_4445_8e95_1bef6a854734.slice. Nov 12 20:49:16.925271 kubelet[2601]: I1112 20:49:16.925183 2601 topology_manager.go:215] "Topology Admit Handler" podUID="ab892f57-dd73-4a88-b362-a0a22a8db051" podNamespace="calico-system" podName="calico-kube-controllers-5947c459c4-p9zc7" Nov 12 20:49:16.935792 systemd[1]: Created slice kubepods-burstable-pod7fbc16cd_09b0_457a_a2a1_8f4ff6bb628c.slice - libcontainer container kubepods-burstable-pod7fbc16cd_09b0_457a_a2a1_8f4ff6bb628c.slice. 
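[annotation] The recurring "Nameserver limits exceeded" warnings reflect the glibc resolver's three-nameserver cap: kubelet trims the host resolv.conf list before applying it, and the applied line it reports (67.207.67.2 67.207.67.3 67.207.67.2) even carries a duplicate entry. A rough sketch of that truncation, not kubelet's actual implementation:

```go
// Rough sketch of the cap behind the "Nameserver limits exceeded" warnings:
// the glibc resolver honours at most three nameservers, so extras are dropped.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolv.conf limit the resolver enforces

func applyNameserverLimit(servers []string) (applied []string, omitted bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Hypothetical input: the log only shows the applied line
	// "67.207.67.2 67.207.67.3 67.207.67.2", so the fourth entry here is
	// an invented stand-in for whatever was actually omitted.
	hostList := strings.Fields("67.207.67.2 67.207.67.3 67.207.67.2 192.0.2.53")
	applied, omitted := applyNameserverLimit(hostList)
	fmt.Println(applied, "omitted:", omitted)
}
```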
Nov 12 20:49:16.954364 kubelet[2601]: I1112 20:49:16.953398 2601 topology_manager.go:215] "Topology Admit Handler" podUID="90187418-37a7-49a8-afc5-d46af801e2a9" podNamespace="calico-apiserver" podName="calico-apiserver-b9cd6c9fd-zzw2n" Nov 12 20:49:16.958049 systemd[1]: Created slice kubepods-besteffort-pod070859a9_c4e7_4be8_9722_144e7da7cafe.slice - libcontainer container kubepods-besteffort-pod070859a9_c4e7_4be8_9722_144e7da7cafe.slice. Nov 12 20:49:16.980986 systemd[1]: Created slice kubepods-besteffort-podab892f57_dd73_4a88_b362_a0a22a8db051.slice - libcontainer container kubepods-besteffort-podab892f57_dd73_4a88_b362_a0a22a8db051.slice. Nov 12 20:49:16.988257 kubelet[2601]: I1112 20:49:16.987526 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab892f57-dd73-4a88-b362-a0a22a8db051-tigera-ca-bundle\") pod \"calico-kube-controllers-5947c459c4-p9zc7\" (UID: \"ab892f57-dd73-4a88-b362-a0a22a8db051\") " pod="calico-system/calico-kube-controllers-5947c459c4-p9zc7" Nov 12 20:49:16.990891 kubelet[2601]: I1112 20:49:16.990007 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q72b\" (UniqueName: \"kubernetes.io/projected/070859a9-c4e7-4be8-9722-144e7da7cafe-kube-api-access-7q72b\") pod \"calico-apiserver-b9cd6c9fd-lpw7d\" (UID: \"070859a9-c4e7-4be8-9722-144e7da7cafe\") " pod="calico-apiserver/calico-apiserver-b9cd6c9fd-lpw7d" Nov 12 20:49:16.990891 kubelet[2601]: I1112 20:49:16.990088 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/070859a9-c4e7-4be8-9722-144e7da7cafe-calico-apiserver-certs\") pod \"calico-apiserver-b9cd6c9fd-lpw7d\" (UID: \"070859a9-c4e7-4be8-9722-144e7da7cafe\") " pod="calico-apiserver/calico-apiserver-b9cd6c9fd-lpw7d" Nov 12 20:49:16.990891 kubelet[2601]: I1112 20:49:16.990133 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c-config-volume\") pod \"coredns-76f75df574-92hnz\" (UID: \"7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c\") " pod="kube-system/coredns-76f75df574-92hnz" Nov 12 20:49:16.990891 kubelet[2601]: I1112 20:49:16.990175 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4008f795-fc1f-4445-8e95-1bef6a854734-config-volume\") pod \"coredns-76f75df574-s65ff\" (UID: \"4008f795-fc1f-4445-8e95-1bef6a854734\") " pod="kube-system/coredns-76f75df574-s65ff" Nov 12 20:49:16.990891 kubelet[2601]: I1112 20:49:16.990220 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90187418-37a7-49a8-afc5-d46af801e2a9-calico-apiserver-certs\") pod \"calico-apiserver-b9cd6c9fd-zzw2n\" (UID: \"90187418-37a7-49a8-afc5-d46af801e2a9\") " pod="calico-apiserver/calico-apiserver-b9cd6c9fd-zzw2n" Nov 12 20:49:16.991294 kubelet[2601]: I1112 20:49:16.990263 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tvd7\" (UniqueName: \"kubernetes.io/projected/90187418-37a7-49a8-afc5-d46af801e2a9-kube-api-access-6tvd7\") pod \"calico-apiserver-b9cd6c9fd-zzw2n\" (UID: 
\"90187418-37a7-49a8-afc5-d46af801e2a9\") " pod="calico-apiserver/calico-apiserver-b9cd6c9fd-zzw2n" Nov 12 20:49:16.991294 kubelet[2601]: I1112 20:49:16.990309 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b5wn\" (UniqueName: \"kubernetes.io/projected/ab892f57-dd73-4a88-b362-a0a22a8db051-kube-api-access-4b5wn\") pod \"calico-kube-controllers-5947c459c4-p9zc7\" (UID: \"ab892f57-dd73-4a88-b362-a0a22a8db051\") " pod="calico-system/calico-kube-controllers-5947c459c4-p9zc7" Nov 12 20:49:16.991294 kubelet[2601]: I1112 20:49:16.990362 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkdwl\" (UniqueName: \"kubernetes.io/projected/7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c-kube-api-access-qkdwl\") pod \"coredns-76f75df574-92hnz\" (UID: \"7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c\") " pod="kube-system/coredns-76f75df574-92hnz" Nov 12 20:49:16.994405 kubelet[2601]: I1112 20:49:16.994149 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m7jn\" (UniqueName: \"kubernetes.io/projected/4008f795-fc1f-4445-8e95-1bef6a854734-kube-api-access-5m7jn\") pod \"coredns-76f75df574-s65ff\" (UID: \"4008f795-fc1f-4445-8e95-1bef6a854734\") " pod="kube-system/coredns-76f75df574-s65ff" Nov 12 20:49:16.997814 systemd[1]: Created slice kubepods-besteffort-pod90187418_37a7_49a8_afc5_d46af801e2a9.slice - libcontainer container kubepods-besteffort-pod90187418_37a7_49a8_afc5_d46af801e2a9.slice. Nov 12 20:49:17.228897 kubelet[2601]: E1112 20:49:17.226027 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:17.232288 containerd[1455]: time="2024-11-12T20:49:17.231344340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s65ff,Uid:4008f795-fc1f-4445-8e95-1bef6a854734,Namespace:kube-system,Attempt:0,}" Nov 12 20:49:17.249451 kubelet[2601]: E1112 20:49:17.247300 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:17.250620 containerd[1455]: time="2024-11-12T20:49:17.250153572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-92hnz,Uid:7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c,Namespace:kube-system,Attempt:0,}" Nov 12 20:49:17.272596 containerd[1455]: time="2024-11-12T20:49:17.272384479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9cd6c9fd-lpw7d,Uid:070859a9-c4e7-4be8-9722-144e7da7cafe,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:49:17.297194 containerd[1455]: time="2024-11-12T20:49:17.296898458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5947c459c4-p9zc7,Uid:ab892f57-dd73-4a88-b362-a0a22a8db051,Namespace:calico-system,Attempt:0,}" Nov 12 20:49:17.313985 containerd[1455]: time="2024-11-12T20:49:17.313917662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9cd6c9fd-zzw2n,Uid:90187418-37a7-49a8-afc5-d46af801e2a9,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:49:17.661334 kubelet[2601]: E1112 20:49:17.660943 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Nov 12 20:49:17.666384 containerd[1455]: time="2024-11-12T20:49:17.665768899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:49:17.853754 containerd[1455]: time="2024-11-12T20:49:17.853160375Z" level=error msg="Failed to destroy network for sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.860428 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b-shm.mount: Deactivated successfully. Nov 12 20:49:17.867456 containerd[1455]: time="2024-11-12T20:49:17.866651283Z" level=error msg="encountered an error cleaning up failed sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.867456 containerd[1455]: time="2024-11-12T20:49:17.866817747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9cd6c9fd-zzw2n,Uid:90187418-37a7-49a8-afc5-d46af801e2a9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.871667 kubelet[2601]: E1112 20:49:17.870164 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.871667 kubelet[2601]: E1112 20:49:17.870270 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9cd6c9fd-zzw2n" Nov 12 20:49:17.871667 kubelet[2601]: E1112 20:49:17.870296 2601 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9cd6c9fd-zzw2n" Nov 12 20:49:17.872439 kubelet[2601]: E1112 20:49:17.871941 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9cd6c9fd-zzw2n_calico-apiserver(90187418-37a7-49a8-afc5-d46af801e2a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-b9cd6c9fd-zzw2n_calico-apiserver(90187418-37a7-49a8-afc5-d46af801e2a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9cd6c9fd-zzw2n" podUID="90187418-37a7-49a8-afc5-d46af801e2a9" Nov 12 20:49:17.882707 containerd[1455]: time="2024-11-12T20:49:17.880933374Z" level=error msg="Failed to destroy network for sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.882707 containerd[1455]: time="2024-11-12T20:49:17.882551881Z" level=error msg="Failed to destroy network for sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.883294 containerd[1455]: time="2024-11-12T20:49:17.883238674Z" level=error msg="encountered an error cleaning up failed sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.883383 containerd[1455]: time="2024-11-12T20:49:17.883340422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9cd6c9fd-lpw7d,Uid:070859a9-c4e7-4be8-9722-144e7da7cafe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.888009 kubelet[2601]: E1112 20:49:17.885110 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.888009 kubelet[2601]: E1112 20:49:17.885198 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9cd6c9fd-lpw7d" Nov 12 20:49:17.888009 kubelet[2601]: E1112 20:49:17.885237 2601 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9cd6c9fd-lpw7d" Nov 12 20:49:17.887688 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce-shm.mount: Deactivated successfully. Nov 12 20:49:17.888372 kubelet[2601]: E1112 20:49:17.885325 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9cd6c9fd-lpw7d_calico-apiserver(070859a9-c4e7-4be8-9722-144e7da7cafe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9cd6c9fd-lpw7d_calico-apiserver(070859a9-c4e7-4be8-9722-144e7da7cafe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9cd6c9fd-lpw7d" podUID="070859a9-c4e7-4be8-9722-144e7da7cafe" Nov 12 20:49:17.888803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950-shm.mount: Deactivated successfully. Nov 12 20:49:17.892466 containerd[1455]: time="2024-11-12T20:49:17.892325919Z" level=error msg="encountered an error cleaning up failed sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.892576 containerd[1455]: time="2024-11-12T20:49:17.892499076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-92hnz,Uid:7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.896974 kubelet[2601]: E1112 20:49:17.896235 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.896974 kubelet[2601]: E1112 20:49:17.896336 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-92hnz" Nov 12 20:49:17.896974 kubelet[2601]: E1112 20:49:17.896383 2601 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-92hnz" Nov 12 20:49:17.898942 kubelet[2601]: E1112 20:49:17.898455 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-92hnz_kube-system(7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-92hnz_kube-system(7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-92hnz" podUID="7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c" Nov 12 20:49:17.905298 containerd[1455]: time="2024-11-12T20:49:17.905241318Z" level=error msg="Failed to destroy network for sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.907358 containerd[1455]: time="2024-11-12T20:49:17.907201426Z" level=error msg="encountered an error cleaning up failed sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.908130 containerd[1455]: time="2024-11-12T20:49:17.908071676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5947c459c4-p9zc7,Uid:ab892f57-dd73-4a88-b362-a0a22a8db051,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.910167 kubelet[2601]: E1112 20:49:17.910133 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.912386 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a-shm.mount: Deactivated successfully. 
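[annotation] Every sandbox failure above shares one root cause string: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and at this point only the flexvol-driver and install-cni init containers have run. Until the file exists, every CNI add or delete fails and no pod sandbox can get a network. A minimal sketch of that check, assuming the simplified behaviour the error message itself describes:

```go
// Minimal sketch of the readiness check named in the errors above: read the
// node name that a running calico/node container writes to
// /var/lib/calico/nodename, and fail the CNI call if the file is absent.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func calicoNodeName() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the failure surfaced in every RunPodSandbox error above.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```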
Nov 12 20:49:17.916724 kubelet[2601]: E1112 20:49:17.913871 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5947c459c4-p9zc7" Nov 12 20:49:17.916724 kubelet[2601]: E1112 20:49:17.913918 2601 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5947c459c4-p9zc7" Nov 12 20:49:17.916724 kubelet[2601]: E1112 20:49:17.914068 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5947c459c4-p9zc7_calico-system(ab892f57-dd73-4a88-b362-a0a22a8db051)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5947c459c4-p9zc7_calico-system(ab892f57-dd73-4a88-b362-a0a22a8db051)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5947c459c4-p9zc7" podUID="ab892f57-dd73-4a88-b362-a0a22a8db051" Nov 12 20:49:17.918059 containerd[1455]: time="2024-11-12T20:49:17.917989208Z" level=error msg="Failed to destroy network for sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.919084 containerd[1455]: time="2024-11-12T20:49:17.919021802Z" level=error msg="encountered an error cleaning up failed sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.919217 containerd[1455]: time="2024-11-12T20:49:17.919114078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s65ff,Uid:4008f795-fc1f-4445-8e95-1bef6a854734,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.919708 kubelet[2601]: E1112 20:49:17.919439 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:17.919708 kubelet[2601]: E1112 20:49:17.919508 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-s65ff" Nov 12 20:49:17.919708 kubelet[2601]: E1112 20:49:17.919556 2601 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-s65ff" Nov 12 20:49:17.919822 kubelet[2601]: E1112 20:49:17.919614 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-s65ff_kube-system(4008f795-fc1f-4445-8e95-1bef6a854734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-s65ff_kube-system(4008f795-fc1f-4445-8e95-1bef6a854734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-s65ff" podUID="4008f795-fc1f-4445-8e95-1bef6a854734" Nov 12 20:49:18.214951 systemd[1]: Created slice kubepods-besteffort-podcfc23e74_373d_4561_98be_87343fb2a0fb.slice - libcontainer container kubepods-besteffort-podcfc23e74_373d_4561_98be_87343fb2a0fb.slice. 
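[annotation] The kubepods-besteffort-podcfc23e74_373d_4561_98be_87343fb2a0fb.slice unit systemd just created follows the cgroup naming visible throughout this log: kubepods-<qosClass>-pod<uid>.slice, with the dashes of the pod UID replaced by underscores. A small sketch reproducing it, simplified from what kubelet's cgroup management appears to do here:

```go
// Sketch of the systemd slice naming seen in the log: QoS class plus pod UID,
// with UID dashes converted to underscores for systemd unit-name safety.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Reproduces the slice systemd created for csi-node-driver-8h2nz above.
	fmt.Println(podSliceName("besteffort", "cfc23e74-373d-4561-98be-87343fb2a0fb"))
}
```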
Nov 12 20:49:18.221054 containerd[1455]: time="2024-11-12T20:49:18.220989755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8h2nz,Uid:cfc23e74-373d-4561-98be-87343fb2a0fb,Namespace:calico-system,Attempt:0,}" Nov 12 20:49:18.389655 containerd[1455]: time="2024-11-12T20:49:18.389292433Z" level=error msg="Failed to destroy network for sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:18.390551 containerd[1455]: time="2024-11-12T20:49:18.390226340Z" level=error msg="encountered an error cleaning up failed sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:18.390551 containerd[1455]: time="2024-11-12T20:49:18.390350767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8h2nz,Uid:cfc23e74-373d-4561-98be-87343fb2a0fb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:18.390800 kubelet[2601]: E1112 20:49:18.390750 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:18.391258 kubelet[2601]: E1112 20:49:18.390834 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8h2nz" Nov 12 20:49:18.391258 kubelet[2601]: E1112 20:49:18.390914 2601 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8h2nz" Nov 12 20:49:18.391258 kubelet[2601]: E1112 20:49:18.391002 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8h2nz_calico-system(cfc23e74-373d-4561-98be-87343fb2a0fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8h2nz_calico-system(cfc23e74-373d-4561-98be-87343fb2a0fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8h2nz" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" Nov 12 20:49:18.663158 kubelet[2601]: I1112 20:49:18.662830 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:18.668255 kubelet[2601]: I1112 20:49:18.667126 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:18.679294 containerd[1455]: time="2024-11-12T20:49:18.678331287Z" level=info msg="StopPodSandbox for \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\"" Nov 12 20:49:18.681020 containerd[1455]: time="2024-11-12T20:49:18.679362906Z" level=info msg="StopPodSandbox for \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\"" Nov 12 20:49:18.683616 containerd[1455]: time="2024-11-12T20:49:18.683372417Z" level=info msg="Ensure that sandbox 18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b in task-service has been cleanup successfully" Nov 12 20:49:18.684646 containerd[1455]: time="2024-11-12T20:49:18.684420140Z" level=info msg="Ensure that sandbox eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950 in task-service has been cleanup successfully" Nov 12 20:49:18.687370 kubelet[2601]: I1112 20:49:18.686178 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:18.691170 containerd[1455]: time="2024-11-12T20:49:18.690966306Z" level=info msg="StopPodSandbox for \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\"" Nov 12 20:49:18.691555 containerd[1455]: time="2024-11-12T20:49:18.691254956Z" level=info msg="Ensure that sandbox 120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a in task-service has been cleanup successfully" Nov 12 20:49:18.696276 kubelet[2601]: I1112 20:49:18.696221 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:18.698408 containerd[1455]: time="2024-11-12T20:49:18.697081149Z" level=info msg="StopPodSandbox for \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\"" Nov 12 20:49:18.714673 kubelet[2601]: I1112 20:49:18.714460 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:18.716758 containerd[1455]: time="2024-11-12T20:49:18.716264700Z" level=info msg="StopPodSandbox for \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\"" Nov 12 20:49:18.716758 containerd[1455]: time="2024-11-12T20:49:18.716471845Z" level=info msg="Ensure that sandbox 592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034 in task-service has been cleanup successfully" Nov 12 20:49:18.718926 containerd[1455]: time="2024-11-12T20:49:18.718725403Z" level=info msg="Ensure that sandbox 739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce in task-service has been cleanup successfully" Nov 12 20:49:18.732369 kubelet[2601]: I1112 20:49:18.732318 2601 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:18.734567 containerd[1455]: time="2024-11-12T20:49:18.734027237Z" level=info msg="StopPodSandbox for \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\"" Nov 12 20:49:18.734567 containerd[1455]: time="2024-11-12T20:49:18.734296898Z" level=info msg="Ensure that sandbox 38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096 in task-service has been cleanup successfully" Nov 12 20:49:18.787005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096-shm.mount: Deactivated successfully. Nov 12 20:49:18.841118 containerd[1455]: time="2024-11-12T20:49:18.841035284Z" level=error msg="StopPodSandbox for \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\" failed" error="failed to destroy network for sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:18.841517 kubelet[2601]: E1112 20:49:18.841404 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:18.841804 kubelet[2601]: E1112 20:49:18.841672 2601 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034"} Nov 12 20:49:18.841804 kubelet[2601]: E1112 20:49:18.841758 2601 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cfc23e74-373d-4561-98be-87343fb2a0fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:18.841965 kubelet[2601]: E1112 20:49:18.841859 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cfc23e74-373d-4561-98be-87343fb2a0fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8h2nz" podUID="cfc23e74-373d-4561-98be-87343fb2a0fb" Nov 12 20:49:18.856593 containerd[1455]: time="2024-11-12T20:49:18.856156002Z" level=error msg="StopPodSandbox for \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\" failed" error="failed to destroy network for sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:18.857107 kubelet[2601]: E1112 20:49:18.856546 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:18.857107 kubelet[2601]: E1112 20:49:18.856697 2601 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce"} Nov 12 20:49:18.857107 kubelet[2601]: E1112 20:49:18.856792 2601 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"070859a9-c4e7-4be8-9722-144e7da7cafe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:18.857107 kubelet[2601]: E1112 20:49:18.856894 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"070859a9-c4e7-4be8-9722-144e7da7cafe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9cd6c9fd-lpw7d" podUID="070859a9-c4e7-4be8-9722-144e7da7cafe" Nov 12 20:49:18.911309 containerd[1455]: time="2024-11-12T20:49:18.911015818Z" level=error msg="StopPodSandbox for \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\" failed" error="failed to destroy network for sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:18.912376 kubelet[2601]: E1112 20:49:18.912328 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:18.912780 kubelet[2601]: E1112 20:49:18.912754 2601 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950"} Nov 12 20:49:18.913143 kubelet[2601]: E1112 20:49:18.912869 2601 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:18.913143 kubelet[2601]: E1112 20:49:18.912921 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-92hnz" podUID="7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c" Nov 12 20:49:18.913393 containerd[1455]: time="2024-11-12T20:49:18.912915215Z" level=error msg="StopPodSandbox for \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\" failed" error="failed to destroy network for sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:18.913393 containerd[1455]: time="2024-11-12T20:49:18.913023903Z" level=error msg="StopPodSandbox for \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\" failed" error="failed to destroy network for sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:18.915266 kubelet[2601]: E1112 20:49:18.913347 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:18.915266 kubelet[2601]: E1112 20:49:18.915077 2601 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b"} Nov 12 20:49:18.915266 kubelet[2601]: E1112 20:49:18.915161 2601 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90187418-37a7-49a8-afc5-d46af801e2a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:18.915266 kubelet[2601]: E1112 20:49:18.915235 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90187418-37a7-49a8-afc5-d46af801e2a9\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9cd6c9fd-zzw2n" podUID="90187418-37a7-49a8-afc5-d46af801e2a9" Nov 12 20:49:18.916427 kubelet[2601]: E1112 20:49:18.916070 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:18.916427 kubelet[2601]: E1112 20:49:18.916119 2601 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a"} Nov 12 20:49:18.916427 kubelet[2601]: E1112 20:49:18.916177 2601 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab892f57-dd73-4a88-b362-a0a22a8db051\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:18.916427 kubelet[2601]: E1112 20:49:18.916220 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab892f57-dd73-4a88-b362-a0a22a8db051\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5947c459c4-p9zc7" podUID="ab892f57-dd73-4a88-b362-a0a22a8db051" Nov 12 20:49:18.917658 containerd[1455]: time="2024-11-12T20:49:18.917366402Z" level=error msg="StopPodSandbox for \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\" failed" error="failed to destroy network for sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:18.917871 kubelet[2601]: E1112 20:49:18.917700 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:18.917871 kubelet[2601]: E1112 20:49:18.917749 2601 
kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096"} Nov 12 20:49:18.917871 kubelet[2601]: E1112 20:49:18.917804 2601 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4008f795-fc1f-4445-8e95-1bef6a854734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:18.917871 kubelet[2601]: E1112 20:49:18.917868 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4008f795-fc1f-4445-8e95-1bef6a854734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-s65ff" podUID="4008f795-fc1f-4445-8e95-1bef6a854734" Nov 12 20:49:22.217098 systemd[1]: Started sshd@7-143.198.78.43:22-139.178.68.195:48596.service - OpenSSH per-connection server daemon (139.178.68.195:48596). Nov 12 20:49:22.424897 sshd[3686]: Accepted publickey for core from 139.178.68.195 port 48596 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:22.428650 sshd[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:22.447389 systemd-logind[1449]: New session 8 of user core. Nov 12 20:49:22.449601 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:49:22.778216 sshd[3686]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:22.790905 systemd[1]: sshd@7-143.198.78.43:22-139.178.68.195:48596.service: Deactivated successfully. Nov 12 20:49:22.795582 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:49:22.799837 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:49:22.802616 systemd-logind[1449]: Removed session 8. Nov 12 20:49:27.180664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457406026.mount: Deactivated successfully. 
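
Note: every failure in the burst above has a single root cause. Before servicing any ADD or DELETE, the Calico CNI plugin reads /var/lib/calico/nodename, and that file is only written by the calico/node container once it is running with /var/lib/calico mounted from the host; calico-node was still pulling its image at this point, so every sandbox operation on the node failed the same way. A minimal Go sketch of the check (the function name and exact error wording are illustrative, not Calico's actual source):

    package main

    import (
        "fmt"
        "os"
    )

    // nodenameFromFile mimics the lookup the errors above complain about: the
    // CNI plugin needs the node name that calico/node writes at startup, and
    // until that file exists every CNI ADD and DELETE on this host fails.
    func nodenameFromFile(path string) (string, error) {
        data, err := os.ReadFile(path)
        if os.IsNotExist(err) {
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", path)
        }
        if err != nil {
            return "", err
        }
        return string(data), nil
    }

    func main() {
        if _, err := nodenameFromFile("/var/lib/calico/nodename"); err != nil {
            fmt.Println(err) // the same error kubelet keeps surfacing above
        }
    }

Kubelet retries the failed sandboxes with backoff, which is why the same pods reappear below with Attempt:1 once calico-node is up.
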
Nov 12 20:49:27.288180 containerd[1455]: time="2024-11-12T20:49:27.280813011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:27.313633 containerd[1455]: time="2024-11-12T20:49:27.313497079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:49:27.348349 containerd[1455]: time="2024-11-12T20:49:27.347890842Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:27.363297 containerd[1455]: time="2024-11-12T20:49:27.363224486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:27.365199 containerd[1455]: time="2024-11-12T20:49:27.365058699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 9.699224599s" Nov 12 20:49:27.365199 containerd[1455]: time="2024-11-12T20:49:27.365179879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:49:27.543014 containerd[1455]: time="2024-11-12T20:49:27.542500225Z" level=info msg="CreateContainer within sandbox \"764e70facb1f32c1f567b338a3cc425072b9c41f11c795897bd2c8e61bfb0b79\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:49:27.670519 containerd[1455]: time="2024-11-12T20:49:27.670400788Z" level=info msg="CreateContainer within sandbox \"764e70facb1f32c1f567b338a3cc425072b9c41f11c795897bd2c8e61bfb0b79\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"acd04dbe1b882bf17b4252245afcb3f1b4d277ac76d86901cea0f9ef21060b17\"" Nov 12 20:49:27.674923 containerd[1455]: time="2024-11-12T20:49:27.673661680Z" level=info msg="StartContainer for \"acd04dbe1b882bf17b4252245afcb3f1b4d277ac76d86901cea0f9ef21060b17\"" Nov 12 20:49:27.804290 systemd[1]: Started sshd@8-143.198.78.43:22-139.178.68.195:39350.service - OpenSSH per-connection server daemon (139.178.68.195:39350). Nov 12 20:49:27.959411 sshd[3711]: Accepted publickey for core from 139.178.68.195 port 39350 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:27.963355 sshd[3711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:27.968384 systemd[1]: Started cri-containerd-acd04dbe1b882bf17b4252245afcb3f1b4d277ac76d86901cea0f9ef21060b17.scope - libcontainer container acd04dbe1b882bf17b4252245afcb3f1b4d277ac76d86901cea0f9ef21060b17. Nov 12 20:49:27.990590 systemd-logind[1449]: New session 9 of user core. Nov 12 20:49:27.996367 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:49:28.114976 containerd[1455]: time="2024-11-12T20:49:28.114050175Z" level=info msg="StartContainer for \"acd04dbe1b882bf17b4252245afcb3f1b4d277ac76d86901cea0f9ef21060b17\" returns successfully" Nov 12 20:49:28.341938 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
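
Note: this is the turning point. The ghcr.io/flatcar/calico/node:v3.29.0 pull completes (140,580,710 bytes read in 9.699224599s, roughly 14.5 MB/s), kubelet starts the calico-node container, and the kernel loads the WireGuard module, which Calico can use for optional node-to-node encryption. A quick Go check of the throughput arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both figures are copied from the "stop pulling image" line above.
        const bytesRead = 140580710
        elapsed := 9699224599 * time.Nanosecond // "in 9.699224599s"

        fmt.Printf("pull throughput: %.1f MB/s\n", bytesRead/elapsed.Seconds()/1e6)
    }
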
Nov 12 20:49:28.344777 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 12 20:49:28.361553 sshd[3711]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:28.369283 systemd[1]: sshd@8-143.198.78.43:22-139.178.68.195:39350.service: Deactivated successfully. Nov 12 20:49:28.378509 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:49:28.386436 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:49:28.394354 systemd-logind[1449]: Removed session 9. Nov 12 20:49:28.791734 kubelet[2601]: E1112 20:49:28.791666 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:29.018537 systemd[1]: run-containerd-runc-k8s.io-acd04dbe1b882bf17b4252245afcb3f1b4d277ac76d86901cea0f9ef21060b17-runc.2Iy7ps.mount: Deactivated successfully. Nov 12 20:49:29.808011 kubelet[2601]: E1112 20:49:29.807804 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:30.205546 containerd[1455]: time="2024-11-12T20:49:30.203732132Z" level=info msg="StopPodSandbox for \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\"" Nov 12 20:49:30.403974 kubelet[2601]: I1112 20:49:30.403757 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-px675" podStartSLOduration=4.870343009 podStartE2EDuration="29.38298692s" podCreationTimestamp="2024-11-12 20:49:01 +0000 UTC" firstStartedPulling="2024-11-12 20:49:02.852915264 +0000 UTC m=+23.953904198" lastFinishedPulling="2024-11-12 20:49:27.365559174 +0000 UTC m=+48.466548109" observedRunningTime="2024-11-12 20:49:28.847420576 +0000 UTC m=+49.948409538" watchObservedRunningTime="2024-11-12 20:49:30.38298692 +0000 UTC m=+51.483975881" Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.390 [INFO][3866] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.390 [INFO][3866] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" iface="eth0" netns="/var/run/netns/cni-c47e19e5-5353-5c72-0789-9f5d048561a0" Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.391 [INFO][3866] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" iface="eth0" netns="/var/run/netns/cni-c47e19e5-5353-5c72-0789-9f5d048561a0" Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.393 [INFO][3866] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" iface="eth0" netns="/var/run/netns/cni-c47e19e5-5353-5c72-0789-9f5d048561a0" Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.393 [INFO][3866] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.393 [INFO][3866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.593 [INFO][3891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" HandleID="k8s-pod-network.eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.595 [INFO][3891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.595 [INFO][3891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.619 [WARNING][3891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" HandleID="k8s-pod-network.eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.619 [INFO][3891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" HandleID="k8s-pod-network.eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.625 [INFO][3891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:30.637526 containerd[1455]: 2024-11-12 20:49:30.632 [INFO][3866] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:30.637526 containerd[1455]: time="2024-11-12T20:49:30.636964167Z" level=info msg="TearDown network for sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\" successfully" Nov 12 20:49:30.637526 containerd[1455]: time="2024-11-12T20:49:30.637273243Z" level=info msg="StopPodSandbox for \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\" returns successfully" Nov 12 20:49:30.639592 kubelet[2601]: E1112 20:49:30.639104 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:30.647154 systemd[1]: run-netns-cni\x2dc47e19e5\x2d5353\x2d5c72\x2d0789\x2d9f5d048561a0.mount: Deactivated successfully. 
Nov 12 20:49:30.654990 containerd[1455]: time="2024-11-12T20:49:30.653787326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-92hnz,Uid:7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c,Namespace:kube-system,Attempt:1,}" Nov 12 20:49:30.806506 kubelet[2601]: E1112 20:49:30.806439 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:31.201318 containerd[1455]: time="2024-11-12T20:49:31.201246629Z" level=info msg="StopPodSandbox for \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\"" Nov 12 20:49:31.329181 systemd-networkd[1378]: cali299a36d1be6: Link UP Nov 12 20:49:31.329466 systemd-networkd[1378]: cali299a36d1be6: Gained carrier Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:30.947 [INFO][3944] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.009 [INFO][3944] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0 coredns-76f75df574- kube-system 7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c 890 0 2024-11-12 20:48:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-d-ef96bd2a01 coredns-76f75df574-92hnz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali299a36d1be6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Namespace="kube-system" Pod="coredns-76f75df574-92hnz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.011 [INFO][3944] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Namespace="kube-system" Pod="coredns-76f75df574-92hnz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.152 [INFO][3971] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" HandleID="k8s-pod-network.59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.174 [INFO][3971] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" HandleID="k8s-pod-network.59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319110), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-d-ef96bd2a01", "pod":"coredns-76f75df574-92hnz", "timestamp":"2024-11-12 20:49:31.152532568 +0000 UTC"}, Hostname:"ci-4081.2.0-d-ef96bd2a01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:31.364111 
containerd[1455]: 2024-11-12 20:49:31.175 [INFO][3971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.175 [INFO][3971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.176 [INFO][3971] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-d-ef96bd2a01' Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.181 [INFO][3971] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.194 [INFO][3971] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.208 [INFO][3971] ipam/ipam.go 489: Trying affinity for 192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.216 [INFO][3971] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.225 [INFO][3971] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.225 [INFO][3971] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.253 [INFO][3971] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.270 [INFO][3971] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.284 [INFO][3971] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.1/26] block=192.168.42.0/26 handle="k8s-pod-network.59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.284 [INFO][3971] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.1/26] handle="k8s-pod-network.59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.285 [INFO][3971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
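
Note: the retried ADD for coredns-76f75df574-92hnz now walks Calico's IPAM path end to end: the host already holds an affinity for block 192.168.42.0/26, the block is loaded under the host-wide lock, and 192.168.42.1 is claimed (the /26 suffix in "Successfully claimed IPs" is block notation; the endpoint itself receives 192.168.42.1/32, as the next lines show). A /26 leaves six host bits, so each affine block covers 64 addresses. Checking that arithmetic with the standard library:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and address as reported by ipam/ipam.go above.
        block := netip.MustParsePrefix("192.168.42.0/26")
        addr := netip.MustParseAddr("192.168.42.1")

        hosts := 1 << (32 - block.Bits()) // 6 host bits -> 64 addresses
        fmt.Printf("%s holds %d addresses; contains %s: %v\n", block, hosts, addr, block.Contains(addr))
    }
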
Nov 12 20:49:31.364111 containerd[1455]: 2024-11-12 20:49:31.285 [INFO][3971] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.1/26] IPv6=[] ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" HandleID="k8s-pod-network.59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:31.373098 containerd[1455]: 2024-11-12 20:49:31.289 [INFO][3944] cni-plugin/k8s.go 386: Populated endpoint ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Namespace="kube-system" Pod="coredns-76f75df574-92hnz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"", Pod:"coredns-76f75df574-92hnz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali299a36d1be6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:31.373098 containerd[1455]: 2024-11-12 20:49:31.289 [INFO][3944] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.1/32] ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Namespace="kube-system" Pod="coredns-76f75df574-92hnz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:31.373098 containerd[1455]: 2024-11-12 20:49:31.289 [INFO][3944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali299a36d1be6 ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Namespace="kube-system" Pod="coredns-76f75df574-92hnz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:31.373098 containerd[1455]: 2024-11-12 20:49:31.319 [INFO][3944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Namespace="kube-system" Pod="coredns-76f75df574-92hnz"
WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:31.373098 containerd[1455]: 2024-11-12 20:49:31.320 [INFO][3944] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Namespace="kube-system" Pod="coredns-76f75df574-92hnz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a", Pod:"coredns-76f75df574-92hnz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali299a36d1be6", MAC:"32:3c:b6:f1:be:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:31.373098 containerd[1455]: 2024-11-12 20:49:31.346 [INFO][3944] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a" Namespace="kube-system" Pod="coredns-76f75df574-92hnz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:31.508307 containerd[1455]: time="2024-11-12T20:49:31.507664941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:31.508307 containerd[1455]: time="2024-11-12T20:49:31.507952947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:31.510893 containerd[1455]: time="2024-11-12T20:49:31.507986472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:31.526104 containerd[1455]: time="2024-11-12T20:49:31.525926463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.461 [INFO][3993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.461 [INFO][3993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" iface="eth0" netns="/var/run/netns/cni-f0d5278b-bce3-99a8-9807-b94af38ad769" Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.465 [INFO][3993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" iface="eth0" netns="/var/run/netns/cni-f0d5278b-bce3-99a8-9807-b94af38ad769" Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.465 [INFO][3993] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" iface="eth0" netns="/var/run/netns/cni-f0d5278b-bce3-99a8-9807-b94af38ad769" Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.465 [INFO][3993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.465 [INFO][3993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.540 [INFO][4022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" HandleID="k8s-pod-network.120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.544 [INFO][4022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.544 [INFO][4022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.588 [WARNING][4022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" HandleID="k8s-pod-network.120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.588 [INFO][4022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" HandleID="k8s-pod-network.120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.601 [INFO][4022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:31.628458 containerd[1455]: 2024-11-12 20:49:31.615 [INFO][3993] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:31.629632 containerd[1455]: time="2024-11-12T20:49:31.628583697Z" level=info msg="TearDown network for sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\" successfully" Nov 12 20:49:31.629632 containerd[1455]: time="2024-11-12T20:49:31.628626198Z" level=info msg="StopPodSandbox for \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\" returns successfully" Nov 12 20:49:31.632086 containerd[1455]: time="2024-11-12T20:49:31.630205532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5947c459c4-p9zc7,Uid:ab892f57-dd73-4a88-b362-a0a22a8db051,Namespace:calico-system,Attempt:1,}" Nov 12 20:49:31.637352 systemd[1]: run-netns-cni\x2df0d5278b\x2dbce3\x2d99a8\x2d9807\x2db94af38ad769.mount: Deactivated successfully. Nov 12 20:49:31.706590 systemd[1]: Started cri-containerd-59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a.scope - libcontainer container 59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a. Nov 12 20:49:31.899003 kernel: bpftool[4081]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:49:31.920306 containerd[1455]: time="2024-11-12T20:49:31.920231669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-92hnz,Uid:7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c,Namespace:kube-system,Attempt:1,} returns sandbox id \"59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a\"" Nov 12 20:49:31.922400 kubelet[2601]: E1112 20:49:31.921606 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:31.927905 containerd[1455]: time="2024-11-12T20:49:31.926237436Z" level=info msg="CreateContainer within sandbox \"59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:49:31.968240 containerd[1455]: time="2024-11-12T20:49:31.967720682Z" level=info msg="CreateContainer within sandbox \"59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"123a98b069b63e275ba732dcfccf808c316f868a99a71a414e4a527bdeccf80b\"" Nov 12 20:49:31.972475 containerd[1455]: time="2024-11-12T20:49:31.972211657Z" level=info msg="StartContainer for \"123a98b069b63e275ba732dcfccf808c316f868a99a71a414e4a527bdeccf80b\"" Nov 12 20:49:32.069002 systemd[1]: Started cri-containerd-123a98b069b63e275ba732dcfccf808c316f868a99a71a414e4a527bdeccf80b.scope - libcontainer container 123a98b069b63e275ba732dcfccf808c316f868a99a71a414e4a527bdeccf80b. 
Nov 12 20:49:32.160311 containerd[1455]: time="2024-11-12T20:49:32.160097043Z" level=info msg="StartContainer for \"123a98b069b63e275ba732dcfccf808c316f868a99a71a414e4a527bdeccf80b\" returns successfully" Nov 12 20:49:32.201090 containerd[1455]: time="2024-11-12T20:49:32.201026529Z" level=info msg="StopPodSandbox for \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\"" Nov 12 20:49:32.203396 containerd[1455]: time="2024-11-12T20:49:32.203339669Z" level=info msg="StopPodSandbox for \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\"" Nov 12 20:49:32.325217 systemd-networkd[1378]: cali396ba9372da: Link UP Nov 12 20:49:32.329551 systemd-networkd[1378]: cali396ba9372da: Gained carrier Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:31.861 [INFO][4062] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0 calico-kube-controllers-5947c459c4- calico-system ab892f57-dd73-4a88-b362-a0a22a8db051 905 0 2024-11-12 20:49:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5947c459c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.0-d-ef96bd2a01 calico-kube-controllers-5947c459c4-p9zc7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali396ba9372da [] []}} ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Namespace="calico-system" Pod="calico-kube-controllers-5947c459c4-p9zc7" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:31.861 [INFO][4062] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Namespace="calico-system" Pod="calico-kube-controllers-5947c459c4-p9zc7" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.076 [INFO][4096] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" HandleID="k8s-pod-network.60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.093 [INFO][4096] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" HandleID="k8s-pod-network.60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3a10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-d-ef96bd2a01", "pod":"calico-kube-controllers-5947c459c4-p9zc7", "timestamp":"2024-11-12 20:49:32.076672256 +0000 UTC"}, Hostname:"ci-4081.2.0-d-ef96bd2a01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:32.404941 containerd[1455]: 
2024-11-12 20:49:32.094 [INFO][4096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.094 [INFO][4096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.094 [INFO][4096] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-d-ef96bd2a01' Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.104 [INFO][4096] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.122 [INFO][4096] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.147 [INFO][4096] ipam/ipam.go 489: Trying affinity for 192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.181 [INFO][4096] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.188 [INFO][4096] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.188 [INFO][4096] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.192 [INFO][4096] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.208 [INFO][4096] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.276 [INFO][4096] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.2/26] block=192.168.42.0/26 handle="k8s-pod-network.60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.278 [INFO][4096] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.2/26] handle="k8s-pod-network.60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.278 [INFO][4096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:49:32.404941 containerd[1455]: 2024-11-12 20:49:32.279 [INFO][4096] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.2/26] IPv6=[] ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" HandleID="k8s-pod-network.60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:32.407692 containerd[1455]: 2024-11-12 20:49:32.296 [INFO][4062] cni-plugin/k8s.go 386: Populated endpoint ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Namespace="calico-system" Pod="calico-kube-controllers-5947c459c4-p9zc7" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0", GenerateName:"calico-kube-controllers-5947c459c4-", Namespace:"calico-system", SelfLink:"", UID:"ab892f57-dd73-4a88-b362-a0a22a8db051", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5947c459c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"", Pod:"calico-kube-controllers-5947c459c4-p9zc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali396ba9372da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:32.407692 containerd[1455]: 2024-11-12 20:49:32.296 [INFO][4062] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.2/32] ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Namespace="calico-system" Pod="calico-kube-controllers-5947c459c4-p9zc7" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:32.407692 containerd[1455]: 2024-11-12 20:49:32.296 [INFO][4062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali396ba9372da ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Namespace="calico-system" Pod="calico-kube-controllers-5947c459c4-p9zc7" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:32.407692 containerd[1455]: 2024-11-12 20:49:32.334 [INFO][4062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Namespace="calico-system" Pod="calico-kube-controllers-5947c459c4-p9zc7" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:32.407692
containerd[1455]: 2024-11-12 20:49:32.349 [INFO][4062] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Namespace="calico-system" Pod="calico-kube-controllers-5947c459c4-p9zc7" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0", GenerateName:"calico-kube-controllers-5947c459c4-", Namespace:"calico-system", SelfLink:"", UID:"ab892f57-dd73-4a88-b362-a0a22a8db051", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5947c459c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f", Pod:"calico-kube-controllers-5947c459c4-p9zc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali396ba9372da", MAC:"8a:ca:5b:58:92:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:32.407692 containerd[1455]: 2024-11-12 20:49:32.395 [INFO][4062] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f" Namespace="calico-system" Pod="calico-kube-controllers-5947c459c4-p9zc7" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:32.525519 containerd[1455]: time="2024-11-12T20:49:32.525121548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:32.525519 containerd[1455]: time="2024-11-12T20:49:32.525215876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:32.525519 containerd[1455]: time="2024-11-12T20:49:32.525233235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:32.525519 containerd[1455]: time="2024-11-12T20:49:32.525369220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:32.584201 systemd[1]: Started cri-containerd-60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f.scope - libcontainer container 60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f.
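
Note: the same ADD sequence repeats for calico-kube-controllers-5947c459c4-p9zc7, which receives the next free address in the same affine block, 192.168.42.2/32, on host-side veth cali396ba9372da with MAC 8a:ca:5b:58:92:5a, and its sandbox task starts. Both endpoints show Calico's deterministic interface naming: "cali" followed by eleven hex characters. The sketch below reproduces only that shape; the hash input is an assumption for illustration, not Calico's exact formula:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // ifaceName builds a name shaped like cali299a36d1be6 or cali396ba9372da:
    // "cali" plus the first eleven hex characters of a hash of some workload
    // endpoint identifier. The identifier format used here is hypothetical.
    func ifaceName(workloadEndpointID string) string {
        sum := sha1.Sum([]byte(workloadEndpointID))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(ifaceName("ci-4081.2.0-d-ef96bd2a01/k8s/calico-kube-controllers-5947c459c4-p9zc7/eth0"))
    }
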
Nov 12 20:49:32.822735 kubelet[2601]: E1112 20:49:32.820969 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.618 [INFO][4173] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.621 [INFO][4173] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" iface="eth0" netns="/var/run/netns/cni-2ae34160-6919-84c1-5b24-fbcc2c23ef20" Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.621 [INFO][4173] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" iface="eth0" netns="/var/run/netns/cni-2ae34160-6919-84c1-5b24-fbcc2c23ef20" Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.626 [INFO][4173] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" iface="eth0" netns="/var/run/netns/cni-2ae34160-6919-84c1-5b24-fbcc2c23ef20" Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.627 [INFO][4173] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.627 [INFO][4173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.739 [INFO][4230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" HandleID="k8s-pod-network.592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.742 [INFO][4230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.742 [INFO][4230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.832 [WARNING][4230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" HandleID="k8s-pod-network.592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.832 [INFO][4230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" HandleID="k8s-pod-network.592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.886 [INFO][4230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:32.915903 containerd[1455]: 2024-11-12 20:49:32.910 [INFO][4173] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:32.920945 containerd[1455]: time="2024-11-12T20:49:32.918810778Z" level=info msg="TearDown network for sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\" successfully" Nov 12 20:49:32.921132 containerd[1455]: time="2024-11-12T20:49:32.920951600Z" level=info msg="StopPodSandbox for \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\" returns successfully" Nov 12 20:49:32.928956 systemd[1]: run-netns-cni\x2d2ae34160\x2d6919\x2d84c1\x2d5b24\x2dfbcc2c23ef20.mount: Deactivated successfully. Nov 12 20:49:32.949249 containerd[1455]: time="2024-11-12T20:49:32.949143197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8h2nz,Uid:cfc23e74-373d-4561-98be-87343fb2a0fb,Namespace:calico-system,Attempt:1,}" Nov 12 20:49:32.990205 containerd[1455]: time="2024-11-12T20:49:32.990023851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5947c459c4-p9zc7,Uid:ab892f57-dd73-4a88-b362-a0a22a8db051,Namespace:calico-system,Attempt:1,} returns sandbox id \"60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f\"" Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.719 [INFO][4182] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.720 [INFO][4182] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" iface="eth0" netns="/var/run/netns/cni-1615b3e2-40f0-9424-4c70-56d096b3408a" Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.721 [INFO][4182] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" iface="eth0" netns="/var/run/netns/cni-1615b3e2-40f0-9424-4c70-56d096b3408a" Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.723 [INFO][4182] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" iface="eth0" netns="/var/run/netns/cni-1615b3e2-40f0-9424-4c70-56d096b3408a" Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.723 [INFO][4182] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.723 [INFO][4182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.895 [INFO][4243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" HandleID="k8s-pod-network.18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.895 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.895 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.958 [WARNING][4243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" HandleID="k8s-pod-network.18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:32.958 [INFO][4243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" HandleID="k8s-pod-network.18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:33.017 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:33.071914 containerd[1455]: 2024-11-12 20:49:33.056 [INFO][4182] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:33.078531 kubelet[2601]: I1112 20:49:33.076625 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-92hnz" podStartSLOduration=41.076559878 podStartE2EDuration="41.076559878s" podCreationTimestamp="2024-11-12 20:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:32.941591781 +0000 UTC m=+54.042580744" watchObservedRunningTime="2024-11-12 20:49:33.076559878 +0000 UTC m=+54.177548843" Nov 12 20:49:33.081030 containerd[1455]: time="2024-11-12T20:49:33.080955007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:49:33.085151 systemd[1]: run-netns-cni\x2d1615b3e2\x2d40f0\x2d9424\x2d4c70\x2d56d096b3408a.mount: Deactivated successfully. Nov 12 20:49:33.088982 containerd[1455]: time="2024-11-12T20:49:33.085795390Z" level=info msg="TearDown network for sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\" successfully" Nov 12 20:49:33.088982 containerd[1455]: time="2024-11-12T20:49:33.085884875Z" level=info msg="StopPodSandbox for \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\" returns successfully" Nov 12 20:49:33.095714 containerd[1455]: time="2024-11-12T20:49:33.093495793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9cd6c9fd-zzw2n,Uid:90187418-37a7-49a8-afc5-d46af801e2a9,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:49:33.185778 systemd-networkd[1378]: cali299a36d1be6: Gained IPv6LL Nov 12 20:49:33.203588 containerd[1455]: time="2024-11-12T20:49:33.203201068Z" level=info msg="StopPodSandbox for \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\"" Nov 12 20:49:33.398801 systemd[1]: Started sshd@9-143.198.78.43:22-139.178.68.195:39356.service - OpenSSH per-connection server daemon (139.178.68.195:39356). Nov 12 20:49:33.688286 systemd-networkd[1378]: cali396ba9372da: Gained IPv6LL Nov 12 20:49:33.703005 sshd[4299]: Accepted publickey for core from 139.178.68.195 port 39356 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:33.712554 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:33.723689 systemd-logind[1449]: New session 10 of user core. 
Nov 12 20:49:33.730317 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:49:33.966768 kubelet[2601]: E1112 20:49:33.960751 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:33.641 [INFO][4294] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:33.650 [INFO][4294] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" iface="eth0" netns="/var/run/netns/cni-b3e9436b-f2e4-2d17-06b1-7a8bd4303e8e" Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:33.650 [INFO][4294] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" iface="eth0" netns="/var/run/netns/cni-b3e9436b-f2e4-2d17-06b1-7a8bd4303e8e" Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:33.651 [INFO][4294] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" iface="eth0" netns="/var/run/netns/cni-b3e9436b-f2e4-2d17-06b1-7a8bd4303e8e" Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:33.652 [INFO][4294] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:33.652 [INFO][4294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:34.101 [INFO][4312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" HandleID="k8s-pod-network.739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:34.104 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:34.104 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:34.135 [WARNING][4312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" HandleID="k8s-pod-network.739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:34.136 [INFO][4312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" HandleID="k8s-pod-network.739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:34.144 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:49:34.170028 containerd[1455]: 2024-11-12 20:49:34.158 [INFO][4294] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:34.188665 systemd[1]: run-netns-cni\x2db3e9436b\x2df2e4\x2d2d17\x2d06b1\x2d7a8bd4303e8e.mount: Deactivated successfully. Nov 12 20:49:34.193722 containerd[1455]: time="2024-11-12T20:49:34.193220029Z" level=info msg="TearDown network for sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\" successfully" Nov 12 20:49:34.193722 containerd[1455]: time="2024-11-12T20:49:34.193319174Z" level=info msg="StopPodSandbox for \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\" returns successfully" Nov 12 20:49:34.205072 sshd[4299]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:34.212205 containerd[1455]: time="2024-11-12T20:49:34.212027192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9cd6c9fd-lpw7d,Uid:070859a9-c4e7-4be8-9722-144e7da7cafe,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:49:34.240155 containerd[1455]: time="2024-11-12T20:49:34.220249979Z" level=info msg="StopPodSandbox for \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\"" Nov 12 20:49:34.249888 systemd[1]: sshd@9-143.198.78.43:22-139.178.68.195:39356.service: Deactivated successfully. Nov 12 20:49:34.257311 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:49:34.271462 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:49:34.288956 systemd[1]: Started sshd@10-143.198.78.43:22-139.178.68.195:39368.service - OpenSSH per-connection server daemon (139.178.68.195:39368). Nov 12 20:49:34.294798 systemd-logind[1449]: Removed session 10. Nov 12 20:49:34.435656 sshd[4354]: Accepted publickey for core from 139.178.68.195 port 39368 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:34.438130 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:34.470158 systemd-logind[1449]: New session 11 of user core. Nov 12 20:49:34.476352 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 12 20:49:34.555903 systemd-networkd[1378]: calif194a732c12: Link UP Nov 12 20:49:34.567528 systemd-networkd[1378]: calif194a732c12: Gained carrier Nov 12 20:49:34.750426 systemd-networkd[1378]: cali24f0355b23f: Link UP Nov 12 20:49:34.754767 systemd-networkd[1378]: cali24f0355b23f: Gained carrier Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:33.575 [INFO][4267] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0 calico-apiserver-b9cd6c9fd- calico-apiserver 90187418-37a7-49a8-afc5-d46af801e2a9 928 0 2024-11-12 20:49:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b9cd6c9fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-d-ef96bd2a01 calico-apiserver-b9cd6c9fd-zzw2n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif194a732c12 [] []}} ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-zzw2n" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:33.584 [INFO][4267] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-zzw2n" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.081 [INFO][4313] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" HandleID="k8s-pod-network.0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.162 [INFO][4313] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" HandleID="k8s-pod-network.0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-d-ef96bd2a01", "pod":"calico-apiserver-b9cd6c9fd-zzw2n", "timestamp":"2024-11-12 20:49:34.081342698 +0000 UTC"}, Hostname:"ci-4081.2.0-d-ef96bd2a01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.162 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.162 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.162 [INFO][4313] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-d-ef96bd2a01' Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.194 [INFO][4313] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.259 [INFO][4313] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.306 [INFO][4313] ipam/ipam.go 489: Trying affinity for 192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.337 [INFO][4313] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.360 [INFO][4313] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.373 [INFO][4313] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.388 [INFO][4313] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7 Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.436 [INFO][4313] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.466 [INFO][4313] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.3/26] block=192.168.42.0/26 handle="k8s-pod-network.0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.466 [INFO][4313] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.3/26] handle="k8s-pod-network.0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.478 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:49:34.781265 containerd[1455]: 2024-11-12 20:49:34.479 [INFO][4313] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.3/26] IPv6=[] ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" HandleID="k8s-pod-network.0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:34.790761 containerd[1455]: 2024-11-12 20:49:34.505 [INFO][4267] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-zzw2n" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0", GenerateName:"calico-apiserver-b9cd6c9fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"90187418-37a7-49a8-afc5-d46af801e2a9", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9cd6c9fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"", Pod:"calico-apiserver-b9cd6c9fd-zzw2n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif194a732c12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:34.790761 containerd[1455]: 2024-11-12 20:49:34.507 [INFO][4267] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.3/32] ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-zzw2n" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:34.790761 containerd[1455]: 2024-11-12 20:49:34.507 [INFO][4267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif194a732c12 ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-zzw2n" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:34.790761 containerd[1455]: 2024-11-12 20:49:34.583 [INFO][4267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-zzw2n" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:34.790761 containerd[1455]: 2024-11-12 20:49:34.602 [INFO][4267] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-zzw2n" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0", GenerateName:"calico-apiserver-b9cd6c9fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"90187418-37a7-49a8-afc5-d46af801e2a9", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9cd6c9fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7", Pod:"calico-apiserver-b9cd6c9fd-zzw2n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif194a732c12", MAC:"76:d3:c2:98:ef:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:34.790761 containerd[1455]: 2024-11-12 20:49:34.734 [INFO][4267] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-zzw2n" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:33.645 [INFO][4259] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0 csi-node-driver- calico-system cfc23e74-373d-4561-98be-87343fb2a0fb 926 0 2024-11-12 20:49:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.0-d-ef96bd2a01 csi-node-driver-8h2nz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali24f0355b23f [] []}} ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Namespace="calico-system" Pod="csi-node-driver-8h2nz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:33.645 [INFO][4259] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Namespace="calico-system" Pod="csi-node-driver-8h2nz" 
WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.119 [INFO][4321] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" HandleID="k8s-pod-network.ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.196 [INFO][4321] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" HandleID="k8s-pod-network.ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-d-ef96bd2a01", "pod":"csi-node-driver-8h2nz", "timestamp":"2024-11-12 20:49:34.119583202 +0000 UTC"}, Hostname:"ci-4081.2.0-d-ef96bd2a01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.199 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.466 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.466 [INFO][4321] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-d-ef96bd2a01' Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.475 [INFO][4321] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.510 [INFO][4321] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.541 [INFO][4321] ipam/ipam.go 489: Trying affinity for 192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.558 [INFO][4321] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.579 [INFO][4321] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.585 [INFO][4321] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.600 [INFO][4321] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2 Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.635 [INFO][4321] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 
20:49:34.692 [INFO][4321] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.4/26] block=192.168.42.0/26 handle="k8s-pod-network.ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.692 [INFO][4321] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.4/26] handle="k8s-pod-network.ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.694 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:34.837212 containerd[1455]: 2024-11-12 20:49:34.700 [INFO][4321] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.4/26] IPv6=[] ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" HandleID="k8s-pod-network.ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:34.857790 containerd[1455]: 2024-11-12 20:49:34.743 [INFO][4259] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Namespace="calico-system" Pod="csi-node-driver-8h2nz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cfc23e74-373d-4561-98be-87343fb2a0fb", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"", Pod:"csi-node-driver-8h2nz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24f0355b23f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:34.857790 containerd[1455]: 2024-11-12 20:49:34.745 [INFO][4259] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.4/32] ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Namespace="calico-system" Pod="csi-node-driver-8h2nz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:34.857790 containerd[1455]: 2024-11-12 20:49:34.746 [INFO][4259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24f0355b23f ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Namespace="calico-system" Pod="csi-node-driver-8h2nz" 
WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:34.857790 containerd[1455]: 2024-11-12 20:49:34.750 [INFO][4259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Namespace="calico-system" Pod="csi-node-driver-8h2nz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:34.857790 containerd[1455]: 2024-11-12 20:49:34.768 [INFO][4259] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Namespace="calico-system" Pod="csi-node-driver-8h2nz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cfc23e74-373d-4561-98be-87343fb2a0fb", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2", Pod:"csi-node-driver-8h2nz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24f0355b23f", MAC:"f2:d1:91:04:e6:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:34.857790 containerd[1455]: 2024-11-12 20:49:34.831 [INFO][4259] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2" Namespace="calico-system" Pod="csi-node-driver-8h2nz" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:34.974251 kubelet[2601]: E1112 20:49:34.973932 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:35.047375 sshd[4354]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:35.073882 containerd[1455]: time="2024-11-12T20:49:35.044472907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:35.073882 containerd[1455]: time="2024-11-12T20:49:35.047337723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:35.073882 containerd[1455]: time="2024-11-12T20:49:35.047361382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:35.073882 containerd[1455]: time="2024-11-12T20:49:35.047539350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:35.078777 systemd[1]: sshd@10-143.198.78.43:22-139.178.68.195:39368.service: Deactivated successfully. Nov 12 20:49:35.085564 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:49:35.098359 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:49:35.111503 systemd[1]: Started sshd@11-143.198.78.43:22-139.178.68.195:39372.service - OpenSSH per-connection server daemon (139.178.68.195:39372). Nov 12 20:49:35.129303 systemd-logind[1449]: Removed session 11. Nov 12 20:49:35.230088 sshd[4460]: Accepted publickey for core from 139.178.68.195 port 39372 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:35.224759 sshd[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:35.273634 systemd-logind[1449]: New session 12 of user core. Nov 12 20:49:35.295614 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:49:35.343909 systemd[1]: run-containerd-runc-k8s.io-0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7-runc.Av4aaP.mount: Deactivated successfully. Nov 12 20:49:35.404692 systemd[1]: Started cri-containerd-0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7.scope - libcontainer container 0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7. Nov 12 20:49:35.458908 containerd[1455]: time="2024-11-12T20:49:35.422400785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:35.458908 containerd[1455]: time="2024-11-12T20:49:35.422678751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:35.458908 containerd[1455]: time="2024-11-12T20:49:35.422713551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:35.458908 containerd[1455]: time="2024-11-12T20:49:35.422916304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:35.651128 systemd[1]: run-containerd-runc-k8s.io-ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2-runc.99VvgA.mount: Deactivated successfully. Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:34.967 [INFO][4360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:34.967 [INFO][4360] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" iface="eth0" netns="/var/run/netns/cni-5d14f62e-ef0a-f815-0749-dbc0244bff13" Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:34.968 [INFO][4360] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" iface="eth0" netns="/var/run/netns/cni-5d14f62e-ef0a-f815-0749-dbc0244bff13" Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:34.973 [INFO][4360] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" iface="eth0" netns="/var/run/netns/cni-5d14f62e-ef0a-f815-0749-dbc0244bff13" Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:34.979 [INFO][4360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:34.979 [INFO][4360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:35.443 [INFO][4425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" HandleID="k8s-pod-network.38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:35.443 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:35.443 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:35.557 [WARNING][4425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" HandleID="k8s-pod-network.38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:35.557 [INFO][4425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" HandleID="k8s-pod-network.38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:35.609 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:35.658893 containerd[1455]: 2024-11-12 20:49:35.617 [INFO][4360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:35.667876 containerd[1455]: time="2024-11-12T20:49:35.660625105Z" level=info msg="TearDown network for sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\" successfully" Nov 12 20:49:35.668426 systemd[1]: Started cri-containerd-ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2.scope - libcontainer container ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2. 
Nov 12 20:49:35.670111 containerd[1455]: time="2024-11-12T20:49:35.668971040Z" level=info msg="StopPodSandbox for \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\" returns successfully" Nov 12 20:49:35.671204 kubelet[2601]: E1112 20:49:35.671157 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:35.677154 containerd[1455]: time="2024-11-12T20:49:35.677054413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s65ff,Uid:4008f795-fc1f-4445-8e95-1bef6a854734,Namespace:kube-system,Attempt:1,}" Nov 12 20:49:35.868808 sshd[4460]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:35.896031 systemd[1]: sshd@11-143.198.78.43:22-139.178.68.195:39372.service: Deactivated successfully. Nov 12 20:49:35.906362 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:49:35.923324 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:49:35.947275 systemd-logind[1449]: Removed session 12. Nov 12 20:49:36.029468 systemd-networkd[1378]: vxlan.calico: Link UP Nov 12 20:49:36.029486 systemd-networkd[1378]: vxlan.calico: Gained carrier Nov 12 20:49:36.049942 containerd[1455]: time="2024-11-12T20:49:36.045985312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8h2nz,Uid:cfc23e74-373d-4561-98be-87343fb2a0fb,Namespace:calico-system,Attempt:1,} returns sandbox id \"ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2\"" Nov 12 20:49:36.108414 systemd-networkd[1378]: calieb7e7df1e3b: Link UP Nov 12 20:49:36.110376 systemd-networkd[1378]: calieb7e7df1e3b: Gained carrier Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:34.926 [INFO][4361] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0 calico-apiserver-b9cd6c9fd- calico-apiserver 070859a9-c4e7-4be8-9722-144e7da7cafe 941 0 2024-11-12 20:49:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b9cd6c9fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-d-ef96bd2a01 calico-apiserver-b9cd6c9fd-lpw7d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb7e7df1e3b [] []}} ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-lpw7d" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:34.931 [INFO][4361] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-lpw7d" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.573 [INFO][4441] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" HandleID="k8s-pod-network.9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" 
Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.621 [INFO][4441] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" HandleID="k8s-pod-network.9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-d-ef96bd2a01", "pod":"calico-apiserver-b9cd6c9fd-lpw7d", "timestamp":"2024-11-12 20:49:35.57298977 +0000 UTC"}, Hostname:"ci-4081.2.0-d-ef96bd2a01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.622 [INFO][4441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.623 [INFO][4441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.623 [INFO][4441] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-d-ef96bd2a01' Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.647 [INFO][4441] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.680 [INFO][4441] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.777 [INFO][4441] ipam/ipam.go 489: Trying affinity for 192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.802 [INFO][4441] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.817 [INFO][4441] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.817 [INFO][4441] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.854 [INFO][4441] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307 Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:35.931 [INFO][4441] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:36.001 [INFO][4441] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.5/26] block=192.168.42.0/26 handle="k8s-pod-network.9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:36.001 [INFO][4441] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.42.5/26] handle="k8s-pod-network.9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:36.001 [INFO][4441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:36.180986 containerd[1455]: 2024-11-12 20:49:36.001 [INFO][4441] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.5/26] IPv6=[] ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" HandleID="k8s-pod-network.9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:36.186628 containerd[1455]: 2024-11-12 20:49:36.073 [INFO][4361] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-lpw7d" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0", GenerateName:"calico-apiserver-b9cd6c9fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"070859a9-c4e7-4be8-9722-144e7da7cafe", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9cd6c9fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"", Pod:"calico-apiserver-b9cd6c9fd-lpw7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb7e7df1e3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.186628 containerd[1455]: 2024-11-12 20:49:36.073 [INFO][4361] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.5/32] ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-lpw7d" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:36.186628 containerd[1455]: 2024-11-12 20:49:36.099 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb7e7df1e3b ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-lpw7d" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:36.186628 containerd[1455]: 2024-11-12 20:49:36.109 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" 
Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-lpw7d" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:36.186628 containerd[1455]: 2024-11-12 20:49:36.110 [INFO][4361] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-lpw7d" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0", GenerateName:"calico-apiserver-b9cd6c9fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"070859a9-c4e7-4be8-9722-144e7da7cafe", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9cd6c9fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307", Pod:"calico-apiserver-b9cd6c9fd-lpw7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb7e7df1e3b", MAC:"76:a5:a7:cf:45:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.186628 containerd[1455]: 2024-11-12 20:49:36.155 [INFO][4361] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307" Namespace="calico-apiserver" Pod="calico-apiserver-b9cd6c9fd-lpw7d" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:36.184293 systemd-networkd[1378]: calif194a732c12: Gained IPv6LL Nov 12 20:49:36.361542 systemd[1]: run-netns-cni\x2d5d14f62e\x2def0a\x2df815\x2d0749\x2ddbc0244bff13.mount: Deactivated successfully. Nov 12 20:49:36.376650 systemd-networkd[1378]: cali24f0355b23f: Gained IPv6LL Nov 12 20:49:36.415261 containerd[1455]: time="2024-11-12T20:49:36.414624335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9cd6c9fd-zzw2n,Uid:90187418-37a7-49a8-afc5-d46af801e2a9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7\"" Nov 12 20:49:36.479677 containerd[1455]: time="2024-11-12T20:49:36.479526674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:36.484228 containerd[1455]: time="2024-11-12T20:49:36.483699040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:36.484228 containerd[1455]: time="2024-11-12T20:49:36.483787707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:36.484228 containerd[1455]: time="2024-11-12T20:49:36.484060256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:36.626679 systemd[1]: Started cri-containerd-9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307.scope - libcontainer container 9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307. Nov 12 20:49:36.833519 systemd-networkd[1378]: cali8c36688879b: Link UP Nov 12 20:49:36.833957 systemd-networkd[1378]: cali8c36688879b: Gained carrier Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.240 [INFO][4510] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0 coredns-76f75df574- kube-system 4008f795-fc1f-4445-8e95-1bef6a854734 954 0 2024-11-12 20:48:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-d-ef96bd2a01 coredns-76f75df574-s65ff eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8c36688879b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Namespace="kube-system" Pod="coredns-76f75df574-s65ff" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.240 [INFO][4510] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Namespace="kube-system" Pod="coredns-76f75df574-s65ff" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.642 [INFO][4565] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" HandleID="k8s-pod-network.ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.684 [INFO][4565] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" HandleID="k8s-pod-network.ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b40d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-d-ef96bd2a01", "pod":"coredns-76f75df574-s65ff", "timestamp":"2024-11-12 20:49:36.64226763 +0000 UTC"}, Hostname:"ci-4081.2.0-d-ef96bd2a01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.686 [INFO][4565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.686 [INFO][4565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.686 [INFO][4565] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-d-ef96bd2a01' Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.695 [INFO][4565] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.715 [INFO][4565] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.744 [INFO][4565] ipam/ipam.go 489: Trying affinity for 192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.751 [INFO][4565] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.759 [INFO][4565] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.759 [INFO][4565] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.767 [INFO][4565] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.793 [INFO][4565] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.808 [INFO][4565] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.6/26] block=192.168.42.0/26 handle="k8s-pod-network.ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.808 [INFO][4565] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.6/26] handle="k8s-pod-network.ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" host="ci-4081.2.0-d-ef96bd2a01" Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.808 [INFO][4565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:49:36.883433 containerd[1455]: 2024-11-12 20:49:36.808 [INFO][4565] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.6/26] IPv6=[] ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" HandleID="k8s-pod-network.ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:36.889680 containerd[1455]: 2024-11-12 20:49:36.820 [INFO][4510] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Namespace="kube-system" Pod="coredns-76f75df574-s65ff" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4008f795-fc1f-4445-8e95-1bef6a854734", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"", Pod:"coredns-76f75df574-s65ff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c36688879b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.889680 containerd[1455]: 2024-11-12 20:49:36.821 [INFO][4510] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.6/32] ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Namespace="kube-system" Pod="coredns-76f75df574-s65ff" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:36.889680 containerd[1455]: 2024-11-12 20:49:36.821 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c36688879b ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Namespace="kube-system" Pod="coredns-76f75df574-s65ff" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:36.889680 containerd[1455]: 2024-11-12 20:49:36.835 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Namespace="kube-system" Pod="coredns-76f75df574-s65ff" 
WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:36.889680 containerd[1455]: 2024-11-12 20:49:36.836 [INFO][4510] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Namespace="kube-system" Pod="coredns-76f75df574-s65ff" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4008f795-fc1f-4445-8e95-1bef6a854734", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f", Pod:"coredns-76f75df574-s65ff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c36688879b", MAC:"b2:6c:4c:16:18:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.889680 containerd[1455]: 2024-11-12 20:49:36.869 [INFO][4510] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f" Namespace="kube-system" Pod="coredns-76f75df574-s65ff" WorkloadEndpoint="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:37.084881 containerd[1455]: time="2024-11-12T20:49:37.083682750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:37.092075 containerd[1455]: time="2024-11-12T20:49:37.084459117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:37.092075 containerd[1455]: time="2024-11-12T20:49:37.084762482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:37.110271 containerd[1455]: time="2024-11-12T20:49:37.108297334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:37.164785 containerd[1455]: time="2024-11-12T20:49:37.164725367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9cd6c9fd-lpw7d,Uid:070859a9-c4e7-4be8-9722-144e7da7cafe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307\"" Nov 12 20:49:37.306710 systemd[1]: Started cri-containerd-ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f.scope - libcontainer container ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f. Nov 12 20:49:37.400321 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Nov 12 20:49:37.568966 containerd[1455]: time="2024-11-12T20:49:37.568227517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s65ff,Uid:4008f795-fc1f-4445-8e95-1bef6a854734,Namespace:kube-system,Attempt:1,} returns sandbox id \"ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f\"" Nov 12 20:49:37.579694 kubelet[2601]: E1112 20:49:37.579190 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:37.592860 containerd[1455]: time="2024-11-12T20:49:37.592376893Z" level=info msg="CreateContainer within sandbox \"ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:49:37.592920 systemd-networkd[1378]: calieb7e7df1e3b: Gained IPv6LL Nov 12 20:49:37.649463 containerd[1455]: time="2024-11-12T20:49:37.649218682Z" level=info msg="CreateContainer within sandbox \"ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ec40e87e35930d6fd89cb7779a318acf011c9f2b0d9f2088d42ae4cfcb23b69\"" Nov 12 20:49:37.670661 containerd[1455]: time="2024-11-12T20:49:37.670444810Z" level=info msg="StartContainer for \"1ec40e87e35930d6fd89cb7779a318acf011c9f2b0d9f2088d42ae4cfcb23b69\"" Nov 12 20:49:37.815270 systemd[1]: Started cri-containerd-1ec40e87e35930d6fd89cb7779a318acf011c9f2b0d9f2088d42ae4cfcb23b69.scope - libcontainer container 1ec40e87e35930d6fd89cb7779a318acf011c9f2b0d9f2088d42ae4cfcb23b69. 
Nov 12 20:49:37.988936 containerd[1455]: time="2024-11-12T20:49:37.988407813Z" level=info msg="StartContainer for \"1ec40e87e35930d6fd89cb7779a318acf011c9f2b0d9f2088d42ae4cfcb23b69\" returns successfully" Nov 12 20:49:38.056967 kubelet[2601]: E1112 20:49:38.056713 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:38.172953 kubelet[2601]: I1112 20:49:38.158569 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-s65ff" podStartSLOduration=46.158492852 podStartE2EDuration="46.158492852s" podCreationTimestamp="2024-11-12 20:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:38.155957527 +0000 UTC m=+59.256946514" watchObservedRunningTime="2024-11-12 20:49:38.158492852 +0000 UTC m=+59.259481827" Nov 12 20:49:38.486760 systemd-networkd[1378]: cali8c36688879b: Gained IPv6LL Nov 12 20:49:39.092689 kubelet[2601]: E1112 20:49:39.092554 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:39.288172 containerd[1455]: time="2024-11-12T20:49:39.285834133Z" level=info msg="StopPodSandbox for \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\"" Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.624 [WARNING][4768] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0", GenerateName:"calico-apiserver-b9cd6c9fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"90187418-37a7-49a8-afc5-d46af801e2a9", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9cd6c9fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7", Pod:"calico-apiserver-b9cd6c9fd-zzw2n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif194a732c12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.626 [INFO][4768] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 
20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.626 [INFO][4768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" iface="eth0" netns="" Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.626 [INFO][4768] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.626 [INFO][4768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.704 [INFO][4777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" HandleID="k8s-pod-network.18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.706 [INFO][4777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.706 [INFO][4777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.731 [WARNING][4777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" HandleID="k8s-pod-network.18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.732 [INFO][4777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" HandleID="k8s-pod-network.18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.738 [INFO][4777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:39.762752 containerd[1455]: 2024-11-12 20:49:39.745 [INFO][4768] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:39.762752 containerd[1455]: time="2024-11-12T20:49:39.762215062Z" level=info msg="TearDown network for sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\" successfully" Nov 12 20:49:39.762752 containerd[1455]: time="2024-11-12T20:49:39.762255944Z" level=info msg="StopPodSandbox for \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\" returns successfully" Nov 12 20:49:39.770167 containerd[1455]: time="2024-11-12T20:49:39.768815279Z" level=info msg="RemovePodSandbox for \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\"" Nov 12 20:49:39.770167 containerd[1455]: time="2024-11-12T20:49:39.768906317Z" level=info msg="Forcibly stopping sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\"" Nov 12 20:49:40.088571 kubelet[2601]: E1112 20:49:40.087050 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:39.941 [WARNING][4799] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0", GenerateName:"calico-apiserver-b9cd6c9fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"90187418-37a7-49a8-afc5-d46af801e2a9", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9cd6c9fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7", Pod:"calico-apiserver-b9cd6c9fd-zzw2n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif194a732c12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:39.942 [INFO][4799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:39.942 [INFO][4799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" iface="eth0" netns="" Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:39.942 [INFO][4799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:39.942 [INFO][4799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:40.039 [INFO][4806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" HandleID="k8s-pod-network.18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:40.041 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:40.041 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:40.073 [WARNING][4806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" HandleID="k8s-pod-network.18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:40.073 [INFO][4806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" HandleID="k8s-pod-network.18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--zzw2n-eth0" Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:40.083 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:40.114947 containerd[1455]: 2024-11-12 20:49:40.097 [INFO][4799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b" Nov 12 20:49:40.114947 containerd[1455]: time="2024-11-12T20:49:40.114787910Z" level=info msg="TearDown network for sandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\" successfully" Nov 12 20:49:40.143070 containerd[1455]: time="2024-11-12T20:49:40.142874300Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:49:40.143248 containerd[1455]: time="2024-11-12T20:49:40.143083802Z" level=info msg="RemovePodSandbox \"18565c78a00ecc5cbde4b1c4450ca6e4e958d7c57a567c8c0e782b7089ee879b\" returns successfully" Nov 12 20:49:40.148809 containerd[1455]: time="2024-11-12T20:49:40.148744210Z" level=info msg="StopPodSandbox for \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\"" Nov 12 20:49:40.162912 containerd[1455]: time="2024-11-12T20:49:40.162279773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:49:40.163708 containerd[1455]: time="2024-11-12T20:49:40.163642084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:40.166900 containerd[1455]: time="2024-11-12T20:49:40.165802583Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:40.168197 containerd[1455]: time="2024-11-12T20:49:40.168135636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:40.169344 containerd[1455]: time="2024-11-12T20:49:40.169279408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 7.088184684s" Nov 12 20:49:40.169675 containerd[1455]: time="2024-11-12T20:49:40.169619128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:49:40.171892 containerd[1455]: time="2024-11-12T20:49:40.171801334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:49:40.219684 containerd[1455]: time="2024-11-12T20:49:40.219590734Z" level=info msg="CreateContainer within sandbox \"60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:49:40.263414 containerd[1455]: time="2024-11-12T20:49:40.263354904Z" level=info msg="CreateContainer within sandbox \"60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6e142e4ebc0fc538a3922cfd3fb3b873adf0a8530dd226371efd36354853c2d1\"" Nov 12 20:49:40.264794 containerd[1455]: time="2024-11-12T20:49:40.264685461Z" level=info msg="StartContainer for \"6e142e4ebc0fc538a3922cfd3fb3b873adf0a8530dd226371efd36354853c2d1\"" Nov 12 20:49:40.395679 systemd[1]: Started cri-containerd-6e142e4ebc0fc538a3922cfd3fb3b873adf0a8530dd226371efd36354853c2d1.scope - libcontainer container 6e142e4ebc0fc538a3922cfd3fb3b873adf0a8530dd226371efd36354853c2d1. Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.403 [WARNING][4826] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cfc23e74-373d-4561-98be-87343fb2a0fb", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2", Pod:"csi-node-driver-8h2nz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24f0355b23f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.404 [INFO][4826] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.404 [INFO][4826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" iface="eth0" netns="" Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.404 [INFO][4826] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.404 [INFO][4826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.480 [INFO][4851] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" HandleID="k8s-pod-network.592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.480 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.480 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.495 [WARNING][4851] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" HandleID="k8s-pod-network.592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.496 [INFO][4851] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" HandleID="k8s-pod-network.592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.504 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:40.511149 containerd[1455]: 2024-11-12 20:49:40.507 [INFO][4826] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:40.514595 containerd[1455]: time="2024-11-12T20:49:40.511180562Z" level=info msg="TearDown network for sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\" successfully" Nov 12 20:49:40.514595 containerd[1455]: time="2024-11-12T20:49:40.511235213Z" level=info msg="StopPodSandbox for \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\" returns successfully" Nov 12 20:49:40.515111 containerd[1455]: time="2024-11-12T20:49:40.514700984Z" level=info msg="RemovePodSandbox for \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\"" Nov 12 20:49:40.515111 containerd[1455]: time="2024-11-12T20:49:40.514756295Z" level=info msg="Forcibly stopping sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\"" Nov 12 20:49:40.602579 containerd[1455]: time="2024-11-12T20:49:40.602371866Z" level=info msg="StartContainer for \"6e142e4ebc0fc538a3922cfd3fb3b873adf0a8530dd226371efd36354853c2d1\" returns successfully" Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.633 [WARNING][4875] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cfc23e74-373d-4561-98be-87343fb2a0fb", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2", Pod:"csi-node-driver-8h2nz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24f0355b23f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.633 [INFO][4875] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.634 [INFO][4875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" iface="eth0" netns="" Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.634 [INFO][4875] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.634 [INFO][4875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.719 [INFO][4892] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" HandleID="k8s-pod-network.592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.719 [INFO][4892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.719 [INFO][4892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.743 [WARNING][4892] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" HandleID="k8s-pod-network.592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.743 [INFO][4892] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" HandleID="k8s-pod-network.592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-csi--node--driver--8h2nz-eth0" Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.754 [INFO][4892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:40.767071 containerd[1455]: 2024-11-12 20:49:40.761 [INFO][4875] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034" Nov 12 20:49:40.767865 containerd[1455]: time="2024-11-12T20:49:40.767118417Z" level=info msg="TearDown network for sandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\" successfully" Nov 12 20:49:40.787004 containerd[1455]: time="2024-11-12T20:49:40.786910347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:49:40.787004 containerd[1455]: time="2024-11-12T20:49:40.787000654Z" level=info msg="RemovePodSandbox \"592b0acc3c064d13106ce0d9dca50c26373a0f3a83669751102b343a04ffc034\" returns successfully" Nov 12 20:49:40.821693 containerd[1455]: time="2024-11-12T20:49:40.789185104Z" level=info msg="StopPodSandbox for \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\"" Nov 12 20:49:40.889503 systemd[1]: Started sshd@12-143.198.78.43:22-139.178.68.195:51602.service - OpenSSH per-connection server daemon (139.178.68.195:51602). Nov 12 20:49:41.121737 sshd[4915]: Accepted publickey for core from 139.178.68.195 port 51602 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:41.138647 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:41.190559 systemd[1]: run-containerd-runc-k8s.io-6e142e4ebc0fc538a3922cfd3fb3b873adf0a8530dd226371efd36354853c2d1-runc.qwiebq.mount: Deactivated successfully. Nov 12 20:49:41.206104 systemd-logind[1449]: New session 13 of user core. Nov 12 20:49:41.208177 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.042 [WARNING][4910] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4008f795-fc1f-4445-8e95-1bef6a854734", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f", Pod:"coredns-76f75df574-s65ff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c36688879b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.042 [INFO][4910] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.042 [INFO][4910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" iface="eth0" netns="" Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.043 [INFO][4910] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.043 [INFO][4910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.335 [INFO][4919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" HandleID="k8s-pod-network.38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.335 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.335 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.409 [WARNING][4919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" HandleID="k8s-pod-network.38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.410 [INFO][4919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" HandleID="k8s-pod-network.38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.436 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:41.457609 containerd[1455]: 2024-11-12 20:49:41.450 [INFO][4910] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:41.458601 containerd[1455]: time="2024-11-12T20:49:41.457659839Z" level=info msg="TearDown network for sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\" successfully" Nov 12 20:49:41.458601 containerd[1455]: time="2024-11-12T20:49:41.457711970Z" level=info msg="StopPodSandbox for \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\" returns successfully" Nov 12 20:49:41.458601 containerd[1455]: time="2024-11-12T20:49:41.458474015Z" level=info msg="RemovePodSandbox for \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\"" Nov 12 20:49:41.458601 containerd[1455]: time="2024-11-12T20:49:41.458555519Z" level=info msg="Forcibly stopping sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\"" Nov 12 20:49:41.817833 sshd[4915]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:41.844476 systemd[1]: sshd@12-143.198.78.43:22-139.178.68.195:51602.service: Deactivated successfully. Nov 12 20:49:41.853378 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:49:41.857385 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:49:41.867795 systemd-logind[1449]: Removed session 13. Nov 12 20:49:41.897343 kubelet[2601]: I1112 20:49:41.896870 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5947c459c4-p9zc7" podStartSLOduration=32.801751282 podStartE2EDuration="39.896726325s" podCreationTimestamp="2024-11-12 20:49:02 +0000 UTC" firstStartedPulling="2024-11-12 20:49:33.076008632 +0000 UTC m=+54.176997583" lastFinishedPulling="2024-11-12 20:49:40.170983671 +0000 UTC m=+61.271972626" observedRunningTime="2024-11-12 20:49:41.326382472 +0000 UTC m=+62.427371433" watchObservedRunningTime="2024-11-12 20:49:41.896726325 +0000 UTC m=+62.997715291" Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.664 [WARNING][4960] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4008f795-fc1f-4445-8e95-1bef6a854734", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"ab2150942c1d8905ad0f82c0a04529ab086566a546d2eb7c84416a1e67c1010f", Pod:"coredns-76f75df574-s65ff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c36688879b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.667 [INFO][4960] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.667 [INFO][4960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" iface="eth0" netns="" Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.668 [INFO][4960] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.668 [INFO][4960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.824 [INFO][4966] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" HandleID="k8s-pod-network.38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.824 [INFO][4966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.824 [INFO][4966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.871 [WARNING][4966] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" HandleID="k8s-pod-network.38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.875 [INFO][4966] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" HandleID="k8s-pod-network.38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--s65ff-eth0" Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.885 [INFO][4966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:41.905211 containerd[1455]: 2024-11-12 20:49:41.891 [INFO][4960] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096" Nov 12 20:49:41.905211 containerd[1455]: time="2024-11-12T20:49:41.903433171Z" level=info msg="TearDown network for sandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\" successfully" Nov 12 20:49:41.917616 containerd[1455]: time="2024-11-12T20:49:41.916270257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:49:41.917616 containerd[1455]: time="2024-11-12T20:49:41.916402622Z" level=info msg="RemovePodSandbox \"38b29a3827238bbaa9f16d71d4c75838da3e90ec9d5805a86df6dd2b265b0096\" returns successfully" Nov 12 20:49:41.919485 containerd[1455]: time="2024-11-12T20:49:41.918974199Z" level=info msg="StopPodSandbox for \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\"" Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.023 [WARNING][4993] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0", GenerateName:"calico-kube-controllers-5947c459c4-", Namespace:"calico-system", SelfLink:"", UID:"ab892f57-dd73-4a88-b362-a0a22a8db051", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5947c459c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f", Pod:"calico-kube-controllers-5947c459c4-p9zc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali396ba9372da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.024 [INFO][4993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.024 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" iface="eth0" netns="" Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.024 [INFO][4993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.024 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.074 [INFO][4999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" HandleID="k8s-pod-network.120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.075 [INFO][4999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.075 [INFO][4999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.090 [WARNING][4999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" HandleID="k8s-pod-network.120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.090 [INFO][4999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" HandleID="k8s-pod-network.120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.097 [INFO][4999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:42.103353 containerd[1455]: 2024-11-12 20:49:42.099 [INFO][4993] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:42.103353 containerd[1455]: time="2024-11-12T20:49:42.102303941Z" level=info msg="TearDown network for sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\" successfully" Nov 12 20:49:42.103353 containerd[1455]: time="2024-11-12T20:49:42.102343802Z" level=info msg="StopPodSandbox for \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\" returns successfully" Nov 12 20:49:42.108768 containerd[1455]: time="2024-11-12T20:49:42.104662243Z" level=info msg="RemovePodSandbox for \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\"" Nov 12 20:49:42.108768 containerd[1455]: time="2024-11-12T20:49:42.105172519Z" level=info msg="Forcibly stopping sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\"" Nov 12 20:49:42.268597 containerd[1455]: time="2024-11-12T20:49:42.266978876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:42.274703 containerd[1455]: time="2024-11-12T20:49:42.274617271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:49:42.277088 containerd[1455]: time="2024-11-12T20:49:42.276921823Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:42.284387 containerd[1455]: time="2024-11-12T20:49:42.284325467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:42.285912 containerd[1455]: time="2024-11-12T20:49:42.285796330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 2.113901274s" Nov 12 20:49:42.286323 containerd[1455]: time="2024-11-12T20:49:42.286126139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:49:42.289639 containerd[1455]: time="2024-11-12T20:49:42.289218063Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:49:42.297729 containerd[1455]: time="2024-11-12T20:49:42.297100714Z" level=info msg="CreateContainer within sandbox \"ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:49:42.336698 containerd[1455]: time="2024-11-12T20:49:42.336419877Z" level=info msg="CreateContainer within sandbox \"ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5ca874ca44939891b88f20516c79a2d2698e34967205746be3f779f45e63b3c1\"" Nov 12 20:49:42.339108 containerd[1455]: time="2024-11-12T20:49:42.338791409Z" level=info msg="StartContainer for \"5ca874ca44939891b88f20516c79a2d2698e34967205746be3f779f45e63b3c1\"" Nov 12 20:49:42.417366 systemd[1]: run-containerd-runc-k8s.io-5ca874ca44939891b88f20516c79a2d2698e34967205746be3f779f45e63b3c1-runc.V9vb63.mount: Deactivated successfully. Nov 12 20:49:42.433904 systemd[1]: Started cri-containerd-5ca874ca44939891b88f20516c79a2d2698e34967205746be3f779f45e63b3c1.scope - libcontainer container 5ca874ca44939891b88f20516c79a2d2698e34967205746be3f779f45e63b3c1. Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.308 [WARNING][5018] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0", GenerateName:"calico-kube-controllers-5947c459c4-", Namespace:"calico-system", SelfLink:"", UID:"ab892f57-dd73-4a88-b362-a0a22a8db051", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5947c459c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"60ba0a3fd62ad634900f09f6ca7528ac51f75bb6771492fc5215cf82676c7d5f", Pod:"calico-kube-controllers-5947c459c4-p9zc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali396ba9372da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.309 [INFO][5018] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.309 [INFO][5018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" iface="eth0" netns="" Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.309 [INFO][5018] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.309 [INFO][5018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.431 [INFO][5024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" HandleID="k8s-pod-network.120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.432 [INFO][5024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.432 [INFO][5024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.445 [WARNING][5024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" HandleID="k8s-pod-network.120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.445 [INFO][5024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" HandleID="k8s-pod-network.120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--kube--controllers--5947c459c4--p9zc7-eth0" Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.453 [INFO][5024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:42.461833 containerd[1455]: 2024-11-12 20:49:42.456 [INFO][5018] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a" Nov 12 20:49:42.464565 containerd[1455]: time="2024-11-12T20:49:42.462999728Z" level=info msg="TearDown network for sandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\" successfully" Nov 12 20:49:42.472430 containerd[1455]: time="2024-11-12T20:49:42.472313649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:49:42.473403 containerd[1455]: time="2024-11-12T20:49:42.472499387Z" level=info msg="RemovePodSandbox \"120481e914fe8de9fb5ec899eda9552ba5835e1868e3a84bd7708e7d5fd0af0a\" returns successfully" Nov 12 20:49:42.474271 containerd[1455]: time="2024-11-12T20:49:42.474220133Z" level=info msg="StopPodSandbox for \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\"" Nov 12 20:49:42.519193 containerd[1455]: time="2024-11-12T20:49:42.519041010Z" level=info msg="StartContainer for \"5ca874ca44939891b88f20516c79a2d2698e34967205746be3f779f45e63b3c1\" returns successfully" Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.566 [WARNING][5067] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0", GenerateName:"calico-apiserver-b9cd6c9fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"070859a9-c4e7-4be8-9722-144e7da7cafe", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9cd6c9fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307", Pod:"calico-apiserver-b9cd6c9fd-lpw7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb7e7df1e3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.566 [INFO][5067] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.566 [INFO][5067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" iface="eth0" netns="" Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.566 [INFO][5067] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.566 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.599 [INFO][5083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" HandleID="k8s-pod-network.739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.600 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.600 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.609 [WARNING][5083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" HandleID="k8s-pod-network.739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.609 [INFO][5083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" HandleID="k8s-pod-network.739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.614 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:42.623613 containerd[1455]: 2024-11-12 20:49:42.617 [INFO][5067] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:42.626946 containerd[1455]: time="2024-11-12T20:49:42.623636471Z" level=info msg="TearDown network for sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\" successfully" Nov 12 20:49:42.626946 containerd[1455]: time="2024-11-12T20:49:42.623677153Z" level=info msg="StopPodSandbox for \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\" returns successfully" Nov 12 20:49:42.626946 containerd[1455]: time="2024-11-12T20:49:42.624575876Z" level=info msg="RemovePodSandbox for \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\"" Nov 12 20:49:42.626946 containerd[1455]: time="2024-11-12T20:49:42.624626635Z" level=info msg="Forcibly stopping sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\"" Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.708 [WARNING][5101] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0", GenerateName:"calico-apiserver-b9cd6c9fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"070859a9-c4e7-4be8-9722-144e7da7cafe", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9cd6c9fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307", Pod:"calico-apiserver-b9cd6c9fd-lpw7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb7e7df1e3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.708 [INFO][5101] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.708 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" iface="eth0" netns="" Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.708 [INFO][5101] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.708 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.758 [INFO][5107] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" HandleID="k8s-pod-network.739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.758 [INFO][5107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.758 [INFO][5107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.775 [WARNING][5107] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" HandleID="k8s-pod-network.739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.775 [INFO][5107] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" HandleID="k8s-pod-network.739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-calico--apiserver--b9cd6c9fd--lpw7d-eth0" Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.779 [INFO][5107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:42.784923 containerd[1455]: 2024-11-12 20:49:42.780 [INFO][5101] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce" Nov 12 20:49:42.784923 containerd[1455]: time="2024-11-12T20:49:42.783363653Z" level=info msg="TearDown network for sandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\" successfully" Nov 12 20:49:42.792243 containerd[1455]: time="2024-11-12T20:49:42.792156441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:49:42.793706 containerd[1455]: time="2024-11-12T20:49:42.792290793Z" level=info msg="RemovePodSandbox \"739100e6882c08095dd3071d7107a010544d1bac4ae85de7e8d7eebc58f13fce\" returns successfully" Nov 12 20:49:42.793706 containerd[1455]: time="2024-11-12T20:49:42.793102728Z" level=info msg="StopPodSandbox for \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\"" Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.858 [WARNING][5125] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a", Pod:"coredns-76f75df574-92hnz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali299a36d1be6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.860 [INFO][5125] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.860 [INFO][5125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" iface="eth0" netns="" Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.860 [INFO][5125] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.860 [INFO][5125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.915 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" HandleID="k8s-pod-network.eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.915 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.915 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.928 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" HandleID="k8s-pod-network.eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.928 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" HandleID="k8s-pod-network.eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.931 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:42.937953 containerd[1455]: 2024-11-12 20:49:42.934 [INFO][5125] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:42.940146 containerd[1455]: time="2024-11-12T20:49:42.938814596Z" level=info msg="TearDown network for sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\" successfully" Nov 12 20:49:42.940146 containerd[1455]: time="2024-11-12T20:49:42.938938887Z" level=info msg="StopPodSandbox for \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\" returns successfully" Nov 12 20:49:42.942444 containerd[1455]: time="2024-11-12T20:49:42.941362380Z" level=info msg="RemovePodSandbox for \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\"" Nov 12 20:49:42.942444 containerd[1455]: time="2024-11-12T20:49:42.941540658Z" level=info msg="Forcibly stopping sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\"" Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.024 [WARNING][5149] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7fbc16cd-09b0-457a-a2a1-8f4ff6bb628c", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-d-ef96bd2a01", ContainerID:"59ab72a05214f241f854674b9adf29226a47d51d0dbac53808a539a47a9aa22a", Pod:"coredns-76f75df574-92hnz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali299a36d1be6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.025 [INFO][5149] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.025 [INFO][5149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" iface="eth0" netns="" Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.025 [INFO][5149] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.025 [INFO][5149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.061 [INFO][5155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" HandleID="k8s-pod-network.eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.061 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.061 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.074 [WARNING][5155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" HandleID="k8s-pod-network.eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.074 [INFO][5155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" HandleID="k8s-pod-network.eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Workload="ci--4081.2.0--d--ef96bd2a01-k8s-coredns--76f75df574--92hnz-eth0" Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.078 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:43.084463 containerd[1455]: 2024-11-12 20:49:43.080 [INFO][5149] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950" Nov 12 20:49:43.086737 containerd[1455]: time="2024-11-12T20:49:43.085076897Z" level=info msg="TearDown network for sandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\" successfully" Nov 12 20:49:43.094735 containerd[1455]: time="2024-11-12T20:49:43.094570400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:49:43.094735 containerd[1455]: time="2024-11-12T20:49:43.094710046Z" level=info msg="RemovePodSandbox \"eb83f4567b41dc7058ad06accf23fedfa5f69c486d6bce41e0aadba3decfa950\" returns successfully" Nov 12 20:49:45.401237 containerd[1455]: time="2024-11-12T20:49:45.401147357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:45.405039 containerd[1455]: time="2024-11-12T20:49:45.404169132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:49:45.406256 containerd[1455]: time="2024-11-12T20:49:45.406179118Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:45.414286 containerd[1455]: time="2024-11-12T20:49:45.413582307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:45.421872 containerd[1455]: time="2024-11-12T20:49:45.416167245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 3.124849446s" Nov 12 20:49:45.421872 containerd[1455]: time="2024-11-12T20:49:45.416260057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 
20:49:45.421872 containerd[1455]: time="2024-11-12T20:49:45.420669596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:49:45.426603 containerd[1455]: time="2024-11-12T20:49:45.426521388Z" level=info msg="CreateContainer within sandbox \"0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:49:45.460103 containerd[1455]: time="2024-11-12T20:49:45.460004994Z" level=info msg="CreateContainer within sandbox \"0f7200a4cb5b075c67e2cf177665db35838ed226ce7f35086c9c310dfb660ab7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d67793d0f12a967763d6caf43077724d8870bf1382169141a294f18ff35ce56d\"" Nov 12 20:49:45.466261 containerd[1455]: time="2024-11-12T20:49:45.464427431Z" level=info msg="StartContainer for \"d67793d0f12a967763d6caf43077724d8870bf1382169141a294f18ff35ce56d\"" Nov 12 20:49:45.536350 systemd[1]: run-containerd-runc-k8s.io-d67793d0f12a967763d6caf43077724d8870bf1382169141a294f18ff35ce56d-runc.CaD4G0.mount: Deactivated successfully. Nov 12 20:49:45.551485 systemd[1]: Started cri-containerd-d67793d0f12a967763d6caf43077724d8870bf1382169141a294f18ff35ce56d.scope - libcontainer container d67793d0f12a967763d6caf43077724d8870bf1382169141a294f18ff35ce56d. Nov 12 20:49:45.635775 containerd[1455]: time="2024-11-12T20:49:45.635681481Z" level=info msg="StartContainer for \"d67793d0f12a967763d6caf43077724d8870bf1382169141a294f18ff35ce56d\" returns successfully" Nov 12 20:49:45.844669 containerd[1455]: time="2024-11-12T20:49:45.844587576Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:45.848891 containerd[1455]: time="2024-11-12T20:49:45.846100298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:49:45.850654 containerd[1455]: time="2024-11-12T20:49:45.850575684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 429.842186ms" Nov 12 20:49:45.851129 containerd[1455]: time="2024-11-12T20:49:45.850934000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:49:45.855228 containerd[1455]: time="2024-11-12T20:49:45.855153973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:49:45.860759 containerd[1455]: time="2024-11-12T20:49:45.860468939Z" level=info msg="CreateContainer within sandbox \"9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:49:45.908774 containerd[1455]: time="2024-11-12T20:49:45.908533735Z" level=info msg="CreateContainer within sandbox \"9fcbe827f180cc122e1d433951f6a844c0df1dabed6e059a35645bb56def8307\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e9ec5609f6957faa977b426df2649aec18d98d683b90a9ee784d66feb2e9d83b\"" Nov 12 20:49:45.911900 containerd[1455]: time="2024-11-12T20:49:45.910140131Z" level=info msg="StartContainer for 
\"e9ec5609f6957faa977b426df2649aec18d98d683b90a9ee784d66feb2e9d83b\"" Nov 12 20:49:45.983444 systemd[1]: Started cri-containerd-e9ec5609f6957faa977b426df2649aec18d98d683b90a9ee784d66feb2e9d83b.scope - libcontainer container e9ec5609f6957faa977b426df2649aec18d98d683b90a9ee784d66feb2e9d83b. Nov 12 20:49:46.077981 containerd[1455]: time="2024-11-12T20:49:46.077890114Z" level=info msg="StartContainer for \"e9ec5609f6957faa977b426df2649aec18d98d683b90a9ee784d66feb2e9d83b\" returns successfully" Nov 12 20:49:46.298784 kubelet[2601]: I1112 20:49:46.298708 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b9cd6c9fd-zzw2n" podStartSLOduration=36.31068183 podStartE2EDuration="45.298588547s" podCreationTimestamp="2024-11-12 20:49:01 +0000 UTC" firstStartedPulling="2024-11-12 20:49:36.428893448 +0000 UTC m=+57.529882387" lastFinishedPulling="2024-11-12 20:49:45.416800133 +0000 UTC m=+66.517789104" observedRunningTime="2024-11-12 20:49:46.295360228 +0000 UTC m=+67.396349181" watchObservedRunningTime="2024-11-12 20:49:46.298588547 +0000 UTC m=+67.399577514" Nov 12 20:49:46.838064 systemd[1]: Started sshd@13-143.198.78.43:22-139.178.68.195:52400.service - OpenSSH per-connection server daemon (139.178.68.195:52400). Nov 12 20:49:46.978682 sshd[5259]: Accepted publickey for core from 139.178.68.195 port 52400 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:46.981163 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:47.003694 systemd-logind[1449]: New session 14 of user core. Nov 12 20:49:47.007194 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:49:47.351680 kubelet[2601]: I1112 20:49:47.348932 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:47.369585 kubelet[2601]: I1112 20:49:47.365699 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:47.429715 systemd[1]: run-containerd-runc-k8s.io-6e142e4ebc0fc538a3922cfd3fb3b873adf0a8530dd226371efd36354853c2d1-runc.xF00y1.mount: Deactivated successfully. Nov 12 20:49:47.713982 sshd[5259]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:47.724500 systemd[1]: sshd@13-143.198.78.43:22-139.178.68.195:52400.service: Deactivated successfully. Nov 12 20:49:47.731828 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:49:47.737424 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:49:47.741375 systemd-logind[1449]: Removed session 14. 
Nov 12 20:49:48.062909 containerd[1455]: time="2024-11-12T20:49:48.062645594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:48.064717 containerd[1455]: time="2024-11-12T20:49:48.064613468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:49:48.069659 containerd[1455]: time="2024-11-12T20:49:48.069544030Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:48.077774 containerd[1455]: time="2024-11-12T20:49:48.076438413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:48.077774 containerd[1455]: time="2024-11-12T20:49:48.077570242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 2.222346984s" Nov 12 20:49:48.077774 containerd[1455]: time="2024-11-12T20:49:48.077627131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:49:48.082158 containerd[1455]: time="2024-11-12T20:49:48.081760395Z" level=info msg="CreateContainer within sandbox \"ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:49:48.121700 containerd[1455]: time="2024-11-12T20:49:48.121613189Z" level=info msg="CreateContainer within sandbox \"ca3b1d15c4a7577594d1beece1436a59bcebc4db7f569e08f174e93f3bbeb4d2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d14646f22982c457eb4714ed663e33d282746a5c4eb5f4bb6ec03e30cc463f65\"" Nov 12 20:49:48.124467 containerd[1455]: time="2024-11-12T20:49:48.124028002Z" level=info msg="StartContainer for \"d14646f22982c457eb4714ed663e33d282746a5c4eb5f4bb6ec03e30cc463f65\"" Nov 12 20:49:48.219528 systemd[1]: Started cri-containerd-d14646f22982c457eb4714ed663e33d282746a5c4eb5f4bb6ec03e30cc463f65.scope - libcontainer container d14646f22982c457eb4714ed663e33d282746a5c4eb5f4bb6ec03e30cc463f65. 
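The PullImage / ImageCreate / "Pulled image … in 2.222346984s" records come from containerd's CRI image service. For orientation, roughly the same pull can be driven with containerd's Go client; the socket path and the k8s.io namespace (where the kubelet's images live) are the stock defaults, so treat this as a sketch rather than what the kubelet actually runs:

```go
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// default containerd socket; adjust if the runtime is configured differently
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s digest=%s size=%d", img.Name(), img.Target().Digest, size)
}
```

(The earlier 429.842186ms apiserver pull, with only 77 bytes read, shows the cached path: the blobs were already in the content store from the first replica's pull, so the second pull is little more than a manifest check.)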
Nov 12 20:49:48.310564 kubelet[2601]: I1112 20:49:48.310517 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:48.328212 containerd[1455]: time="2024-11-12T20:49:48.328016574Z" level=info msg="StartContainer for \"d14646f22982c457eb4714ed663e33d282746a5c4eb5f4bb6ec03e30cc463f65\" returns successfully" Nov 12 20:49:48.623151 kubelet[2601]: I1112 20:49:48.622982 2601 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:49:48.634664 kubelet[2601]: I1112 20:49:48.634587 2601 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:49:48.785563 kubelet[2601]: I1112 20:49:48.785495 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b9cd6c9fd-lpw7d" podStartSLOduration=39.10364843 podStartE2EDuration="47.785418501s" podCreationTimestamp="2024-11-12 20:49:01 +0000 UTC" firstStartedPulling="2024-11-12 20:49:37.170190653 +0000 UTC m=+58.271179587" lastFinishedPulling="2024-11-12 20:49:45.851960702 +0000 UTC m=+66.952949658" observedRunningTime="2024-11-12 20:49:46.3224559 +0000 UTC m=+67.423444865" watchObservedRunningTime="2024-11-12 20:49:48.785418501 +0000 UTC m=+69.886407475" Nov 12 20:49:52.734466 systemd[1]: Started sshd@14-143.198.78.43:22-139.178.68.195:52402.service - OpenSSH per-connection server daemon (139.178.68.195:52402). Nov 12 20:49:52.910813 sshd[5348]: Accepted publickey for core from 139.178.68.195 port 52402 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:52.917767 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:52.928078 systemd-logind[1449]: New session 15 of user core. Nov 12 20:49:52.934461 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:49:53.320744 sshd[5348]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:53.331509 systemd[1]: sshd@14-143.198.78.43:22-139.178.68.195:52402.service: Deactivated successfully. Nov 12 20:49:53.341494 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:49:53.344768 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:49:53.346807 systemd-logind[1449]: Removed session 15. Nov 12 20:49:54.274430 kubelet[2601]: E1112 20:49:54.273807 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:54.335102 kubelet[2601]: I1112 20:49:54.334808 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-8h2nz" podStartSLOduration=40.306897212 podStartE2EDuration="52.334738286s" podCreationTimestamp="2024-11-12 20:49:02 +0000 UTC" firstStartedPulling="2024-11-12 20:49:36.051232127 +0000 UTC m=+57.152221065" lastFinishedPulling="2024-11-12 20:49:48.079072745 +0000 UTC m=+69.180062139" observedRunningTime="2024-11-12 20:49:49.352016958 +0000 UTC m=+70.453005922" watchObservedRunningTime="2024-11-12 20:49:54.334738286 +0000 UTC m=+75.435727292" Nov 12 20:49:58.351640 systemd[1]: Started sshd@15-143.198.78.43:22-139.178.68.195:51682.service - OpenSSH per-connection server daemon (139.178.68.195:51682). 
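The csi_plugin.go:99/112 lines are the kubelet side of the node-driver-registrar container started just above: the registrar exposes a small registration gRPC service on a socket kubelet watches, kubelet calls GetInfo, validates the driver name and supported versions, then registers the csi.tigera.io endpoint. A minimal sketch of the driver-side service, assuming the standard k8s.io/kubelet pluginregistration/v1 API and the socket paths from the log (illustrative, not Tigera's code):

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registration struct{}

// GetInfo answers kubelet's probe; these values match the csi_plugin.go
// lines above ("name: csi.tigera.io ... versions: 1.0.0").
func (registration) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

// NotifyRegistrationStatus is kubelet's callback once validation succeeds.
func (registration) NotifyRegistrationStatus(ctx context.Context, s *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	log.Printf("kubelet registration result: registered=%v err=%q", s.PluginRegistered, s.Error)
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// registration sockets live in a directory kubelet watches; this path
	// is an assumption based on a stock kubelet layout
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registration{})
	log.Fatal(srv.Serve(l))
}
```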
Nov 12 20:49:58.526932 sshd[5385]: Accepted publickey for core from 139.178.68.195 port 51682 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:58.536788 sshd[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:58.554000 systemd-logind[1449]: New session 16 of user core. Nov 12 20:49:58.562194 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:49:59.212541 sshd[5385]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:59.231948 systemd[1]: sshd@15-143.198.78.43:22-139.178.68.195:51682.service: Deactivated successfully. Nov 12 20:49:59.243590 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:49:59.253307 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:49:59.262322 systemd[1]: Started sshd@16-143.198.78.43:22-139.178.68.195:51688.service - OpenSSH per-connection server daemon (139.178.68.195:51688). Nov 12 20:49:59.266071 systemd-logind[1449]: Removed session 16. Nov 12 20:49:59.409523 sshd[5400]: Accepted publickey for core from 139.178.68.195 port 51688 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:59.414078 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:59.435445 systemd-logind[1449]: New session 17 of user core. Nov 12 20:49:59.444500 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 20:50:00.130233 sshd[5400]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:00.202349 systemd[1]: sshd@16-143.198.78.43:22-139.178.68.195:51688.service: Deactivated successfully. Nov 12 20:50:00.208674 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 20:50:00.220949 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:50:00.264951 systemd[1]: Started sshd@17-143.198.78.43:22-139.178.68.195:51698.service - OpenSSH per-connection server daemon (139.178.68.195:51698). Nov 12 20:50:00.290921 systemd-logind[1449]: Removed session 17. Nov 12 20:50:00.416526 sshd[5420]: Accepted publickey for core from 139.178.68.195 port 51698 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:50:00.419993 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:00.458544 systemd-logind[1449]: New session 18 of user core. Nov 12 20:50:00.479485 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:50:01.213300 kubelet[2601]: E1112 20:50:01.213234 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:50:01.234436 kubelet[2601]: E1112 20:50:01.234369 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:50:05.206071 kubelet[2601]: E1112 20:50:05.206003 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:50:05.883836 sshd[5420]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:05.899177 systemd[1]: sshd@17-143.198.78.43:22-139.178.68.195:51698.service: Deactivated successfully. Nov 12 20:50:05.906262 systemd[1]: session-18.scope: Deactivated successfully. 
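The recurring dns.go:153 "Nameserver limits exceeded" errors are benign but worth decoding: resolv.conf supports at most three nameservers, so when a pod's effective list (node resolv.conf plus cluster DNS, duplicates included) is longer, kubelet truncates it and logs the applied line, here "67.207.67.2 67.207.67.3 67.207.67.2". A stdlib-only sketch of that cap with an assumed input list; kubelet's real logic lives in its pkg/kubelet/network/dns package, so names here are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the resolv.conf limit kubelet enforces

// appliedNameservers extracts nameserver lines from resolv.conf-style
// input and truncates the list to the limit, as kubelet does (logging a
// "Nameserver limits exceeded" error when entries are dropped).
func appliedNameservers(resolvConf string) []string {
	var ns []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		ns = ns[:maxNameservers] // surplus entries are omitted
	}
	return ns
}

func main() {
	// hypothetical merged list with four entries, including a repeat
	conf := "nameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 8.8.8.8\n"
	fmt.Println(strings.Join(appliedNameservers(conf), " "))
	// Output: 67.207.67.2 67.207.67.3 67.207.67.2
}
```

Note the duplicate survives the cut, which is why the applied line in the log repeats 67.207.67.2: truncation happens on the raw list, not on a de-duplicated one.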
Nov 12 20:50:05.912045 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:50:05.916767 systemd-logind[1449]: Removed session 18. Nov 12 20:50:05.929347 systemd[1]: Started sshd@18-143.198.78.43:22-139.178.68.195:48094.service - OpenSSH per-connection server daemon (139.178.68.195:48094). Nov 12 20:50:06.163406 sshd[5439]: Accepted publickey for core from 139.178.68.195 port 48094 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:50:06.170284 sshd[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:06.190962 systemd-logind[1449]: New session 19 of user core. Nov 12 20:50:06.198338 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 20:50:07.701591 sshd[5439]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:07.737932 systemd[1]: Started sshd@19-143.198.78.43:22-139.178.68.195:48106.service - OpenSSH per-connection server daemon (139.178.68.195:48106). Nov 12 20:50:07.740772 systemd[1]: sshd@18-143.198.78.43:22-139.178.68.195:48094.service: Deactivated successfully. Nov 12 20:50:07.753935 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 20:50:07.771001 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Nov 12 20:50:07.781673 systemd-logind[1449]: Removed session 19. Nov 12 20:50:07.866956 sshd[5451]: Accepted publickey for core from 139.178.68.195 port 48106 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:50:07.868832 sshd[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:07.888086 systemd-logind[1449]: New session 20 of user core. Nov 12 20:50:07.896215 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 20:50:08.390933 sshd[5451]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:08.400811 systemd[1]: sshd@19-143.198.78.43:22-139.178.68.195:48106.service: Deactivated successfully. Nov 12 20:50:08.414924 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 20:50:08.418217 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Nov 12 20:50:08.423963 systemd-logind[1449]: Removed session 20. Nov 12 20:50:13.410758 systemd[1]: Started sshd@20-143.198.78.43:22-139.178.68.195:48122.service - OpenSSH per-connection server daemon (139.178.68.195:48122). Nov 12 20:50:13.511194 kubelet[2601]: I1112 20:50:13.510950 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:50:13.518666 sshd[5469]: Accepted publickey for core from 139.178.68.195 port 48122 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:50:13.524721 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:13.547662 systemd-logind[1449]: New session 21 of user core. Nov 12 20:50:13.554974 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 20:50:13.808950 sshd[5469]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:13.817439 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Nov 12 20:50:13.817603 systemd[1]: sshd@20-143.198.78.43:22-139.178.68.195:48122.service: Deactivated successfully. Nov 12 20:50:13.825425 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 20:50:13.833735 systemd-logind[1449]: Removed session 21. 
Nov 12 20:50:18.200901 kubelet[2601]: E1112 20:50:18.199948 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:50:18.847048 systemd[1]: Started sshd@21-143.198.78.43:22-139.178.68.195:55992.service - OpenSSH per-connection server daemon (139.178.68.195:55992). Nov 12 20:50:18.931299 sshd[5503]: Accepted publickey for core from 139.178.68.195 port 55992 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:50:18.933273 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:18.943765 systemd-logind[1449]: New session 22 of user core. Nov 12 20:50:18.948345 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 20:50:19.219335 sshd[5503]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:19.225663 systemd[1]: sshd@21-143.198.78.43:22-139.178.68.195:55992.service: Deactivated successfully. Nov 12 20:50:19.231392 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 20:50:19.236759 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Nov 12 20:50:19.239228 systemd-logind[1449]: Removed session 22. Nov 12 20:50:24.248226 systemd[1]: Started sshd@22-143.198.78.43:22-139.178.68.195:56008.service - OpenSSH per-connection server daemon (139.178.68.195:56008). Nov 12 20:50:24.361534 sshd[5565]: Accepted publickey for core from 139.178.68.195 port 56008 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:50:24.364446 sshd[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:24.386575 systemd-logind[1449]: New session 23 of user core. Nov 12 20:50:24.396179 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 20:50:24.681204 sshd[5565]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:24.688077 systemd[1]: sshd@22-143.198.78.43:22-139.178.68.195:56008.service: Deactivated successfully. Nov 12 20:50:24.692973 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 20:50:24.699775 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Nov 12 20:50:24.712615 systemd-logind[1449]: Removed session 23.