Jan 14 13:04:56.106154 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 14 13:04:56.106190 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:04:56.106204 kernel: BIOS-provided physical RAM map:
Jan 14 13:04:56.106214 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:04:56.106224 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 13:04:56.106234 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 13:04:56.106255 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 14 13:04:56.106266 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 13:04:56.106279 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 13:04:56.106290 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 13:04:56.106300 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 13:04:56.106310 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 13:04:56.106321 kernel: NX (Execute Disable) protection: active
Jan 14 13:04:56.106332 kernel: APIC: Static calls initialized
Jan 14 13:04:56.106347 kernel: efi: EFI v2.7 by Microsoft
Jan 14 13:04:56.106360 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018
Jan 14 13:04:56.106372 kernel: random: crng init done
Jan 14 13:04:56.106383 kernel: secureboot: Secure boot disabled
Jan 14 13:04:56.106394 kernel: SMBIOS 3.1.0 present.
Jan 14 13:04:56.106406 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 13:04:56.106418 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 13:04:56.106430 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 13:04:56.106442 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 13:04:56.106453 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 13:04:56.106467 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 13:04:56.106479 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 13:04:56.106491 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:04:56.106503 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:04:56.106515 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 13:04:56.106527 kernel: tsc: Detected 2593.904 MHz processor
Jan 14 13:04:56.106539 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:04:56.106551 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:04:56.106563 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 13:04:56.106577 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:04:56.106589 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 13:04:56.106601 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 13:04:56.106613 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 13:04:56.106624 kernel: Using GB pages for direct mapping
Jan 14 13:04:56.106636 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:04:56.106648 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 13:04:56.106665 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:04:56.106680 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:04:56.106693 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 14 13:04:56.106705 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 13:04:56.106718 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:04:56.106731 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:04:56.106743 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:04:56.106758 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:04:56.106771 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:04:56.106784 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:04:56.106796 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:04:56.106809 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 13:04:56.106821 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 13:04:56.106833 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 13:04:56.106845 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 13:04:56.106858 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 13:04:56.106874 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 13:04:56.106888 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 13:04:56.106901 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 13:04:56.106915 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 13:04:56.106927 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 13:04:56.106938 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 13:04:56.106950 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 13:04:56.106961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 13:04:56.106973 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 13:04:56.106987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 13:04:56.106998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 13:04:56.107010 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 13:04:56.107022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 13:04:56.107033 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 13:04:56.107045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 13:04:56.107057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 13:04:56.107069 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 13:04:56.107084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 13:04:56.107096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 13:04:56.107108 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 13:04:56.107120 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 13:04:56.107133 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 13:04:56.107145 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 13:04:56.107158 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 13:04:56.107171 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 13:04:56.107183 kernel: Zone ranges:
Jan 14 13:04:56.107198 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:04:56.107211 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 13:04:56.107223 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:04:56.107236 kernel: Movable zone start for each node
Jan 14 13:04:56.107282 kernel: Early memory node ranges
Jan 14 13:04:56.107295 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:04:56.107308 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 13:04:56.107321 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 13:04:56.107333 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:04:56.107349 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 13:04:56.107361 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:04:56.107374 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:04:56.107387 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 13:04:56.107399 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 13:04:56.107412 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 13:04:56.107425 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:04:56.107439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:04:56.107452 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:04:56.107467 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 13:04:56.107480 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 13:04:56.107494 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 13:04:56.107507 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 13:04:56.107521 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:04:56.107534 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 13:04:56.107547 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 13:04:56.107560 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 13:04:56.107572 kernel: pcpu-alloc: [0] 0 1
Jan 14 13:04:56.107586 kernel: Hyper-V: PV spinlocks enabled
Jan 14 13:04:56.107599 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:04:56.107614 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:04:56.107627 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:04:56.107639 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 13:04:56.107651 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:04:56.107663 kernel: Fallback order for Node 0: 0
Jan 14 13:04:56.107675 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 14 13:04:56.107691 kernel: Policy zone: Normal
Jan 14 13:04:56.107712 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:04:56.107725 kernel: software IO TLB: area num 2.
Jan 14 13:04:56.107741 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 312164K reserved, 0K cma-reserved)
Jan 14 13:04:56.107754 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:04:56.107767 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 14 13:04:56.107780 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 13:04:56.107792 kernel: Dynamic Preempt: voluntary
Jan 14 13:04:56.107805 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:04:56.107819 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:04:56.107832 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:04:56.107847 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:04:56.107860 kernel: Rude variant of Tasks RCU enabled.
Jan 14 13:04:56.107873 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:04:56.107886 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:04:56.107899 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:04:56.107912 kernel: Using NULL legacy PIC
Jan 14 13:04:56.107927 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 13:04:56.107940 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:04:56.107953 kernel: Console: colour dummy device 80x25
Jan 14 13:04:56.107965 kernel: printk: console [tty1] enabled
Jan 14 13:04:56.107978 kernel: printk: console [ttyS0] enabled
Jan 14 13:04:56.107991 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 13:04:56.108004 kernel: ACPI: Core revision 20230628
Jan 14 13:04:56.108017 kernel: Failed to register legacy timer interrupt
Jan 14 13:04:56.108030 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:04:56.108045 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:04:56.108058 kernel: Hyper-V: Using IPI hypercalls
Jan 14 13:04:56.108071 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 13:04:56.108084 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 13:04:56.108097 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 13:04:56.108110 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 13:04:56.108123 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 13:04:56.108136 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 13:04:56.108150 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Jan 14 13:04:56.108165 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 13:04:56.108178 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 13:04:56.108192 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:04:56.108204 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:04:56.108217 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 13:04:56.108229 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 13:04:56.108250 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 13:04:56.108264 kernel: RETBleed: Vulnerable
Jan 14 13:04:56.108276 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:04:56.108289 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:04:56.108305 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:04:56.108318 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 13:04:56.108330 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:04:56.108343 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:04:56.108356 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:04:56.108369 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 13:04:56.108381 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 13:04:56.108394 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 13:04:56.108407 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 13:04:56.108420 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 14 13:04:56.108432 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 14 13:04:56.108448 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 13:04:56.108461 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 13:04:56.108473 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:04:56.108486 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:04:56.108499 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:04:56.108511 kernel: landlock: Up and running.
Jan 14 13:04:56.108524 kernel: SELinux: Initializing.
Jan 14 13:04:56.108537 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:04:56.108550 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:04:56.108563 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 13:04:56.108576 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:04:56.108592 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:04:56.108605 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:04:56.108618 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 13:04:56.108631 kernel: signal: max sigframe size: 3632
Jan 14 13:04:56.108644 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:04:56.108657 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:04:56.108670 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:04:56.108683 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:04:56.108696 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:04:56.108711 kernel: .... node #0, CPUs: #1
Jan 14 13:04:56.108724 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 13:04:56.108738 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 13:04:56.108751 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:04:56.108764 kernel: smpboot: Max logical packages: 1
Jan 14 13:04:56.108777 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Jan 14 13:04:56.108789 kernel: devtmpfs: initialized
Jan 14 13:04:56.108802 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:04:56.108815 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:04:56.108831 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:04:56.108844 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:04:56.108858 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:04:56.108870 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:04:56.108884 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:04:56.108898 kernel: audit: type=2000 audit(1736859894.028:1): state=initialized audit_enabled=0 res=1
Jan 14 13:04:56.108911 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:04:56.108931 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:04:56.108956 kernel: cpuidle: using governor menu
Jan 14 13:04:56.108983 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:04:56.109005 kernel: dca service started, version 1.12.1
Jan 14 13:04:56.109023 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:04:56.109037 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:04:56.109051 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:04:56.109066 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:04:56.109080 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:04:56.109094 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:04:56.109111 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:04:56.109125 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:04:56.109139 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:04:56.109153 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:04:56.109168 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:04:56.109182 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:04:56.109196 kernel: ACPI: Interpreter enabled
Jan 14 13:04:56.109210 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:04:56.109224 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:04:56.109241 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:04:56.109785 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:04:56.109801 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:04:56.109816 kernel: iommu: Default domain type: Translated
Jan 14 13:04:56.109830 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:04:56.109844 kernel: efivars: Registered efivars operations
Jan 14 13:04:56.109859 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:04:56.109873 kernel: PCI: System does not support PCI
Jan 14 13:04:56.109887 kernel: vgaarb: loaded
Jan 14 13:04:56.109902 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:04:56.109920 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:04:56.109934 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:04:56.109948 kernel: pnp: PnP ACPI init
Jan 14 13:04:56.109962 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:04:56.109977 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:04:56.109991 kernel: NET: Registered PF_INET protocol family
Jan 14 13:04:56.110005 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:04:56.110020 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:04:56.110037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:04:56.110051 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:04:56.110066 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:04:56.110080 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:04:56.110094 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:04:56.110109 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:04:56.110123 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:04:56.110137 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:04:56.110151 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:04:56.110168 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:04:56.110182 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 13:04:56.110196 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:04:56.110211 kernel: Initialise system trusted keyrings
Jan 14 13:04:56.110224 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:04:56.110239 kernel: Key type asymmetric registered
Jan 14 13:04:56.110263 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:04:56.110277 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:04:56.110291 kernel: io scheduler mq-deadline registered
Jan 14 13:04:56.110308 kernel: io scheduler kyber registered
Jan 14 13:04:56.110323 kernel: io scheduler bfq registered
Jan 14 13:04:56.110337 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:04:56.110351 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:04:56.110365 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:04:56.110379 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:04:56.110394 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:04:56.110565 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:04:56.110687 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:04:55 UTC (1736859895)
Jan 14 13:04:56.110829 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:04:56.110847 kernel: intel_pstate: CPU model not supported
Jan 14 13:04:56.110862 kernel: efifb: probing for efifb
Jan 14 13:04:56.110873 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:04:56.110886 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:04:56.110900 kernel: efifb: scrolling: redraw
Jan 14 13:04:56.110914 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:04:56.110927 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:04:56.110942 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:04:56.110954 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:04:56.110968 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:04:56.110980 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:04:56.110991 kernel: Segment Routing with IPv6
Jan 14 13:04:56.111010 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:04:56.111028 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:04:56.111040 kernel: Key type dns_resolver registered
Jan 14 13:04:56.111052 kernel: IPI shorthand broadcast: enabled
Jan 14 13:04:56.111068 kernel: sched_clock: Marking stable (911003000, 50301400)->(1206683400, -245379000)
Jan 14 13:04:56.111081 kernel: registered taskstats version 1
Jan 14 13:04:56.111095 kernel: Loading compiled-in X.509 certificates
Jan 14 13:04:56.111107 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 14 13:04:56.111119 kernel: Key type .fscrypt registered
Jan 14 13:04:56.111132 kernel: Key type fscrypt-provisioning registered
Jan 14 13:04:56.111145 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:04:56.111159 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:04:56.111173 kernel: ima: No architecture policies found
Jan 14 13:04:56.111189 kernel: clk: Disabling unused clocks
Jan 14 13:04:56.111202 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 14 13:04:56.111217 kernel: Write protecting the kernel read-only data: 38912k
Jan 14 13:04:56.111231 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 14 13:04:56.111264 kernel: Run /init as init process
Jan 14 13:04:56.111278 kernel: with arguments:
Jan 14 13:04:56.111292 kernel: /init
Jan 14 13:04:56.111306 kernel: with environment:
Jan 14 13:04:56.111319 kernel: HOME=/
Jan 14 13:04:56.111336 kernel: TERM=linux
Jan 14 13:04:56.111350 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:04:56.111368 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:04:56.111385 systemd[1]: Detected virtualization microsoft.
Jan 14 13:04:56.111400 systemd[1]: Detected architecture x86-64.
Jan 14 13:04:56.111415 systemd[1]: Running in initrd.
Jan 14 13:04:56.111429 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:04:56.111448 systemd[1]: Hostname set to <localhost>.
Jan 14 13:04:56.111466 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:04:56.111481 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:04:56.111496 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:04:56.111512 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:04:56.111528 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:04:56.111543 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:04:56.111558 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:04:56.111575 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:04:56.111591 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:04:56.111606 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:04:56.111621 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:04:56.111636 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:04:56.111650 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:04:56.111665 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:04:56.111682 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:04:56.111697 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:04:56.111711 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:04:56.111726 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:04:56.111741 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:04:56.111755 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:04:56.111767 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:04:56.111782 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:04:56.111796 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:04:56.111813 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:04:56.111828 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:04:56.111842 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:04:56.111857 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:04:56.111871 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:04:56.111885 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:04:56.111900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:04:56.111939 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:04:56.111975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:56.111989 systemd-journald[177]: Journal started
Jan 14 13:04:56.112026 systemd-journald[177]: Runtime Journal (/run/log/journal/9e9879eee8cc48e1a4c9d10c63dcb8c7) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:04:56.111049 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:04:56.122275 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:04:56.126701 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:04:56.133125 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:04:56.140540 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:04:56.145242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:56.159264 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:04:56.167541 kernel: Bridge firewalling registered
Jan 14 13:04:56.164547 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:04:56.167553 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:04:56.170373 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:04:56.176801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:04:56.177984 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:04:56.194655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:04:56.198135 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:04:56.215932 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:04:56.220941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:04:56.231498 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:04:56.239741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:04:56.244661 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:04:56.258436 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:04:56.273554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:04:56.283190 dracut-cmdline[215]: dracut-dracut-053 Jan 14 13:04:56.288124 systemd-resolved[210]: Positive Trust Anchors: Jan 14 13:04:56.293338 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 14 13:04:56.288142 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:04:56.288207 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:04:56.296556 systemd-resolved[210]: Defaulting to hostname 'linux'. Jan 14 13:04:56.297555 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:04:56.312683 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:04:56.369271 kernel: SCSI subsystem initialized Jan 14 13:04:56.380275 kernel: Loading iSCSI transport class v2.0-870. 
Jan 14 13:04:56.391272 kernel: iscsi: registered transport (tcp) Jan 14 13:04:56.413384 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:04:56.413474 kernel: QLogic iSCSI HBA Driver Jan 14 13:04:56.449229 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:04:56.459451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:04:56.488089 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:04:56.488178 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:04:56.491637 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 13:04:56.533276 kernel: raid6: avx512x4 gen() 18522 MB/s Jan 14 13:04:56.552263 kernel: raid6: avx512x2 gen() 18581 MB/s Jan 14 13:04:56.571260 kernel: raid6: avx512x1 gen() 18600 MB/s Jan 14 13:04:56.591263 kernel: raid6: avx2x4 gen() 18465 MB/s Jan 14 13:04:56.610259 kernel: raid6: avx2x2 gen() 18502 MB/s Jan 14 13:04:56.630868 kernel: raid6: avx2x1 gen() 13870 MB/s Jan 14 13:04:56.630931 kernel: raid6: using algorithm avx512x1 gen() 18600 MB/s Jan 14 13:04:56.653090 kernel: raid6: .... xor() 26625 MB/s, rmw enabled Jan 14 13:04:56.653152 kernel: raid6: using avx512x2 recovery algorithm Jan 14 13:04:56.677273 kernel: xor: automatically using best checksumming function avx Jan 14 13:04:56.818275 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:04:56.828155 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:04:56.837551 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:04:56.851776 systemd-udevd[399]: Using default interface naming scheme 'v255'. Jan 14 13:04:56.856214 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:04:56.871424 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 14 13:04:56.885304 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jan 14 13:04:56.914528 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:04:56.929410 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:04:56.969069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:04:56.985468 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:04:57.006704 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:04:57.014821 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:04:57.025019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:04:57.032451 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:04:57.045485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:04:57.064383 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:04:57.080959 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:04:57.089667 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 13:04:57.089693 kernel: AES CTR mode by8 optimization enabled Jan 14 13:04:57.098273 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 13:04:57.108372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:04:57.108603 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:04:57.112209 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:04:57.126052 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:04:57.129186 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:04:57.136112 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 14 13:04:57.154371 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:04:57.154430 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Jan 14 13:04:57.154450 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:04:57.154467 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:04:57.152021 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:04:57.783445 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 14 13:04:57.783475 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:04:57.783635 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:04:57.783648 kernel: scsi host1: storvsc_host_t Jan 14 13:04:57.783779 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:04:57.783794 kernel: scsi host0: storvsc_host_t Jan 14 13:04:57.783911 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 14 13:04:57.783927 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:04:57.784044 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:04:57.784160 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:04:57.784174 kernel: PTP clock support registered Jan 14 13:04:57.784185 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:04:57.784198 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:04:57.784214 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:04:57.784229 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:04:57.784242 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:04:57.739293 systemd-resolved[210]: Clock change detected. Flushing caches. 
Jan 14 13:04:57.793030 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Jan 14 13:04:57.799436 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:04:57.799460 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:04:57.803063 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:04:57.803169 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:04:57.819935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:04:57.833764 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:04:57.848711 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:04:57.848862 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 14 13:04:57.848978 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:04:57.849094 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:04:57.849222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:04:57.849237 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 14 13:04:57.848577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:04:57.866296 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:04:57.892950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 14 13:04:57.910789 kernel: hv_netvsc 000d3ab8-7a4b-000d-3ab8-7a4b000d3ab8 eth0: VF slot 1 added Jan 14 13:04:57.917717 kernel: hv_vmbus: registering driver hv_pci Jan 14 13:04:57.923706 kernel: hv_pci 51bddd3c-eb2a-46e7-bb62-c89a827f5def: PCI VMBus probing: Using version 0x10004 Jan 14 13:04:57.972977 kernel: hv_pci 51bddd3c-eb2a-46e7-bb62-c89a827f5def: PCI host bridge to bus eb2a:00 Jan 14 13:04:57.973153 kernel: pci_bus eb2a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 13:04:57.973339 kernel: pci_bus eb2a:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:04:57.973497 kernel: pci eb2a:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 13:04:57.973678 kernel: pci eb2a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:04:57.973886 kernel: pci eb2a:00:02.0: enabling Extended Tags Jan 14 13:04:57.974059 kernel: pci eb2a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at eb2a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 13:04:57.974217 kernel: pci_bus eb2a:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:04:57.974366 kernel: pci eb2a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:04:58.135399 kernel: mlx5_core eb2a:00:02.0: enabling device (0000 -> 0002) Jan 14 13:04:58.365027 kernel: mlx5_core eb2a:00:02.0: firmware version: 14.30.5000 Jan 14 13:04:58.365241 kernel: hv_netvsc 000d3ab8-7a4b-000d-3ab8-7a4b000d3ab8 eth0: VF registering: eth1 Jan 14 13:04:58.365405 kernel: mlx5_core eb2a:00:02.0 eth1: joined to eth0 Jan 14 13:04:58.365586 kernel: mlx5_core eb2a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 13:04:58.337097 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Jan 14 13:04:58.373773 kernel: mlx5_core eb2a:00:02.0 enP60202s1: renamed from eth1 Jan 14 13:04:58.441717 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (444) Jan 14 13:04:58.459806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:04:58.471712 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (448) Jan 14 13:04:58.482175 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 13:04:58.506631 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:04:58.513762 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 13:04:58.527920 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:04:58.544199 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:04:58.550714 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:04:59.558661 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:04:59.559384 disk-uuid[603]: The operation has completed successfully. Jan 14 13:04:59.649334 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:04:59.649459 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:04:59.663865 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 14 13:04:59.672601 sh[689]: Success Jan 14 13:04:59.702724 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 13:04:59.935345 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:04:59.949858 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 14 13:04:59.955437 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 14 13:04:59.973631 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 14 13:04:59.973722 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:04:59.977308 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:04:59.980497 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:04:59.983272 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:05:00.324599 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:05:00.328152 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:05:00.335956 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:05:00.341842 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:05:00.365240 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 14 13:05:00.365307 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:05:00.367856 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:05:00.414716 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:05:00.424263 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 14 13:05:00.431803 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 14 13:05:00.432611 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:05:00.443896 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:05:00.449292 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:05:00.468018 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 14 13:05:00.470448 systemd-networkd[871]: lo: Link UP Jan 14 13:05:00.470453 systemd-networkd[871]: lo: Gained carrier Jan 14 13:05:00.472968 systemd-networkd[871]: Enumeration completed Jan 14 13:05:00.473339 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:05:00.476109 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:05:00.476112 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:05:00.477426 systemd[1]: Reached target network.target - Network. Jan 14 13:05:00.550720 kernel: mlx5_core eb2a:00:02.0 enP60202s1: Link up Jan 14 13:05:00.581485 kernel: hv_netvsc 000d3ab8-7a4b-000d-3ab8-7a4b000d3ab8 eth0: Data path switched to VF: enP60202s1 Jan 14 13:05:00.581029 systemd-networkd[871]: enP60202s1: Link UP Jan 14 13:05:00.581178 systemd-networkd[871]: eth0: Link UP Jan 14 13:05:00.581378 systemd-networkd[871]: eth0: Gained carrier Jan 14 13:05:00.581394 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 14 13:05:00.593760 systemd-networkd[871]: enP60202s1: Gained carrier Jan 14 13:05:00.622756 systemd-networkd[871]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 14 13:05:01.537393 ignition[873]: Ignition 2.20.0 Jan 14 13:05:01.537406 ignition[873]: Stage: fetch-offline Jan 14 13:05:01.537451 ignition[873]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:01.537461 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:01.537568 ignition[873]: parsed url from cmdline: "" Jan 14 13:05:01.537572 ignition[873]: no config URL provided Jan 14 13:05:01.537579 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:05:01.537589 ignition[873]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:05:01.537596 ignition[873]: failed to fetch config: resource requires networking Jan 14 13:05:01.542372 ignition[873]: Ignition finished successfully Jan 14 13:05:01.559435 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:05:01.570893 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:05:01.584385 ignition[883]: Ignition 2.20.0 Jan 14 13:05:01.584396 ignition[883]: Stage: fetch Jan 14 13:05:01.584605 ignition[883]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:01.584618 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:01.584749 ignition[883]: parsed url from cmdline: "" Jan 14 13:05:01.584753 ignition[883]: no config URL provided Jan 14 13:05:01.584758 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:05:01.584766 ignition[883]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:05:01.584793 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:05:01.679232 ignition[883]: GET result: OK Jan 14 13:05:01.679376 ignition[883]: config has been read from IMDS userdata Jan 14 13:05:01.680940 ignition[883]: parsing config with SHA512: 43a201e4375d15f8d2efdd3db9f1472f1d4bb8e99a5ba5f970a41906d32be78e4f58ccf2344587088a35d51d76eebf9b72ebf4be85564fe6006ace81bb8a9a2d Jan 14 13:05:01.686060 unknown[883]: fetched base config from "system" Jan 14 13:05:01.686073 unknown[883]: fetched base config from "system" Jan 14 13:05:01.686500 ignition[883]: fetch: fetch complete Jan 14 13:05:01.686080 unknown[883]: fetched user config from "azure" Jan 14 13:05:01.686506 ignition[883]: fetch: fetch passed Jan 14 13:05:01.688186 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:05:01.686551 ignition[883]: Ignition finished successfully Jan 14 13:05:01.702870 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 13:05:01.718190 ignition[889]: Ignition 2.20.0 Jan 14 13:05:01.718203 ignition[889]: Stage: kargs Jan 14 13:05:01.720436 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 14 13:05:01.718420 ignition[889]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:01.718434 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:01.719319 ignition[889]: kargs: kargs passed Jan 14 13:05:01.719370 ignition[889]: Ignition finished successfully Jan 14 13:05:01.735249 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:05:01.758767 ignition[895]: Ignition 2.20.0 Jan 14 13:05:01.758780 ignition[895]: Stage: disks Jan 14 13:05:01.761032 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:05:01.759027 ignition[895]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:01.765774 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:05:01.759042 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:01.771660 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:05:01.760012 ignition[895]: disks: disks passed Jan 14 13:05:01.777467 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:05:01.760059 ignition[895]: Ignition finished successfully Jan 14 13:05:01.778047 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:05:01.788379 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:05:01.806255 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:05:01.879845 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:05:01.885510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:05:01.896841 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 13:05:01.992718 kernel: EXT4-fs (sda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none. Jan 14 13:05:01.993444 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jan 14 13:05:01.996765 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:05:02.039831 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:05:02.045439 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:05:02.058730 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (914) Jan 14 13:05:02.058785 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 14 13:05:02.062835 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 13:05:02.077085 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:05:02.077114 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:05:02.077128 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:05:02.070405 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:05:02.070442 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:05:02.092396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:05:02.095026 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 13:05:02.105866 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 14 13:05:02.462883 systemd-networkd[871]: eth0: Gained IPv6LL Jan 14 13:05:02.590945 systemd-networkd[871]: enP60202s1: Gained IPv6LL Jan 14 13:05:02.778314 coreos-metadata[916]: Jan 14 13:05:02.778 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:05:02.785018 coreos-metadata[916]: Jan 14 13:05:02.784 INFO Fetch successful Jan 14 13:05:02.787902 coreos-metadata[916]: Jan 14 13:05:02.785 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:05:02.800741 coreos-metadata[916]: Jan 14 13:05:02.800 INFO Fetch successful Jan 14 13:05:02.814431 coreos-metadata[916]: Jan 14 13:05:02.814 INFO wrote hostname ci-4186.1.0-a-f264a924af to /sysroot/etc/hostname Jan 14 13:05:02.819806 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 13:05:02.822041 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:05:02.933143 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Jan 14 13:05:02.938573 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 13:05:02.958043 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 13:05:03.710399 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:05:03.721813 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:05:03.729879 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:05:03.736641 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 13:05:03.742819 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 14 13:05:03.765026 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 14 13:05:03.773861 ignition[1033]: INFO : Ignition 2.20.0 Jan 14 13:05:03.776177 ignition[1033]: INFO : Stage: mount Jan 14 13:05:03.776177 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:03.776177 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:03.786720 ignition[1033]: INFO : mount: mount passed Jan 14 13:05:03.786720 ignition[1033]: INFO : Ignition finished successfully Jan 14 13:05:03.777475 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:05:03.794122 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:05:03.802838 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:05:03.833383 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045) Jan 14 13:05:03.833456 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 14 13:05:03.838310 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:05:03.840911 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:05:03.887960 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:05:03.889851 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 13:05:03.912902 ignition[1062]: INFO : Ignition 2.20.0 Jan 14 13:05:03.915335 ignition[1062]: INFO : Stage: files Jan 14 13:05:03.915335 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:03.915335 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:03.923499 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:05:03.930479 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:05:03.934497 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:05:04.024540 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:05:04.029961 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:05:04.034723 unknown[1062]: wrote ssh authorized keys file for user: core Jan 14 13:05:04.037645 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:05:04.077978 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 13:05:04.084183 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 14 13:05:04.123593 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 13:05:04.272236 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 13:05:04.278240 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:05:04.278240 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 14 13:05:04.287846 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 14 13:05:04.801536 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 13:05:05.301320 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:05:05.301320 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 13:05:05.351485 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:05:05.360144 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:05:05.360144 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 13:05:05.360144 ignition[1062]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 13:05:05.360144 ignition[1062]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 13:05:05.360144 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:05:05.360144 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:05:05.360144 ignition[1062]: INFO : files: files passed Jan 14 13:05:05.360144 ignition[1062]: INFO : Ignition finished successfully Jan 14 13:05:05.353831 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:05:05.378631 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:05:05.397919 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 14 13:05:05.410561 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 13:05:05.410721 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 13:05:05.438676 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:05:05.438676 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:05:05.455038 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:05:05.442827 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:05:05.447717 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:05:05.466533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:05:05.503515 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:05:05.503657 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:05:05.510262 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:05:05.516515 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 13:05:05.527065 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:05:05.532994 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:05:05.547052 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:05:05.557894 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:05:05.571249 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:05:05.572422 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 14 13:05:05.572875 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:05:05.573329 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 13:05:05.573450 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:05:05.574248 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:05:05.574826 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:05:05.575214 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:05:05.575680 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:05:05.576141 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:05:05.576593 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:05:05.577070 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:05:05.577660 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:05:05.578220 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:05:05.578671 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:05:05.579547 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 13:05:05.579685 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:05:05.580605 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:05:05.581108 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:05:05.581527 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 14 13:05:05.703488 ignition[1114]: INFO : Ignition 2.20.0 Jan 14 13:05:05.703488 ignition[1114]: INFO : Stage: umount Jan 14 13:05:05.703488 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:05.703488 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:05.703488 ignition[1114]: INFO : umount: umount passed Jan 14 13:05:05.703488 ignition[1114]: INFO : Ignition finished successfully Jan 14 13:05:05.583475 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:05:05.628755 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:05:05.634753 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:05:05.656710 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:05:05.665964 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:05:05.667178 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:05:05.667291 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:05:05.667565 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 13:05:05.667651 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:05:05.685655 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:05:05.722585 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:05:05.755436 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:05:05.755663 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:05:05.759152 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 13:05:05.759301 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:05:05.775882 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 14 13:05:05.780677 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:05:05.787666 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:05:05.789086 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:05:05.789351 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:05:05.798281 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:05:05.798353 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:05:05.803866 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:05:05.806336 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:05:05.816569 systemd[1]: Stopped target network.target - Network. Jan 14 13:05:05.819055 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:05:05.819123 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:05:05.827740 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:05:05.837257 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 13:05:05.843764 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:05:05.851190 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:05:05.853868 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:05:05.861435 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:05:05.861500 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:05:05.868724 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:05:05.868787 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:05:05.876534 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:05:05.879185 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 14 13:05:05.884482 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:05:05.884566 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:05:05.892997 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:05:05.893303 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:05:05.902141 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:05:05.902238 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:05:05.913877 systemd-networkd[871]: eth0: DHCPv6 lease lost Jan 14 13:05:05.915054 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:05:05.915171 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:05:05.922265 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:05:05.922416 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:05:05.936045 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:05:05.936132 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:05:05.951821 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:05:05.955366 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:05:05.955435 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:05:05.968059 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:05:05.968146 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:05:05.976242 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:05:05.976316 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:05:05.987666 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 14 13:05:05.987764 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:05:05.994182 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:05:06.016444 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:05:06.016607 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:05:06.023027 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:05:06.023072 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:05:06.027456 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:05:06.027504 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:05:06.027867 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:05:06.027909 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:05:06.034747 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:05:06.034799 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:05:06.055713 kernel: hv_netvsc 000d3ab8-7a4b-000d-3ab8-7a4b000d3ab8 eth0: Data path switched from VF: enP60202s1 Jan 14 13:05:06.065057 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:05:06.065152 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:05:06.079170 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:05:06.082460 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:05:06.082549 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:05:06.089457 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jan 14 13:05:06.092585 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:05:06.096391 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:05:06.099948 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:05:06.106835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:05:06.111354 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:05:06.128457 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:05:06.128599 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:05:06.134177 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:05:06.134276 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:05:06.286875 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:05:06.287034 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:05:06.291248 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:05:06.291316 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:05:06.291370 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:05:06.308976 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:05:06.677235 systemd[1]: Switching root. 
Jan 14 13:05:06.760086 systemd-journald[177]: Journal stopped Jan 14 13:04:56.106383 kernel: secureboot: Secure boot disabled Jan 14 13:04:56.106394 kernel: SMBIOS 3.1.0 present. Jan 14 13:04:56.106406 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 14 13:04:56.106418 kernel: Hypervisor detected: Microsoft Hyper-V Jan 14 13:04:56.106430 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 14 13:04:56.106442 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jan 14 13:04:56.106453 kernel: Hyper-V: Nested features: 0x1e0101 Jan 14 13:04:56.106467 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 14 13:04:56.106479 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 14 13:04:56.106491 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:04:56.106503 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:04:56.106515 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 14 13:04:56.106527 kernel: tsc: Detected 2593.904 MHz processor Jan 14 13:04:56.106539 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 13:04:56.106551 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 13:04:56.106563 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 14 13:04:56.106577 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 13:04:56.106589 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 13:04:56.106601 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 14 13:04:56.106613 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 14 13:04:56.106624 kernel: Using GB pages for direct mapping Jan 14 13:04:56.106636 kernel: ACPI: Early table checksum verification disabled Jan 14 13:04:56.106648 kernel: ACPI: 
RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 14 13:04:56.106665 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:56.106680 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:56.106693 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 14 13:04:56.106705 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 13:04:56.106718 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:56.106731 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:56.106743 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:56.106758 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:56.106771 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:56.106784 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:56.106796 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:56.106809 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 13:04:56.106821 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 14 13:04:56.106833 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 13:04:56.106845 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14 13:04:56.106858 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 13:04:56.106874 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 13:04:56.106888 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 13:04:56.106901 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Jan 14 13:04:56.106915 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 13:04:56.106927 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 14 13:04:56.106938 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 14 13:04:56.106950 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 14 13:04:56.106961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 13:04:56.106973 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 14 13:04:56.106987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 14 13:04:56.106998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 13:04:56.107010 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 13:04:56.107022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 13:04:56.107033 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 13:04:56.107045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 13:04:56.107057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 13:04:56.107069 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 13:04:56.107084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 13:04:56.107096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 13:04:56.107108 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 14 13:04:56.107120 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 14 13:04:56.107133 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 14 13:04:56.107145 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 14 13:04:56.107158 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 14 13:04:56.107171 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 14 13:04:56.107183 kernel: Zone ranges: Jan 14 13:04:56.107198 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 13:04:56.107211 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 13:04:56.107223 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:04:56.107236 kernel: Movable zone start for each node Jan 14 13:04:56.107282 kernel: Early memory node ranges Jan 14 13:04:56.107295 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 13:04:56.107308 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 14 13:04:56.107321 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 13:04:56.107333 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:04:56.107349 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 13:04:56.107361 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 13:04:56.107374 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 13:04:56.107387 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 14 13:04:56.107399 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 13:04:56.107412 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 13:04:56.107425 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 14 13:04:56.107439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 13:04:56.107452 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 13:04:56.107467 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 13:04:56.107480 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 14 13:04:56.107494 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 13:04:56.107507 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 13:04:56.107521 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 13:04:56.107534 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 13:04:56.107547 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 14 13:04:56.107560 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 14 13:04:56.107572 kernel: pcpu-alloc: [0] 0 1 Jan 14 13:04:56.107586 kernel: Hyper-V: PV spinlocks enabled Jan 14 13:04:56.107599 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 13:04:56.107614 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 14 13:04:56.107627 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 14 13:04:56.107639 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 13:04:56.107651 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 13:04:56.107663 kernel: Fallback order for Node 0: 0 Jan 14 13:04:56.107675 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 14 13:04:56.107691 kernel: Policy zone: Normal Jan 14 13:04:56.107712 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 13:04:56.107725 kernel: software IO TLB: area num 2. 
Jan 14 13:04:56.107741 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 312164K reserved, 0K cma-reserved) Jan 14 13:04:56.107754 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 13:04:56.107767 kernel: ftrace: allocating 37890 entries in 149 pages Jan 14 13:04:56.107780 kernel: ftrace: allocated 149 pages with 4 groups Jan 14 13:04:56.107792 kernel: Dynamic Preempt: voluntary Jan 14 13:04:56.107805 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 13:04:56.107819 kernel: rcu: RCU event tracing is enabled. Jan 14 13:04:56.107832 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 13:04:56.107847 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 13:04:56.107860 kernel: Rude variant of Tasks RCU enabled. Jan 14 13:04:56.107873 kernel: Tracing variant of Tasks RCU enabled. Jan 14 13:04:56.107886 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 13:04:56.107899 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 13:04:56.107912 kernel: Using NULL legacy PIC Jan 14 13:04:56.107927 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 13:04:56.107940 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 14 13:04:56.107953 kernel: Console: colour dummy device 80x25 Jan 14 13:04:56.107965 kernel: printk: console [tty1] enabled Jan 14 13:04:56.107978 kernel: printk: console [ttyS0] enabled Jan 14 13:04:56.107991 kernel: printk: bootconsole [earlyser0] disabled Jan 14 13:04:56.108004 kernel: ACPI: Core revision 20230628 Jan 14 13:04:56.108017 kernel: Failed to register legacy timer interrupt Jan 14 13:04:56.108030 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 13:04:56.108045 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 13:04:56.108058 kernel: Hyper-V: Using IPI hypercalls Jan 14 13:04:56.108071 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 13:04:56.108084 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 13:04:56.108097 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 13:04:56.108110 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 13:04:56.108123 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 13:04:56.108136 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 13:04:56.108150 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904) Jan 14 13:04:56.108165 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 14 13:04:56.108178 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 14 13:04:56.108192 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 13:04:56.108204 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 13:04:56.108217 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 14 13:04:56.108229 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 14 13:04:56.108250 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 14 13:04:56.108264 kernel: RETBleed: Vulnerable Jan 14 13:04:56.108276 kernel: Speculative Store Bypass: Vulnerable Jan 14 13:04:56.108289 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:04:56.108305 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:04:56.108318 kernel: GDS: Unknown: Dependent on hypervisor status Jan 14 13:04:56.108330 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 13:04:56.108343 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 13:04:56.108356 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 13:04:56.108369 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 13:04:56.108381 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 13:04:56.108394 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 13:04:56.108407 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 13:04:56.108420 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 13:04:56.108432 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 13:04:56.108448 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 13:04:56.108461 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 14 13:04:56.108473 kernel: Freeing SMP alternatives memory: 32K Jan 14 13:04:56.108486 kernel: pid_max: default: 32768 minimum: 301 Jan 14 13:04:56.108499 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 14 13:04:56.108511 kernel: landlock: Up and running. Jan 14 13:04:56.108524 kernel: SELinux: Initializing. 
Jan 14 13:04:56.108537 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:04:56.108550 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:04:56.108563 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 14 13:04:56.108576 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:04:56.108592 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:04:56.108605 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:04:56.108618 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 14 13:04:56.108631 kernel: signal: max sigframe size: 3632 Jan 14 13:04:56.108644 kernel: rcu: Hierarchical SRCU implementation. Jan 14 13:04:56.108657 kernel: rcu: Max phase no-delay instances is 400. Jan 14 13:04:56.108670 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 13:04:56.108683 kernel: smp: Bringing up secondary CPUs ... Jan 14 13:04:56.108696 kernel: smpboot: x86: Booting SMP configuration: Jan 14 13:04:56.108711 kernel: .... node #0, CPUs: #1 Jan 14 13:04:56.108724 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 14 13:04:56.108738 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 14 13:04:56.108751 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 13:04:56.108764 kernel: smpboot: Max logical packages: 1 Jan 14 13:04:56.108777 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS) Jan 14 13:04:56.108789 kernel: devtmpfs: initialized Jan 14 13:04:56.108802 kernel: x86/mm: Memory block size: 128MB Jan 14 13:04:56.108815 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 13:04:56.108831 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 13:04:56.108844 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 13:04:56.108858 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 13:04:56.108870 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 13:04:56.108884 kernel: audit: initializing netlink subsys (disabled) Jan 14 13:04:56.108898 kernel: audit: type=2000 audit(1736859894.028:1): state=initialized audit_enabled=0 res=1 Jan 14 13:04:56.108911 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 13:04:56.108931 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 13:04:56.108956 kernel: cpuidle: using governor menu Jan 14 13:04:56.108983 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 13:04:56.109005 kernel: dca service started, version 1.12.1 Jan 14 13:04:56.109023 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 14 13:04:56.109037 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 13:04:56.109051 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:04:56.109066 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:04:56.109080 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:04:56.109094 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:04:56.109111 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:04:56.109125 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:04:56.109139 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:04:56.109153 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:04:56.109168 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:04:56.109182 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:04:56.109196 kernel: ACPI: Interpreter enabled
Jan 14 13:04:56.109210 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:04:56.109224 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:04:56.109241 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:04:56.109785 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:04:56.109801 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:04:56.109816 kernel: iommu: Default domain type: Translated
Jan 14 13:04:56.109830 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:04:56.109844 kernel: efivars: Registered efivars operations
Jan 14 13:04:56.109859 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:04:56.109873 kernel: PCI: System does not support PCI
Jan 14 13:04:56.109887 kernel: vgaarb: loaded
Jan 14 13:04:56.109902 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:04:56.109920 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:04:56.109934 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:04:56.109948 kernel: pnp: PnP ACPI init
Jan 14 13:04:56.109962 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:04:56.109977 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:04:56.109991 kernel: NET: Registered PF_INET protocol family
Jan 14 13:04:56.110005 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:04:56.110020 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:04:56.110037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:04:56.110051 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:04:56.110066 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:04:56.110080 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:04:56.110094 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:04:56.110109 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:04:56.110123 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:04:56.110137 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:04:56.110151 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:04:56.110168 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:04:56.110182 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 13:04:56.110196 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:04:56.110211 kernel: Initialise system trusted keyrings
Jan 14 13:04:56.110224 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:04:56.110239 kernel: Key type asymmetric registered
Jan 14 13:04:56.110263 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:04:56.110277 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:04:56.110291 kernel: io scheduler mq-deadline registered
Jan 14 13:04:56.110308 kernel: io scheduler kyber registered
Jan 14 13:04:56.110323 kernel: io scheduler bfq registered
Jan 14 13:04:56.110337 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:04:56.110351 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:04:56.110365 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:04:56.110379 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:04:56.110394 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:04:56.110565 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:04:56.110687 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:04:55 UTC (1736859895)
Jan 14 13:04:56.110829 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:04:56.110847 kernel: intel_pstate: CPU model not supported
Jan 14 13:04:56.110862 kernel: efifb: probing for efifb
Jan 14 13:04:56.110873 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:04:56.110886 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:04:56.110900 kernel: efifb: scrolling: redraw
Jan 14 13:04:56.110914 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:04:56.110927 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:04:56.110942 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:04:56.110954 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:04:56.110968 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:04:56.110980 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:04:56.110991 kernel: Segment Routing with IPv6
Jan 14 13:04:56.111010 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:04:56.111028 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:04:56.111040 kernel: Key type dns_resolver registered
Jan 14 13:04:56.111052 kernel: IPI shorthand broadcast: enabled
Jan 14 13:04:56.111068 kernel: sched_clock: Marking stable (911003000, 50301400)->(1206683400, -245379000)
Jan 14 13:04:56.111081 kernel: registered taskstats version 1
Jan 14 13:04:56.111095 kernel: Loading compiled-in X.509 certificates
Jan 14 13:04:56.111107 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 14 13:04:56.111119 kernel: Key type .fscrypt registered
Jan 14 13:04:56.111132 kernel: Key type fscrypt-provisioning registered
Jan 14 13:04:56.111145 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:04:56.111159 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:04:56.111173 kernel: ima: No architecture policies found
Jan 14 13:04:56.111189 kernel: clk: Disabling unused clocks
Jan 14 13:04:56.111202 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 14 13:04:56.111217 kernel: Write protecting the kernel read-only data: 38912k
Jan 14 13:04:56.111231 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 14 13:04:56.111264 kernel: Run /init as init process
Jan 14 13:04:56.111278 kernel:   with arguments:
Jan 14 13:04:56.111292 kernel:     /init
Jan 14 13:04:56.111306 kernel:   with environment:
Jan 14 13:04:56.111319 kernel:     HOME=/
Jan 14 13:04:56.111336 kernel:     TERM=linux
Jan 14 13:04:56.111350 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:04:56.111368 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:04:56.111385 systemd[1]: Detected virtualization microsoft.
Jan 14 13:04:56.111400 systemd[1]: Detected architecture x86-64.
Jan 14 13:04:56.111415 systemd[1]: Running in initrd.
Jan 14 13:04:56.111429 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:04:56.111448 systemd[1]: Hostname set to <localhost>.
Jan 14 13:04:56.111466 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:04:56.111481 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:04:56.111496 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:04:56.111512 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:04:56.111528 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:04:56.111543 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:04:56.111558 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:04:56.111575 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:04:56.111591 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:04:56.111606 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:04:56.111621 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:04:56.111636 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:04:56.111650 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:04:56.111665 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:04:56.111682 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:04:56.111697 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:04:56.111711 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:04:56.111726 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:04:56.111741 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:04:56.111755 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:04:56.111767 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:04:56.111782 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:04:56.111796 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:04:56.111813 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:04:56.111828 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:04:56.111842 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:04:56.111857 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:04:56.111871 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:04:56.111885 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:04:56.111900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:04:56.111939 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:04:56.111975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:56.111989 systemd-journald[177]: Journal started
Jan 14 13:04:56.112026 systemd-journald[177]: Runtime Journal (/run/log/journal/9e9879eee8cc48e1a4c9d10c63dcb8c7) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:04:56.111049 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:04:56.122275 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:04:56.126701 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:04:56.133125 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:04:56.140540 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:04:56.145242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:56.159264 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:04:56.167541 kernel: Bridge firewalling registered
Jan 14 13:04:56.164547 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:04:56.167553 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:04:56.170373 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:04:56.176801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:04:56.177984 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:04:56.194655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:04:56.198135 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:04:56.215932 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:04:56.220941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:04:56.231498 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:04:56.239741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:04:56.244661 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:04:56.258436 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:04:56.273554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:04:56.283190 dracut-cmdline[215]: dracut-dracut-053
Jan 14 13:04:56.288124 systemd-resolved[210]: Positive Trust Anchors:
Jan 14 13:04:56.293338 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:04:56.288142 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:04:56.288207 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:04:56.296556 systemd-resolved[210]: Defaulting to hostname 'linux'.
Jan 14 13:04:56.297555 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:04:56.312683 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:04:56.369271 kernel: SCSI subsystem initialized
Jan 14 13:04:56.380275 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 13:04:56.391272 kernel: iscsi: registered transport (tcp)
Jan 14 13:04:56.413384 kernel: iscsi: registered transport (qla4xxx)
Jan 14 13:04:56.413474 kernel: QLogic iSCSI HBA Driver
Jan 14 13:04:56.449229 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:04:56.459451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 13:04:56.488089 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 13:04:56.488178 kernel: device-mapper: uevent: version 1.0.3
Jan 14 13:04:56.491637 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 13:04:56.533276 kernel: raid6: avx512x4 gen() 18522 MB/s
Jan 14 13:04:56.552263 kernel: raid6: avx512x2 gen() 18581 MB/s
Jan 14 13:04:56.571260 kernel: raid6: avx512x1 gen() 18600 MB/s
Jan 14 13:04:56.591263 kernel: raid6: avx2x4 gen() 18465 MB/s
Jan 14 13:04:56.610259 kernel: raid6: avx2x2 gen() 18502 MB/s
Jan 14 13:04:56.630868 kernel: raid6: avx2x1 gen() 13870 MB/s
Jan 14 13:04:56.630931 kernel: raid6: using algorithm avx512x1 gen() 18600 MB/s
Jan 14 13:04:56.653090 kernel: raid6: .... xor() 26625 MB/s, rmw enabled
Jan 14 13:04:56.653152 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 13:04:56.677273 kernel: xor: automatically using best checksumming function   avx
Jan 14 13:04:56.818275 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 13:04:56.828155 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:04:56.837551 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:04:56.851776 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 14 13:04:56.856214 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:04:56.871424 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 13:04:56.885304 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jan 14 13:04:56.914528 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:04:56.929410 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:04:56.969069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:04:56.985468 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 13:04:57.006704 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:04:57.014821 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:04:57.025019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:04:57.032451 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:04:57.045485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 13:04:57.064383 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 13:04:57.080959 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:04:57.089667 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 14 13:04:57.089693 kernel: AES CTR mode by8 optimization enabled
Jan 14 13:04:57.098273 kernel: hv_vmbus: Vmbus version:5.2
Jan 14 13:04:57.108372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:04:57.108603 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:04:57.112209 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:04:57.126052 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:04:57.129186 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:57.136112 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:57.154371 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 13:04:57.154430 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 14 13:04:57.154450 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 13:04:57.154467 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 13:04:57.152021 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:57.783445 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 14 13:04:57.783475 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 14 13:04:57.783635 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 13:04:57.783648 kernel: scsi host1: storvsc_host_t
Jan 14 13:04:57.783779 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 13:04:57.783794 kernel: scsi host0: storvsc_host_t
Jan 14 13:04:57.783911 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 14 13:04:57.783927 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 14 13:04:57.784044 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 14 13:04:57.784160 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 13:04:57.784174 kernel: PTP clock support registered
Jan 14 13:04:57.784185 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 13:04:57.784198 kernel: hv_vmbus: registering driver hv_utils
Jan 14 13:04:57.784214 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 13:04:57.784229 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 13:04:57.784242 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 13:04:57.739293 systemd-resolved[210]: Clock change detected. Flushing caches.
Jan 14 13:04:57.793030 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Jan 14 13:04:57.799436 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 13:04:57.799460 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Jan 14 13:04:57.803063 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:04:57.803169 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:57.819935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:57.833764 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 13:04:57.848711 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Jan 14 13:04:57.848862 kernel: sd 1:0:0:0: [sda] Write Protect is off
Jan 14 13:04:57.848978 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 13:04:57.849094 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 13:04:57.849222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:04:57.849237 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Jan 14 13:04:57.848577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:57.866296 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:04:57.892950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:04:57.910789 kernel: hv_netvsc 000d3ab8-7a4b-000d-3ab8-7a4b000d3ab8 eth0: VF slot 1 added
Jan 14 13:04:57.917717 kernel: hv_vmbus: registering driver hv_pci
Jan 14 13:04:57.923706 kernel: hv_pci 51bddd3c-eb2a-46e7-bb62-c89a827f5def: PCI VMBus probing: Using version 0x10004
Jan 14 13:04:57.972977 kernel: hv_pci 51bddd3c-eb2a-46e7-bb62-c89a827f5def: PCI host bridge to bus eb2a:00
Jan 14 13:04:57.973153 kernel: pci_bus eb2a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 14 13:04:57.973339 kernel: pci_bus eb2a:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 13:04:57.973497 kernel: pci eb2a:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 14 13:04:57.973678 kernel: pci eb2a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:04:57.973886 kernel: pci eb2a:00:02.0: enabling Extended Tags
Jan 14 13:04:57.974059 kernel: pci eb2a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at eb2a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 14 13:04:57.974217 kernel: pci_bus eb2a:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 13:04:57.974366 kernel: pci eb2a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:04:58.135399 kernel: mlx5_core eb2a:00:02.0: enabling device (0000 -> 0002)
Jan 14 13:04:58.365027 kernel: mlx5_core eb2a:00:02.0: firmware version: 14.30.5000
Jan 14 13:04:58.365241 kernel: hv_netvsc 000d3ab8-7a4b-000d-3ab8-7a4b000d3ab8 eth0: VF registering: eth1
Jan 14 13:04:58.365405 kernel: mlx5_core eb2a:00:02.0 eth1: joined to eth0
Jan 14 13:04:58.365586 kernel: mlx5_core eb2a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 14 13:04:58.337097 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 13:04:58.373773 kernel: mlx5_core eb2a:00:02.0 enP60202s1: renamed from eth1
Jan 14 13:04:58.441717 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (444)
Jan 14 13:04:58.459806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:04:58.471712 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (448)
Jan 14 13:04:58.482175 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 13:04:58.506631 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 13:04:58.513762 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 13:04:58.527920 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 13:04:58.544199 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:04:58.550714 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:04:59.558661 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:04:59.559384 disk-uuid[603]: The operation has completed successfully.
Jan 14 13:04:59.649334 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 13:04:59.649459 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 13:04:59.663865 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 13:04:59.672601 sh[689]: Success
Jan 14 13:04:59.702724 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 14 13:04:59.935345 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:04:59.949858 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 13:04:59.955437 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 13:04:59.973631 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 14 13:04:59.973722 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:04:59.977308 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 13:04:59.980497 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 13:04:59.983272 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 13:05:00.324599 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 13:05:00.328152 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 13:05:00.335956 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 13:05:00.341842 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 13:05:00.365240 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:00.365307 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:05:00.367856 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:05:00.414716 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:05:00.424263 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 13:05:00.431803 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:00.432611 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:05:00.443896 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:05:00.449292 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 13:05:00.468018 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 13:05:00.470448 systemd-networkd[871]: lo: Link UP
Jan 14 13:05:00.470453 systemd-networkd[871]: lo: Gained carrier
Jan 14 13:05:00.472968 systemd-networkd[871]: Enumeration completed
Jan 14 13:05:00.473339 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:05:00.476109 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:05:00.476112 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:05:00.477426 systemd[1]: Reached target network.target - Network.
Jan 14 13:05:00.550720 kernel: mlx5_core eb2a:00:02.0 enP60202s1: Link up
Jan 14 13:05:00.581485 kernel: hv_netvsc 000d3ab8-7a4b-000d-3ab8-7a4b000d3ab8 eth0: Data path switched to VF: enP60202s1
Jan 14 13:05:00.581029 systemd-networkd[871]: enP60202s1: Link UP
Jan 14 13:05:00.581178 systemd-networkd[871]: eth0: Link UP
Jan 14 13:05:00.581378 systemd-networkd[871]: eth0: Gained carrier
Jan 14 13:05:00.581394 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:05:00.593760 systemd-networkd[871]: enP60202s1: Gained carrier Jan 14 13:05:00.622756 systemd-networkd[871]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 14 13:05:01.537393 ignition[873]: Ignition 2.20.0 Jan 14 13:05:01.537406 ignition[873]: Stage: fetch-offline Jan 14 13:05:01.537451 ignition[873]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:01.537461 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:01.537568 ignition[873]: parsed url from cmdline: "" Jan 14 13:05:01.537572 ignition[873]: no config URL provided Jan 14 13:05:01.537579 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:05:01.537589 ignition[873]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:05:01.537596 ignition[873]: failed to fetch config: resource requires networking Jan 14 13:05:01.542372 ignition[873]: Ignition finished successfully Jan 14 13:05:01.559435 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:05:01.570893 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:05:01.584385 ignition[883]: Ignition 2.20.0 Jan 14 13:05:01.584396 ignition[883]: Stage: fetch Jan 14 13:05:01.584605 ignition[883]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:01.584618 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:01.584749 ignition[883]: parsed url from cmdline: "" Jan 14 13:05:01.584753 ignition[883]: no config URL provided Jan 14 13:05:01.584758 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:05:01.584766 ignition[883]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:05:01.584793 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:05:01.679232 ignition[883]: GET result: OK Jan 14 13:05:01.679376 ignition[883]: config has been read from IMDS userdata Jan 14 13:05:01.680940 ignition[883]: parsing config with SHA512: 43a201e4375d15f8d2efdd3db9f1472f1d4bb8e99a5ba5f970a41906d32be78e4f58ccf2344587088a35d51d76eebf9b72ebf4be85564fe6006ace81bb8a9a2d Jan 14 13:05:01.686060 unknown[883]: fetched base config from "system" Jan 14 13:05:01.686073 unknown[883]: fetched base config from "system" Jan 14 13:05:01.686500 ignition[883]: fetch: fetch complete Jan 14 13:05:01.686080 unknown[883]: fetched user config from "azure" Jan 14 13:05:01.686506 ignition[883]: fetch: fetch passed Jan 14 13:05:01.688186 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:05:01.686551 ignition[883]: Ignition finished successfully Jan 14 13:05:01.702870 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 13:05:01.718190 ignition[889]: Ignition 2.20.0 Jan 14 13:05:01.718203 ignition[889]: Stage: kargs Jan 14 13:05:01.720436 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 14 13:05:01.718420 ignition[889]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:01.718434 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:01.719319 ignition[889]: kargs: kargs passed Jan 14 13:05:01.719370 ignition[889]: Ignition finished successfully Jan 14 13:05:01.735249 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:05:01.758767 ignition[895]: Ignition 2.20.0 Jan 14 13:05:01.758780 ignition[895]: Stage: disks Jan 14 13:05:01.761032 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:05:01.759027 ignition[895]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:01.765774 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:05:01.759042 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:01.771660 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:05:01.760012 ignition[895]: disks: disks passed Jan 14 13:05:01.777467 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:05:01.760059 ignition[895]: Ignition finished successfully Jan 14 13:05:01.778047 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:05:01.788379 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:05:01.806255 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:05:01.879845 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:05:01.885510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:05:01.896841 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 13:05:01.992718 kernel: EXT4-fs (sda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none. Jan 14 13:05:01.993444 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jan 14 13:05:01.996765 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:05:02.039831 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:05:02.045439 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:05:02.058730 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (914)
Jan 14 13:05:02.058785 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:02.062835 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 13:05:02.077085 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:05:02.077114 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:05:02.077128 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:05:02.070405 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:05:02.070442 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:05:02.092396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:05:02.095026 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:05:02.105866 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:05:02.462883 systemd-networkd[871]: eth0: Gained IPv6LL
Jan 14 13:05:02.590945 systemd-networkd[871]: enP60202s1: Gained IPv6LL
Jan 14 13:05:02.778314 coreos-metadata[916]: Jan 14 13:05:02.778 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:05:02.785018 coreos-metadata[916]: Jan 14 13:05:02.784 INFO Fetch successful
Jan 14 13:05:02.787902 coreos-metadata[916]: Jan 14 13:05:02.785 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:05:02.800741 coreos-metadata[916]: Jan 14 13:05:02.800 INFO Fetch successful
Jan 14 13:05:02.814431 coreos-metadata[916]: Jan 14 13:05:02.814 INFO wrote hostname ci-4186.1.0-a-f264a924af to /sysroot/etc/hostname
Jan 14 13:05:02.819806 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 13:05:02.822041 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:05:02.933143 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Jan 14 13:05:02.938573 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 13:05:02.958043 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 13:05:03.710399 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:05:03.721813 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:05:03.729879 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:05:03.736641 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:05:03.742819 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:03.765026 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:05:03.773861 ignition[1033]: INFO : Ignition 2.20.0
Jan 14 13:05:03.776177 ignition[1033]: INFO : Stage: mount
Jan 14 13:05:03.776177 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:03.776177 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:03.786720 ignition[1033]: INFO : mount: mount passed
Jan 14 13:05:03.786720 ignition[1033]: INFO : Ignition finished successfully
Jan 14 13:05:03.777475 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:05:03.794122 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:05:03.802838 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:05:03.833383 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045)
Jan 14 13:05:03.833456 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:03.838310 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:05:03.840911 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:05:03.887960 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:05:03.889851 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:05:03.912902 ignition[1062]: INFO : Ignition 2.20.0
Jan 14 13:05:03.915335 ignition[1062]: INFO : Stage: files
Jan 14 13:05:03.915335 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:03.915335 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:03.923499 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 13:05:03.930479 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 13:05:03.934497 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:05:04.024540 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:05:04.029961 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 13:05:04.034723 unknown[1062]: wrote ssh authorized keys file for user: core
Jan 14 13:05:04.037645 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:05:04.077978 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 13:05:04.084183 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 14 13:05:04.123593 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 13:05:04.272236 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 13:05:04.278240 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 13:05:04.278240 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:05:04.287846 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:05:04.292859 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 14 13:05:04.801536 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 13:05:05.301320 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:05:05.301320 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 13:05:05.351485 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:05:05.360144 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:05:05.360144 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 13:05:05.360144 ignition[1062]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 13:05:05.360144 ignition[1062]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 13:05:05.360144 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:05:05.360144 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:05:05.360144 ignition[1062]: INFO : files: files passed
Jan 14 13:05:05.360144 ignition[1062]: INFO : Ignition finished successfully
Jan 14 13:05:05.353831 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:05:05.378631 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:05:05.397919 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:05:05.410561 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:05:05.410721 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:05:05.438676 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:05:05.438676 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:05:05.455038 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:05:05.442827 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:05:05.447717 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:05:05.466533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:05:05.503515 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:05:05.503657 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:05:05.510262 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:05:05.516515 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:05:05.527065 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:05:05.532994 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:05:05.547052 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:05:05.557894 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:05:05.571249 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:05:05.572422 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:05:05.572875 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:05:05.573329 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:05:05.573450 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:05:05.574248 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:05:05.574826 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:05:05.575214 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:05:05.575680 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:05:05.576141 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:05:05.576593 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:05:05.577070 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:05:05.577660 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:05:05.578220 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:05:05.578671 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:05:05.579547 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:05:05.579685 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:05:05.580605 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:05:05.581108 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:05:05.581527 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 13:05:05.703488 ignition[1114]: INFO : Ignition 2.20.0
Jan 14 13:05:05.703488 ignition[1114]: INFO : Stage: umount
Jan 14 13:05:05.703488 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:05.703488 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:05.703488 ignition[1114]: INFO : umount: umount passed
Jan 14 13:05:05.703488 ignition[1114]: INFO : Ignition finished successfully
Jan 14 13:05:05.583475 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:05:05.628755 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 13:05:05.634753 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:05:05.656710 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 13:05:05.665964 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:05:05.667178 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:05:05.667291 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:05:05.667565 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 14 13:05:05.667651 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:05:05.685655 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:05:05.722585 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:05:05.755436 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:05:05.755663 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:05:05.759152 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:05:05.759301 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:05:05.775882 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:05:05.780677 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:05:05.787666 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 13:05:05.789086 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:05:05.789351 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:05:05.798281 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 13:05:05.798353 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 13:05:05.803866 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 13:05:05.806336 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 13:05:05.816569 systemd[1]: Stopped target network.target - Network.
Jan 14 13:05:05.819055 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:05:05.819123 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:05:05.827740 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:05:05.837257 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:05:05.843764 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:05:05.851190 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:05:05.853868 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:05:05.861435 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:05:05.861500 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:05:05.868724 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:05:05.868787 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:05:05.876534 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 13:05:05.879185 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 13:05:05.884482 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 13:05:05.884566 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 13:05:05.892997 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 13:05:05.893303 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 13:05:05.902141 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:05:05.902238 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:05:05.913877 systemd-networkd[871]: eth0: DHCPv6 lease lost
Jan 14 13:05:05.915054 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 13:05:05.915171 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 13:05:05.922265 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 13:05:05.922416 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 13:05:05.936045 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 13:05:05.936132 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:05:05.951821 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 13:05:05.955366 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 13:05:05.955435 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:05:05.968059 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:05:05.968146 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:05:05.976242 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 13:05:05.976316 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:05:05.987666 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:05:05.987764 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:05:05.994182 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:05:06.016444 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 13:05:06.016607 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:05:06.023027 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 13:05:06.023072 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:05:06.027456 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 13:05:06.027504 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:05:06.027867 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 13:05:06.027909 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:05:06.034747 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 13:05:06.034799 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:05:06.055713 kernel: hv_netvsc 000d3ab8-7a4b-000d-3ab8-7a4b000d3ab8 eth0: Data path switched from VF: enP60202s1
Jan 14 13:05:06.065057 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:05:06.065152 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:05:06.079170 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 13:05:06.082460 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 13:05:06.082549 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:05:06.089457 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 14 13:05:06.092585 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:05:06.096391 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:05:06.099948 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:05:06.106835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:05:06.111354 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:05:06.128457 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 13:05:06.128599 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 13:05:06.134177 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 13:05:06.134276 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 13:05:06.286875 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 13:05:06.287034 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 13:05:06.291248 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 13:05:06.291316 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 13:05:06.291370 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 13:05:06.308976 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 13:05:06.677235 systemd[1]: Switching root.
Jan 14 13:05:06.760086 systemd-journald[177]: Journal stopped
Jan 14 13:05:11.358602 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 14 13:05:11.358633 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 13:05:11.358645 kernel: SELinux: policy capability open_perms=1
Jan 14 13:05:11.358653 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 13:05:11.358664 kernel: SELinux: policy capability always_check_network=0
Jan 14 13:05:11.358673 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 13:05:11.358683 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 13:05:11.358726 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 13:05:11.358735 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 13:05:11.358744 kernel: audit: type=1403 audit(1736859908.143:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 14 13:05:11.358753 systemd[1]: Successfully loaded SELinux policy in 148.270ms.
Jan 14 13:05:11.358764 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.140ms.
Jan 14 13:05:11.358776 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:05:11.358786 systemd[1]: Detected virtualization microsoft.
Jan 14 13:05:11.358799 systemd[1]: Detected architecture x86-64.
Jan 14 13:05:11.358809 systemd[1]: Detected first boot.
Jan 14 13:05:11.358822 systemd[1]: Hostname set to <ci-4186.1.0-a-f264a924af>.
Jan 14 13:05:11.358831 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:05:11.358841 zram_generator::config[1158]: No configuration found.
Jan 14 13:05:11.358854 systemd[1]: Populated /etc with preset unit settings.
Jan 14 13:05:11.358864 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 13:05:11.358873 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 13:05:11.358887 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 13:05:11.358899 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 13:05:11.358912 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 13:05:11.358928 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 13:05:11.358941 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 13:05:11.358955 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 13:05:11.358968 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 13:05:11.358979 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 13:05:11.358989 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 13:05:11.359001 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:05:11.359011 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:05:11.359025 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 13:05:11.359042 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 13:05:11.359054 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 13:05:11.359069 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:05:11.359083 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 13:05:11.359097 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:05:11.359112 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 13:05:11.359132 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 13:05:11.359147 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:05:11.359166 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 13:05:11.359182 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:05:11.359198 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:05:11.359213 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:05:11.359228 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:05:11.359244 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 13:05:11.359259 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 13:05:11.359275 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:05:11.359293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:05:11.359310 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:05:11.359325 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 13:05:11.359344 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 13:05:11.359362 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 13:05:11.359374 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 13:05:11.359385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:05:11.359395 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 13:05:11.359406 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 13:05:11.359416 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 13:05:11.359427 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 13:05:11.359438 systemd[1]: Reached target machines.target - Containers.
Jan 14 13:05:11.359454 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 13:05:11.359470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:05:11.359483 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:05:11.359497 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 13:05:11.359511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:05:11.359525 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:05:11.359540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:05:11.359554 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 13:05:11.359569 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:05:11.359588 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 13:05:11.359603 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 13:05:11.359618 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 13:05:11.359634 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 13:05:11.359653 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 13:05:11.359670 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:05:11.359699 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:05:11.359729 kernel: fuse: init (API version 7.39)
Jan 14 13:05:11.359751 kernel: loop: module loaded
Jan 14 13:05:11.359766 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 13:05:11.359781 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 13:05:11.359795 kernel: ACPI: bus type drm_connector registered
Jan 14 13:05:11.359806 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:05:11.359817 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 14 13:05:11.359827 systemd[1]: Stopped verity-setup.service.
Jan 14 13:05:11.359842 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:05:11.359853 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 13:05:11.359868 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 13:05:11.359911 systemd-journald[1257]: Collecting audit messages is disabled.
Jan 14 13:05:11.359938 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 13:05:11.359952 systemd-journald[1257]: Journal started
Jan 14 13:05:11.359980 systemd-journald[1257]: Runtime Journal (/run/log/journal/416c06d59044412fa3789c48af1171f9) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:05:10.579318 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 13:05:10.694912 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 14 13:05:10.695302 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 13:05:11.372349 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:05:11.373497 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 13:05:11.377363 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 13:05:11.380977 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 13:05:11.384331 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 13:05:11.388606 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:05:11.393235 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 13:05:11.393582 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 13:05:11.397915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:05:11.398243 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:05:11.402402 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:05:11.402833 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:05:11.406713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:05:11.407052 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:05:11.411250 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 13:05:11.411589 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 13:05:11.415408 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:05:11.415817 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:05:11.419719 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:05:11.423723 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 13:05:11.427899 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 13:05:11.431817 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:05:11.445112 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 14 13:05:11.452840 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 13:05:11.458064 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 13:05:11.461202 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 13:05:11.461253 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:05:11.465622 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 14 13:05:11.476927 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 13:05:11.481590 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 13:05:11.484872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:05:11.530913 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 13:05:11.535561 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 13:05:11.539244 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:05:11.548946 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 13:05:11.552625 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:05:11.556917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:05:11.569911 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 13:05:11.578899 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 14 13:05:11.591009 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 14 13:05:11.593926 systemd-journald[1257]: Time spent on flushing to /var/log/journal/416c06d59044412fa3789c48af1171f9 is 30.242ms for 959 entries. Jan 14 13:05:11.593926 systemd-journald[1257]: System Journal (/var/log/journal/416c06d59044412fa3789c48af1171f9) is 8.0M, max 2.6G, 2.6G free. Jan 14 13:05:11.662973 systemd-journald[1257]: Received client request to flush runtime journal. Jan 14 13:05:11.602337 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 13:05:11.611997 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 13:05:11.618794 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 13:05:11.625568 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 13:05:11.637624 udevadm[1298]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 14 13:05:11.638765 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 13:05:11.651360 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 14 13:05:11.662149 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:05:11.675542 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 13:05:11.698726 kernel: loop0: detected capacity change from 0 to 28304 Jan 14 13:05:11.770156 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 14 13:05:11.770184 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 14 13:05:11.778661 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 13:05:11.779426 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 14 13:05:11.783714 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 14 13:05:11.798012 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 13:05:11.985817 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 13:05:11.993969 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:05:12.019779 systemd-tmpfiles[1314]: ACLs are not supported, ignoring. Jan 14 13:05:12.019806 systemd-tmpfiles[1314]: ACLs are not supported, ignoring. Jan 14 13:05:12.024679 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:05:12.083867 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 13:05:12.148726 kernel: loop1: detected capacity change from 0 to 141000 Jan 14 13:05:12.644721 kernel: loop2: detected capacity change from 0 to 211296 Jan 14 13:05:12.691721 kernel: loop3: detected capacity change from 0 to 138184 Jan 14 13:05:12.945528 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 13:05:12.958027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:05:12.983074 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Jan 14 13:05:13.173981 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:05:13.184432 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:05:13.256915 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 13:05:13.264809 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 13:05:13.321493 kernel: loop4: detected capacity change from 0 to 28304 Jan 14 13:05:13.340471 kernel: loop5: detected capacity change from 0 to 141000 Jan 14 13:05:13.358766 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 14 13:05:13.369727 kernel: loop6: detected capacity change from 0 to 211296 Jan 14 13:05:13.388715 kernel: loop7: detected capacity change from 0 to 138184 Jan 14 13:05:13.393718 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 13:05:13.415998 (sd-merge)[1348]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 14 13:05:13.418628 (sd-merge)[1348]: Merged extensions into '/usr'. Jan 14 13:05:13.431170 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 13:05:13.431192 systemd[1]: Reloading... Jan 14 13:05:13.435712 kernel: hv_vmbus: registering driver hv_balloon Jan 14 13:05:13.480980 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 14 13:05:13.481086 kernel: hv_vmbus: registering driver hyperv_fb Jan 14 13:05:13.485719 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 14 13:05:13.490925 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 14 13:05:13.499658 kernel: Console: switching to colour dummy device 80x25 Jan 14 13:05:13.506979 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:05:13.649722 zram_generator::config[1400]: No configuration found. Jan 14 13:05:13.734232 systemd-networkd[1327]: lo: Link UP Jan 14 13:05:13.736433 systemd-networkd[1327]: lo: Gained carrier Jan 14 13:05:13.743929 systemd-networkd[1327]: Enumeration completed Jan 14 13:05:13.744538 systemd-networkd[1327]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:05:13.744636 systemd-networkd[1327]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 13:05:13.868120 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1332) Jan 14 13:05:13.903346 kernel: mlx5_core eb2a:00:02.0 enP60202s1: Link up Jan 14 13:05:13.941769 kernel: hv_netvsc 000d3ab8-7a4b-000d-3ab8-7a4b000d3ab8 eth0: Data path switched to VF: enP60202s1 Jan 14 13:05:13.948548 systemd-networkd[1327]: enP60202s1: Link UP Jan 14 13:05:13.950217 systemd-networkd[1327]: eth0: Link UP Jan 14 13:05:13.950850 systemd-networkd[1327]: eth0: Gained carrier Jan 14 13:05:13.950954 systemd-networkd[1327]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:05:13.958132 systemd-networkd[1327]: enP60202s1: Gained carrier Jan 14 13:05:13.993806 systemd-networkd[1327]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 14 13:05:14.029718 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 14 13:05:14.085150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:05:14.195445 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:05:14.199721 systemd[1]: Reloading finished in 766 ms. Jan 14 13:05:14.231196 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:05:14.234988 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 13:05:14.279192 systemd[1]: Starting ensure-sysext.service... Jan 14 13:05:14.284480 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 13:05:14.289929 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 14 13:05:14.297843 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:05:14.305953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:05:14.321434 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 14 13:05:14.338782 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 13:05:14.339269 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 14 13:05:14.340571 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 14 13:05:14.343396 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 14 13:05:14.343855 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Jan 14 13:05:14.345730 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Jan 14 13:05:14.347243 systemd[1]: Reloading requested from client PID 1515 ('systemctl') (unit ensure-sysext.service)... Jan 14 13:05:14.347263 systemd[1]: Reloading... Jan 14 13:05:14.367147 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:05:14.367317 systemd-tmpfiles[1518]: Skipping /boot Jan 14 13:05:14.383462 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:05:14.383634 systemd-tmpfiles[1518]: Skipping /boot Jan 14 13:05:14.438042 lvm[1522]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 14 13:05:14.472718 zram_generator::config[1556]: No configuration found. Jan 14 13:05:14.602683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 14 13:05:14.700183 systemd[1]: Reloading finished in 352 ms. Jan 14 13:05:14.726213 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 13:05:14.731264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:05:14.735259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:05:14.738774 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 14 13:05:14.749910 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:05:14.759006 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:05:14.765020 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 13:05:14.772924 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 14 13:05:14.784835 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 13:05:14.798052 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:05:14.803047 lvm[1624]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 14 13:05:14.803856 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 13:05:14.814190 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:05:14.814463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:05:14.819050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:05:14.826020 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:05:14.842107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 14 13:05:14.845260 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:05:14.845459 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:05:14.857019 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 14 13:05:14.861672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:05:14.862034 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:05:14.866244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:05:14.866379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:05:14.870896 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:05:14.871083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:05:14.878134 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 13:05:14.889503 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:05:14.889950 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:05:14.890218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:05:14.890363 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:05:14.890524 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 14 13:05:14.890637 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:05:14.896897 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 13:05:14.905439 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:05:14.906197 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:05:14.917070 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:05:14.922926 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:05:14.935915 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:05:14.950050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:05:14.955336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:05:14.955443 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 13:05:14.958379 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:05:14.959057 systemd[1]: Finished ensure-sysext.service. Jan 14 13:05:14.962034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:05:14.962821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:05:14.970289 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:05:14.970469 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:05:14.973782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 14 13:05:14.973957 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:05:14.977821 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:05:14.978001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:05:14.985295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:05:14.985367 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:05:15.005797 systemd-resolved[1631]: Positive Trust Anchors: Jan 14 13:05:15.005820 systemd-resolved[1631]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:05:15.005865 systemd-resolved[1631]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:05:15.006877 systemd-networkd[1327]: enP60202s1: Gained IPv6LL Jan 14 13:05:15.028747 augenrules[1665]: No rules Jan 14 13:05:15.030174 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:05:15.030412 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:05:15.038006 systemd-resolved[1631]: Using system hostname 'ci-4186.1.0-a-f264a924af'. Jan 14 13:05:15.039867 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:05:15.044160 systemd[1]: Reached target network.target - Network. 
Jan 14 13:05:15.046710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:05:15.468356 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 13:05:15.472574 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 13:05:15.583020 systemd-networkd[1327]: eth0: Gained IPv6LL Jan 14 13:05:15.586536 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 13:05:15.590887 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 13:05:17.818634 ldconfig[1290]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 13:05:17.831560 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 13:05:17.840970 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 13:05:17.855850 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 13:05:17.859574 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:05:17.863074 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 13:05:17.866861 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 13:05:17.870731 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 13:05:17.874024 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 13:05:17.877369 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 14 13:05:17.881237 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 13:05:17.881289 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:05:17.883906 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:05:17.928388 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 13:05:17.933650 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 13:05:17.943004 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 13:05:17.946975 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 13:05:17.950204 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:05:17.953105 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:05:17.955828 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:05:17.955863 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:05:17.962861 systemd[1]: Starting chronyd.service - NTP client/server... Jan 14 13:05:17.967856 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 13:05:17.973941 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 13:05:17.988973 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 13:05:17.999886 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 13:05:18.013057 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 13:05:18.016797 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 14 13:05:18.017014 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 14 13:05:18.019840 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 14 13:05:18.027129 jq[1682]: false Jan 14 13:05:18.022530 (chronyd)[1678]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 14 13:05:18.025603 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 14 13:05:18.032892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:05:18.039941 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 13:05:18.046955 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 13:05:18.059559 KVP[1687]: KVP starting; pid is:1687 Jan 14 13:05:18.060924 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 13:05:18.073937 chronyd[1696]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 14 13:05:18.075928 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 13:05:18.082942 KVP[1687]: KVP LIC Version: 3.1 Jan 14 13:05:18.083712 kernel: hv_utils: KVP IC version 4.0 Jan 14 13:05:18.090447 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 13:05:18.106035 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 13:05:18.111616 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 13:05:18.112381 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 14 13:05:18.123942 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 13:05:18.131850 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 13:05:18.137771 chronyd[1696]: Timezone right/UTC failed leap second check, ignoring Jan 14 13:05:18.138089 chronyd[1696]: Loaded seccomp filter (level 2) Jan 14 13:05:18.146368 systemd[1]: Started chronyd.service - NTP client/server. Jan 14 13:05:18.146914 extend-filesystems[1686]: Found loop4 Jan 14 13:05:18.146914 extend-filesystems[1686]: Found loop5 Jan 14 13:05:18.146914 extend-filesystems[1686]: Found loop6 Jan 14 13:05:18.163925 extend-filesystems[1686]: Found loop7 Jan 14 13:05:18.163925 extend-filesystems[1686]: Found sda Jan 14 13:05:18.163925 extend-filesystems[1686]: Found sda1 Jan 14 13:05:18.163925 extend-filesystems[1686]: Found sda2 Jan 14 13:05:18.163925 extend-filesystems[1686]: Found sda3 Jan 14 13:05:18.163925 extend-filesystems[1686]: Found usr Jan 14 13:05:18.163925 extend-filesystems[1686]: Found sda4 Jan 14 13:05:18.163925 extend-filesystems[1686]: Found sda6 Jan 14 13:05:18.163925 extend-filesystems[1686]: Found sda7 Jan 14 13:05:18.163925 extend-filesystems[1686]: Found sda9 Jan 14 13:05:18.163925 extend-filesystems[1686]: Checking size of /dev/sda9 Jan 14 13:05:18.151252 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 13:05:18.180907 dbus-daemon[1681]: [system] SELinux support is enabled Jan 14 13:05:18.151917 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 13:05:18.180680 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 13:05:18.181945 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 13:05:18.186245 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 13:05:18.212484 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 14 13:05:18.212730 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 13:05:18.240771 extend-filesystems[1686]: Old size kept for /dev/sda9 Jan 14 13:05:18.240771 extend-filesystems[1686]: Found sr0 Jan 14 13:05:18.240514 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 13:05:18.244149 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 13:05:18.265726 jq[1706]: true Jan 14 13:05:18.278614 update_engine[1705]: I20250114 13:05:18.277992 1705 main.cc:92] Flatcar Update Engine starting Jan 14 13:05:18.282294 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 13:05:18.284032 update_engine[1705]: I20250114 13:05:18.283871 1705 update_check_scheduler.cc:74] Next update check in 8m12s Jan 14 13:05:18.293820 (ntainerd)[1722]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 14 13:05:18.299407 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 13:05:18.299468 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 13:05:18.301206 systemd-logind[1704]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 14 13:05:18.309131 systemd-logind[1704]: New seat seat0. Jan 14 13:05:18.310152 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 13:05:18.310198 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 14 13:05:18.313202 coreos-metadata[1680]: Jan 14 13:05:18.313 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:05:18.319656 systemd[1]: Started update-engine.service - Update Engine. Jan 14 13:05:18.325038 coreos-metadata[1680]: Jan 14 13:05:18.324 INFO Fetch successful Jan 14 13:05:18.325186 coreos-metadata[1680]: Jan 14 13:05:18.325 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 14 13:05:18.329205 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 13:05:18.336678 coreos-metadata[1680]: Jan 14 13:05:18.336 INFO Fetch successful Jan 14 13:05:18.340799 coreos-metadata[1680]: Jan 14 13:05:18.340 INFO Fetching http://168.63.129.16/machine/4758f00f-d0b1-4ed3-9dfa-d86e96230554/8ed93cce%2De60e%2D4ede%2Dbb78%2D0395310fabd1.%5Fci%2D4186.1.0%2Da%2Df264a924af?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 14 13:05:18.343970 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 13:05:18.349720 jq[1723]: true Jan 14 13:05:18.356891 coreos-metadata[1680]: Jan 14 13:05:18.356 INFO Fetch successful Jan 14 13:05:18.357195 coreos-metadata[1680]: Jan 14 13:05:18.357 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:05:18.371132 tar[1716]: linux-amd64/helm Jan 14 13:05:18.375728 coreos-metadata[1680]: Jan 14 13:05:18.375 INFO Fetch successful Jan 14 13:05:18.455729 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1738) Jan 14 13:05:18.487656 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 13:05:18.503856 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 14 13:05:18.628738 bash[1801]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 13:05:18.635244 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 13:05:18.641844 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 13:05:18.721550 locksmithd[1743]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 13:05:18.929438 sshd_keygen[1720]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 13:05:18.966219 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 13:05:18.978833 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 13:05:18.992147 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 14 13:05:19.020132 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 13:05:19.020395 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 13:05:19.035212 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 13:05:19.075956 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 13:05:19.091761 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 13:05:19.105201 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 13:05:19.110285 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 13:05:19.124953 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 14 13:05:19.251012 tar[1716]: linux-amd64/LICENSE
Jan 14 13:05:19.251012 tar[1716]: linux-amd64/README.md
Jan 14 13:05:19.270108 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 13:05:19.791010 containerd[1722]: time="2025-01-14T13:05:19.790309000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 14 13:05:19.816910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:05:19.830276 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:05:19.833475 containerd[1722]: time="2025-01-14T13:05:19.833420900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:05:19.835047 containerd[1722]: time="2025-01-14T13:05:19.835001100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:05:19.835047 containerd[1722]: time="2025-01-14T13:05:19.835038900Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 14 13:05:19.835197 containerd[1722]: time="2025-01-14T13:05:19.835061700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 14 13:05:19.835275 containerd[1722]: time="2025-01-14T13:05:19.835251800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 14 13:05:19.835316 containerd[1722]: time="2025-01-14T13:05:19.835276400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 14 13:05:19.835390 containerd[1722]: time="2025-01-14T13:05:19.835367600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:05:19.835390 containerd[1722]: time="2025-01-14T13:05:19.835386800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:05:19.835612 containerd[1722]: time="2025-01-14T13:05:19.835586500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:05:19.835612 containerd[1722]: time="2025-01-14T13:05:19.835609200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 14 13:05:19.835705 containerd[1722]: time="2025-01-14T13:05:19.835627300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:05:19.835705 containerd[1722]: time="2025-01-14T13:05:19.835640700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 14 13:05:19.835832 containerd[1722]: time="2025-01-14T13:05:19.835775100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:05:19.836250 containerd[1722]: time="2025-01-14T13:05:19.836130500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:05:19.836412 containerd[1722]: time="2025-01-14T13:05:19.836385500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:05:19.836412 containerd[1722]: time="2025-01-14T13:05:19.836408100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 14 13:05:19.836773 containerd[1722]: time="2025-01-14T13:05:19.836531300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 14 13:05:19.836773 containerd[1722]: time="2025-01-14T13:05:19.836595500Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 13:05:19.856170 containerd[1722]: time="2025-01-14T13:05:19.856118500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 14 13:05:19.856477 containerd[1722]: time="2025-01-14T13:05:19.856346500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 14 13:05:19.856477 containerd[1722]: time="2025-01-14T13:05:19.856396200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 14 13:05:19.856477 containerd[1722]: time="2025-01-14T13:05:19.856420300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 14 13:05:19.856663 containerd[1722]: time="2025-01-14T13:05:19.856550200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 14 13:05:19.856823 containerd[1722]: time="2025-01-14T13:05:19.856793700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 14 13:05:19.857136 containerd[1722]: time="2025-01-14T13:05:19.857108600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 14 13:05:19.857273 containerd[1722]: time="2025-01-14T13:05:19.857250800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 14 13:05:19.857336 containerd[1722]: time="2025-01-14T13:05:19.857276300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 14 13:05:19.857336 containerd[1722]: time="2025-01-14T13:05:19.857296600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 14 13:05:19.857336 containerd[1722]: time="2025-01-14T13:05:19.857316700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 14 13:05:19.857447 containerd[1722]: time="2025-01-14T13:05:19.857335800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 14 13:05:19.857447 containerd[1722]: time="2025-01-14T13:05:19.857353700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 14 13:05:19.857447 containerd[1722]: time="2025-01-14T13:05:19.857374200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 14 13:05:19.857447 containerd[1722]: time="2025-01-14T13:05:19.857393900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 14 13:05:19.857447 containerd[1722]: time="2025-01-14T13:05:19.857411800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 14 13:05:19.857447 containerd[1722]: time="2025-01-14T13:05:19.857433100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 14 13:05:19.857631 containerd[1722]: time="2025-01-14T13:05:19.857451800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 14 13:05:19.857631 containerd[1722]: time="2025-01-14T13:05:19.857505100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.857631 containerd[1722]: time="2025-01-14T13:05:19.857527700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.857631 containerd[1722]: time="2025-01-14T13:05:19.857550500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.857631 containerd[1722]: time="2025-01-14T13:05:19.857569800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.857631 containerd[1722]: time="2025-01-14T13:05:19.857587000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.857631 containerd[1722]: time="2025-01-14T13:05:19.857604600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.857631 containerd[1722]: time="2025-01-14T13:05:19.857622300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857640600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857658500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857679200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857717700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857736200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857752900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857773600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857806200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857834300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857852000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857905600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857932300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 14 13:05:19.858022 containerd[1722]: time="2025-01-14T13:05:19.857948000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 14 13:05:19.858506 containerd[1722]: time="2025-01-14T13:05:19.857964500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 14 13:05:19.858506 containerd[1722]: time="2025-01-14T13:05:19.857979300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858506 containerd[1722]: time="2025-01-14T13:05:19.857996800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 14 13:05:19.858506 containerd[1722]: time="2025-01-14T13:05:19.858011400Z" level=info msg="NRI interface is disabled by configuration."
Jan 14 13:05:19.858506 containerd[1722]: time="2025-01-14T13:05:19.858027100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 14 13:05:19.858726 containerd[1722]: time="2025-01-14T13:05:19.858420100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 14 13:05:19.858726 containerd[1722]: time="2025-01-14T13:05:19.858486700Z" level=info msg="Connect containerd service"
Jan 14 13:05:19.858726 containerd[1722]: time="2025-01-14T13:05:19.858546600Z" level=info msg="using legacy CRI server"
Jan 14 13:05:19.858726 containerd[1722]: time="2025-01-14T13:05:19.858558700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 13:05:19.859097 containerd[1722]: time="2025-01-14T13:05:19.858738200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 14 13:05:19.859558 containerd[1722]: time="2025-01-14T13:05:19.859526900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 13:05:19.859964 containerd[1722]: time="2025-01-14T13:05:19.859662700Z" level=info msg="Start subscribing containerd event"
Jan 14 13:05:19.860206 containerd[1722]: time="2025-01-14T13:05:19.860105000Z" level=info msg="Start recovering state"
Jan 14 13:05:19.860431 containerd[1722]: time="2025-01-14T13:05:19.860319800Z" level=info msg="Start event monitor"
Jan 14 13:05:19.860431 containerd[1722]: time="2025-01-14T13:05:19.860356000Z" level=info msg="Start snapshots syncer"
Jan 14 13:05:19.860431 containerd[1722]: time="2025-01-14T13:05:19.860369400Z" level=info msg="Start cni network conf syncer for default"
Jan 14 13:05:19.860431 containerd[1722]: time="2025-01-14T13:05:19.860385500Z" level=info msg="Start streaming server"
Jan 14 13:05:19.860589 containerd[1722]: time="2025-01-14T13:05:19.860497700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 13:05:19.860589 containerd[1722]: time="2025-01-14T13:05:19.860560400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 13:05:19.860908 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 13:05:19.864956 containerd[1722]: time="2025-01-14T13:05:19.864918200Z" level=info msg="containerd successfully booted in 0.076973s"
Jan 14 13:05:19.865184 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 13:05:19.868547 systemd[1]: Startup finished in 989ms (firmware) + 30.230s (loader) + 1.055s (kernel) + 11.737s (initrd) + 11.871s (userspace) = 55.884s.
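The final "Startup finished" line above is just the sum of the per-phase timings that systemd prints. Summing the figures as printed reproduces the reported total to within rounding, since each phase is itself rounded before printing. A worked check (not part of the log):

```python
# Phase durations exactly as printed by systemd above, in seconds
phases = {
    "firmware": 0.989,
    "loader": 30.230,
    "kernel": 1.055,
    "initrd": 11.737,
    "userspace": 11.871,
}

total = round(sum(phases.values()), 3)
# The printed figures are already rounded, so the sum (55.882s) lands
# within a few milliseconds of the reported 55.884s total.
assert abs(total - 55.884) < 0.005
```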
Jan 14 13:05:19.905154 agetty[1849]: failed to open credentials directory
Jan 14 13:05:19.905155 agetty[1848]: failed to open credentials directory
Jan 14 13:05:20.156235 login[1849]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Jan 14 13:05:20.156851 login[1848]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 13:05:20.169334 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 13:05:20.178883 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 13:05:20.183407 systemd-logind[1704]: New session 2 of user core.
Jan 14 13:05:20.197769 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 13:05:20.203060 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 13:05:20.229134 (systemd)[1879]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 14 13:05:20.433406 systemd[1879]: Queued start job for default target default.target.
Jan 14 13:05:20.444139 systemd[1879]: Created slice app.slice - User Application Slice.
Jan 14 13:05:20.444180 systemd[1879]: Reached target paths.target - Paths.
Jan 14 13:05:20.444316 systemd[1879]: Reached target timers.target - Timers.
Jan 14 13:05:20.446591 systemd[1879]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 13:05:20.473846 systemd[1879]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 13:05:20.474009 systemd[1879]: Reached target sockets.target - Sockets.
Jan 14 13:05:20.474031 systemd[1879]: Reached target basic.target - Basic System.
Jan 14 13:05:20.474092 systemd[1879]: Reached target default.target - Main User Target.
Jan 14 13:05:20.474133 systemd[1879]: Startup finished in 234ms.
Jan 14 13:05:20.475065 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 13:05:20.480444 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 14 13:05:20.722013 kubelet[1866]: E0114 13:05:20.721855 1866 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:05:20.724799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:05:20.724991 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:05:21.158806 login[1849]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 13:05:21.165122 systemd-logind[1704]: New session 1 of user core.
Jan 14 13:05:21.170892 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 13:05:21.235822 waagent[1850]: 2025-01-14T13:05:21.235670Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.237644Z INFO Daemon Daemon OS: flatcar 4186.1.0
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.238611Z INFO Daemon Daemon Python: 3.11.10
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.239769Z INFO Daemon Daemon Run daemon
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.240632Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.0'
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.241467Z INFO Daemon Daemon Using waagent for provisioning
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.242744Z INFO Daemon Daemon Activate resource disk
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.243074Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.248399Z INFO Daemon Daemon Found device: None
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.249360Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.250340Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.251931Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 13:05:21.276073 waagent[1850]: 2025-01-14T13:05:21.252974Z INFO Daemon Daemon Running default provisioning handler
Jan 14 13:05:21.281363 waagent[1850]: 2025-01-14T13:05:21.279515Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 14 13:05:21.284030 waagent[1850]: 2025-01-14T13:05:21.281817Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 14 13:05:21.284030 waagent[1850]: 2025-01-14T13:05:21.283026Z INFO Daemon Daemon cloud-init is enabled: False
Jan 14 13:05:21.285220 waagent[1850]: 2025-01-14T13:05:21.285170Z INFO Daemon Daemon Copying ovf-env.xml
Jan 14 13:05:21.402164 waagent[1850]: 2025-01-14T13:05:21.402054Z INFO Daemon Daemon Successfully mounted dvd
Jan 14 13:05:21.417652 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 14 13:05:21.420406 waagent[1850]: 2025-01-14T13:05:21.420317Z INFO Daemon Daemon Detect protocol endpoint
Jan 14 13:05:21.423342 waagent[1850]: 2025-01-14T13:05:21.423269Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 13:05:21.436724 waagent[1850]: 2025-01-14T13:05:21.424657Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 14 13:05:21.436724 waagent[1850]: 2025-01-14T13:05:21.425556Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 14 13:05:21.436724 waagent[1850]: 2025-01-14T13:05:21.426643Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 14 13:05:21.436724 waagent[1850]: 2025-01-14T13:05:21.427514Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 14 13:05:21.453522 waagent[1850]: 2025-01-14T13:05:21.453449Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 14 13:05:21.462395 waagent[1850]: 2025-01-14T13:05:21.455008Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 14 13:05:21.462395 waagent[1850]: 2025-01-14T13:05:21.455420Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 14 13:05:21.627938 waagent[1850]: 2025-01-14T13:05:21.627828Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 14 13:05:21.631605 waagent[1850]: 2025-01-14T13:05:21.631517Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 14 13:05:21.638420 waagent[1850]: 2025-01-14T13:05:21.638352Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 13:05:21.656496 waagent[1850]: 2025-01-14T13:05:21.656424Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Jan 14 13:05:21.675730 waagent[1850]: 2025-01-14T13:05:21.658242Z INFO Daemon
Jan 14 13:05:21.675730 waagent[1850]: 2025-01-14T13:05:21.660176Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d6aa705a-fa51-43cb-a342-618d79ecb8c2 eTag: 8014193420895173312 source: Fabric]
Jan 14 13:05:21.675730 waagent[1850]: 2025-01-14T13:05:21.661796Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
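The kubelet failure logged at 13:05:20 above is a plain missing-file error: the unit starts before anything (typically `kubeadm init`/`kubeadm join`) has written `/var/lib/kubelet/config.yaml`, so kubelet exits with status 1 and systemd records the exit-code failure. A minimal sketch of the same pre-flight check; the path comes from the log, while the helper name is illustrative:

```python
from pathlib import Path

# Path taken from the kubelet error message in the log above
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_present(path: Path = KUBELET_CONFIG) -> bool:
    """Return True if the kubelet config file exists and is non-empty.

    kubelet exits with status 1 (as systemd records above) when the
    file is absent; on a kubeadm-managed node it appears only after
    'kubeadm init' or 'kubeadm join' has run, so an early failure like
    this one is expected and the unit is simply restarted later.
    """
    return path.is_file() and path.stat().st_size > 0
```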
Jan 14 13:05:21.675730 waagent[1850]: 2025-01-14T13:05:21.662750Z INFO Daemon
Jan 14 13:05:21.675730 waagent[1850]: 2025-01-14T13:05:21.663615Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 14 13:05:21.675730 waagent[1850]: 2025-01-14T13:05:21.669319Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 14 13:05:21.750079 waagent[1850]: 2025-01-14T13:05:21.749983Z INFO Daemon Downloaded certificate {'thumbprint': 'FB5D1FF01793D7297430147F71FA609F4E74EAEA', 'hasPrivateKey': True}
Jan 14 13:05:21.757526 waagent[1850]: 2025-01-14T13:05:21.751993Z INFO Daemon Fetch goal state completed
Jan 14 13:05:21.761914 waagent[1850]: 2025-01-14T13:05:21.761858Z INFO Daemon Daemon Starting provisioning
Jan 14 13:05:21.769371 waagent[1850]: 2025-01-14T13:05:21.763317Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 14 13:05:21.769371 waagent[1850]: 2025-01-14T13:05:21.764204Z INFO Daemon Daemon Set hostname [ci-4186.1.0-a-f264a924af]
Jan 14 13:05:21.786670 waagent[1850]: 2025-01-14T13:05:21.786572Z INFO Daemon Daemon Publish hostname [ci-4186.1.0-a-f264a924af]
Jan 14 13:05:21.794589 waagent[1850]: 2025-01-14T13:05:21.788091Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 14 13:05:21.794589 waagent[1850]: 2025-01-14T13:05:21.789002Z INFO Daemon Daemon Primary interface is [eth0]
Jan 14 13:05:21.814739 systemd-networkd[1327]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:05:21.814751 systemd-networkd[1327]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:05:21.814803 systemd-networkd[1327]: eth0: DHCP lease lost
Jan 14 13:05:21.816242 waagent[1850]: 2025-01-14T13:05:21.816145Z INFO Daemon Daemon Create user account if not exists
Jan 14 13:05:21.823079 waagent[1850]: 2025-01-14T13:05:21.817409Z INFO Daemon Daemon User core already exists, skip useradd
Jan 14 13:05:21.823079 waagent[1850]: 2025-01-14T13:05:21.817806Z INFO Daemon Daemon Configure sudoer
Jan 14 13:05:21.823079 waagent[1850]: 2025-01-14T13:05:21.818622Z INFO Daemon Daemon Configure sshd
Jan 14 13:05:21.823079 waagent[1850]: 2025-01-14T13:05:21.819472Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 14 13:05:21.823079 waagent[1850]: 2025-01-14T13:05:21.820026Z INFO Daemon Daemon Deploy ssh public key.
Jan 14 13:05:21.823431 systemd-networkd[1327]: eth0: DHCPv6 lease lost
Jan 14 13:05:21.871768 systemd-networkd[1327]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 13:05:22.970502 waagent[1850]: 2025-01-14T13:05:22.970390Z INFO Daemon Daemon Provisioning complete
Jan 14 13:05:22.986442 waagent[1850]: 2025-01-14T13:05:22.986360Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 14 13:05:22.990533 waagent[1850]: 2025-01-14T13:05:22.987623Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 14 13:05:22.990533 waagent[1850]: 2025-01-14T13:05:22.988517Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 14 13:05:23.122584 waagent[1931]: 2025-01-14T13:05:23.122449Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 14 13:05:23.123066 waagent[1931]: 2025-01-14T13:05:23.122637Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.0
Jan 14 13:05:23.123066 waagent[1931]: 2025-01-14T13:05:23.122742Z INFO ExtHandler ExtHandler Python: 3.11.10
Jan 14 13:05:23.161129 waagent[1931]: 2025-01-14T13:05:23.161021Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 14 13:05:23.161374 waagent[1931]: 2025-01-14T13:05:23.161316Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 13:05:23.161470 waagent[1931]: 2025-01-14T13:05:23.161426Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 13:05:23.169233 waagent[1931]: 2025-01-14T13:05:23.169139Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 13:05:23.175543 waagent[1931]: 2025-01-14T13:05:23.175481Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Jan 14 13:05:23.176130 waagent[1931]: 2025-01-14T13:05:23.176063Z INFO ExtHandler
Jan 14 13:05:23.176223 waagent[1931]: 2025-01-14T13:05:23.176170Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0b0c75d4-2b2e-4bbd-8caf-b7fdf7c53825 eTag: 8014193420895173312 source: Fabric]
Jan 14 13:05:23.176530 waagent[1931]: 2025-01-14T13:05:23.176478Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 14 13:05:23.177139 waagent[1931]: 2025-01-14T13:05:23.177081Z INFO ExtHandler
Jan 14 13:05:23.177204 waagent[1931]: 2025-01-14T13:05:23.177167Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 14 13:05:23.180896 waagent[1931]: 2025-01-14T13:05:23.180851Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 14 13:05:23.245009 waagent[1931]: 2025-01-14T13:05:23.244849Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FB5D1FF01793D7297430147F71FA609F4E74EAEA', 'hasPrivateKey': True}
Jan 14 13:05:23.245525 waagent[1931]: 2025-01-14T13:05:23.245462Z INFO ExtHandler Fetch goal state completed
Jan 14 13:05:23.262071 waagent[1931]: 2025-01-14T13:05:23.261988Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1931
Jan 14 13:05:23.262243 waagent[1931]: 2025-01-14T13:05:23.262192Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 14 13:05:23.264356 waagent[1931]: 2025-01-14T13:05:23.264277Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.0', '', 'Flatcar Container Linux by Kinvolk']
Jan 14 13:05:23.265336 waagent[1931]: 2025-01-14T13:05:23.264880Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 14 13:05:23.889032 waagent[1931]: 2025-01-14T13:05:23.888969Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 14 13:05:23.889304 waagent[1931]: 2025-01-14T13:05:23.889246Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 14 13:05:23.897643 waagent[1931]: 2025-01-14T13:05:23.897592Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 14 13:05:23.905881 systemd[1]: Reloading requested from client PID 1944 ('systemctl') (unit waagent.service)...
Jan 14 13:05:23.905900 systemd[1]: Reloading...
Jan 14 13:05:24.012735 zram_generator::config[1987]: No configuration found. Jan 14 13:05:24.141746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:05:24.226902 systemd[1]: Reloading finished in 320 ms. Jan 14 13:05:24.254808 waagent[1931]: 2025-01-14T13:05:24.253803Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 14 13:05:24.263349 systemd[1]: Reloading requested from client PID 2035 ('systemctl') (unit waagent.service)... Jan 14 13:05:24.263366 systemd[1]: Reloading... Jan 14 13:05:24.337757 zram_generator::config[2065]: No configuration found. Jan 14 13:05:24.478859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:05:24.559711 systemd[1]: Reloading finished in 295 ms. Jan 14 13:05:24.589559 waagent[1931]: 2025-01-14T13:05:24.588457Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 13:05:24.589559 waagent[1931]: 2025-01-14T13:05:24.588680Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 13:05:24.895811 waagent[1931]: 2025-01-14T13:05:24.890738Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 13:05:24.896353 waagent[1931]: 2025-01-14T13:05:24.896263Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 14 13:05:24.897336 waagent[1931]: 2025-01-14T13:05:24.897233Z INFO ExtHandler ExtHandler Starting env monitor service. 
Jan 14 13:05:24.897511 waagent[1931]: 2025-01-14T13:05:24.897432Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:05:24.897930 waagent[1931]: 2025-01-14T13:05:24.897858Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:05:24.898170 waagent[1931]: 2025-01-14T13:05:24.898114Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 13:05:24.898403 waagent[1931]: 2025-01-14T13:05:24.898342Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:05:24.898758 waagent[1931]: 2025-01-14T13:05:24.898655Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 13:05:24.898864 waagent[1931]: 2025-01-14T13:05:24.898756Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 14 13:05:24.899185 waagent[1931]: 2025-01-14T13:05:24.899126Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 14 13:05:24.899406 waagent[1931]: 2025-01-14T13:05:24.899353Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:05:24.900051 waagent[1931]: 2025-01-14T13:05:24.899983Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 13:05:24.900197 waagent[1931]: 2025-01-14T13:05:24.900137Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 14 13:05:24.900518 waagent[1931]: 2025-01-14T13:05:24.900456Z INFO EnvHandler ExtHandler Configure routes Jan 14 13:05:24.901239 waagent[1931]: 2025-01-14T13:05:24.901065Z INFO EnvHandler ExtHandler Gateway:None Jan 14 13:05:24.901239 waagent[1931]: 2025-01-14T13:05:24.901131Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 13:05:24.901239 waagent[1931]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 13:05:24.901239 waagent[1931]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 13:05:24.901239 waagent[1931]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 13:05:24.901239 waagent[1931]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:05:24.901239 waagent[1931]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:05:24.901239 waagent[1931]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:05:24.901239 waagent[1931]: 2025-01-14T13:05:24.901215Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 13:05:24.902259 waagent[1931]: 2025-01-14T13:05:24.902215Z INFO EnvHandler ExtHandler Routes:None Jan 14 13:05:24.912767 waagent[1931]: 2025-01-14T13:05:24.912669Z INFO ExtHandler ExtHandler Jan 14 13:05:24.912920 waagent[1931]: 2025-01-14T13:05:24.912813Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c67aa9cb-e8c1-4e6f-90d9-ed5503c0c432 correlation be29bf99-0983-44a9-a553-bf321826f5a7 created: 2025-01-14T13:04:11.477348Z] Jan 14 13:05:24.913286 waagent[1931]: 2025-01-14T13:05:24.913224Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 14 13:05:24.913821 waagent[1931]: 2025-01-14T13:05:24.913769Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 14 13:05:24.962571 waagent[1931]: 2025-01-14T13:05:24.962489Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4DB41914-F993-43A0-9D34-1943C3F51665;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 14 13:05:25.003372 waagent[1931]: 2025-01-14T13:05:25.003277Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 13:05:25.003372 waagent[1931]: Executing ['ip', '-a', '-o', 'link']: Jan 14 13:05:25.003372 waagent[1931]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 13:05:25.003372 waagent[1931]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b8:7a:4b brd ff:ff:ff:ff:ff:ff Jan 14 13:05:25.003372 waagent[1931]: 3: enP60202s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b8:7a:4b brd ff:ff:ff:ff:ff:ff\ altname enP60202p0s2 Jan 14 13:05:25.003372 waagent[1931]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 13:05:25.003372 waagent[1931]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 13:05:25.003372 waagent[1931]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 13:05:25.003372 waagent[1931]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 13:05:25.003372 waagent[1931]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 13:05:25.003372 waagent[1931]: 2: eth0 inet6 fe80::20d:3aff:feb8:7a4b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:05:25.003372 waagent[1931]: 3: enP60202s1 inet6 fe80::20d:3aff:feb8:7a4b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 14 13:05:25.052634 waagent[1931]: 2025-01-14T13:05:25.052555Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 14 13:05:25.052634 waagent[1931]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:25.052634 waagent[1931]: pkts bytes target prot opt in out source destination Jan 14 13:05:25.052634 waagent[1931]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:25.052634 waagent[1931]: pkts bytes target prot opt in out source destination Jan 14 13:05:25.052634 waagent[1931]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:25.052634 waagent[1931]: pkts bytes target prot opt in out source destination Jan 14 13:05:25.052634 waagent[1931]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:05:25.052634 waagent[1931]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:05:25.052634 waagent[1931]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 13:05:25.056262 waagent[1931]: 2025-01-14T13:05:25.056192Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 13:05:25.056262 waagent[1931]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:25.056262 waagent[1931]: pkts bytes target prot opt in out source destination Jan 14 13:05:25.056262 waagent[1931]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:25.056262 waagent[1931]: pkts bytes target prot opt in out source destination Jan 14 13:05:25.056262 waagent[1931]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:25.056262 waagent[1931]: pkts bytes target prot opt in out source destination Jan 14 13:05:25.056262 waagent[1931]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:05:25.056262 waagent[1931]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:05:25.056262 waagent[1931]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 14 13:05:25.056656 waagent[1931]: 2025-01-14T13:05:25.056560Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 13:05:30.910383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 13:05:30.918998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:05:31.037579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:05:31.047179 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:05:31.623622 kubelet[2165]: E0114 13:05:31.623553 2165 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:05:31.628039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:05:31.628257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:05:41.660495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 13:05:41.667048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:05:41.817583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:05:41.829192 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:05:41.927621 chronyd[1696]: Selected source PHC0 Jan 14 13:05:42.347040 kubelet[2181]: E0114 13:05:42.346904 2181 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:05:42.350312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:05:42.350529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:05:48.659209 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 13:05:48.664033 systemd[1]: Started sshd@0-10.200.8.12:22-10.200.16.10:39200.service - OpenSSH per-connection server daemon (10.200.16.10:39200). Jan 14 13:05:49.410450 sshd[2191]: Accepted publickey for core from 10.200.16.10 port 39200 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:05:49.412233 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:05:49.417093 systemd-logind[1704]: New session 3 of user core. Jan 14 13:05:49.424871 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 13:05:49.974096 systemd[1]: Started sshd@1-10.200.8.12:22-10.200.16.10:39208.service - OpenSSH per-connection server daemon (10.200.16.10:39208). Jan 14 13:05:50.611869 sshd[2196]: Accepted publickey for core from 10.200.16.10 port 39208 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:05:50.613607 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:05:50.618772 systemd-logind[1704]: New session 4 of user core. 
Jan 14 13:05:50.624908 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 13:05:51.069795 sshd[2198]: Connection closed by 10.200.16.10 port 39208 Jan 14 13:05:51.070650 sshd-session[2196]: pam_unix(sshd:session): session closed for user core Jan 14 13:05:51.076207 systemd[1]: sshd@1-10.200.8.12:22-10.200.16.10:39208.service: Deactivated successfully. Jan 14 13:05:51.079179 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 13:05:51.081091 systemd-logind[1704]: Session 4 logged out. Waiting for processes to exit. Jan 14 13:05:51.082091 systemd-logind[1704]: Removed session 4. Jan 14 13:05:51.192090 systemd[1]: Started sshd@2-10.200.8.12:22-10.200.16.10:39216.service - OpenSSH per-connection server daemon (10.200.16.10:39216). Jan 14 13:05:51.834223 sshd[2203]: Accepted publickey for core from 10.200.16.10 port 39216 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:05:51.836002 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:05:51.841752 systemd-logind[1704]: New session 5 of user core. Jan 14 13:05:51.848941 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 13:05:52.288278 sshd[2205]: Connection closed by 10.200.16.10 port 39216 Jan 14 13:05:52.289230 sshd-session[2203]: pam_unix(sshd:session): session closed for user core Jan 14 13:05:52.292469 systemd[1]: sshd@2-10.200.8.12:22-10.200.16.10:39216.service: Deactivated successfully. Jan 14 13:05:52.294513 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 13:05:52.296082 systemd-logind[1704]: Session 5 logged out. Waiting for processes to exit. Jan 14 13:05:52.297148 systemd-logind[1704]: Removed session 5. Jan 14 13:05:52.400944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 13:05:52.406002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 13:05:52.410516 systemd[1]: Started sshd@3-10.200.8.12:22-10.200.16.10:39230.service - OpenSSH per-connection server daemon (10.200.16.10:39230). Jan 14 13:05:52.764362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:05:52.780156 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:05:53.058625 kubelet[2220]: E0114 13:05:53.058452 2220 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:05:53.061536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:05:53.061742 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:05:53.063327 sshd[2211]: Accepted publickey for core from 10.200.16.10 port 39230 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:05:53.064223 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:05:53.069007 systemd-logind[1704]: New session 6 of user core. Jan 14 13:05:53.071872 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 13:05:53.517173 sshd[2228]: Connection closed by 10.200.16.10 port 39230 Jan 14 13:05:53.518083 sshd-session[2211]: pam_unix(sshd:session): session closed for user core Jan 14 13:05:53.522333 systemd[1]: sshd@3-10.200.8.12:22-10.200.16.10:39230.service: Deactivated successfully. Jan 14 13:05:53.524586 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 13:05:53.525605 systemd-logind[1704]: Session 6 logged out. Waiting for processes to exit. Jan 14 13:05:53.526594 systemd-logind[1704]: Removed session 6. 
Jan 14 13:05:53.642033 systemd[1]: Started sshd@4-10.200.8.12:22-10.200.16.10:39234.service - OpenSSH per-connection server daemon (10.200.16.10:39234). Jan 14 13:05:54.308271 sshd[2233]: Accepted publickey for core from 10.200.16.10 port 39234 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:05:54.310040 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:05:54.315156 systemd-logind[1704]: New session 7 of user core. Jan 14 13:05:54.316881 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 13:05:54.788522 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 13:05:54.789022 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:05:54.819321 sudo[2236]: pam_unix(sudo:session): session closed for user root Jan 14 13:05:54.924290 sshd[2235]: Connection closed by 10.200.16.10 port 39234 Jan 14 13:05:54.925396 sshd-session[2233]: pam_unix(sshd:session): session closed for user core Jan 14 13:05:54.928622 systemd[1]: sshd@4-10.200.8.12:22-10.200.16.10:39234.service: Deactivated successfully. Jan 14 13:05:54.930785 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 13:05:54.932455 systemd-logind[1704]: Session 7 logged out. Waiting for processes to exit. Jan 14 13:05:54.933618 systemd-logind[1704]: Removed session 7. Jan 14 13:05:55.041081 systemd[1]: Started sshd@5-10.200.8.12:22-10.200.16.10:39246.service - OpenSSH per-connection server daemon (10.200.16.10:39246). Jan 14 13:05:55.682888 sshd[2241]: Accepted publickey for core from 10.200.16.10 port 39246 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:05:55.684645 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:05:55.690424 systemd-logind[1704]: New session 8 of user core. Jan 14 13:05:55.699910 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 13:05:56.033986 sudo[2245]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 13:05:56.034369 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:05:56.038233 sudo[2245]: pam_unix(sudo:session): session closed for user root Jan 14 13:05:56.043836 sudo[2244]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 13:05:56.044213 sudo[2244]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:05:56.060084 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:05:56.088098 augenrules[2267]: No rules Jan 14 13:05:56.089597 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:05:56.089849 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:05:56.091187 sudo[2244]: pam_unix(sudo:session): session closed for user root Jan 14 13:05:56.201535 sshd[2243]: Connection closed by 10.200.16.10 port 39246 Jan 14 13:05:56.202440 sshd-session[2241]: pam_unix(sshd:session): session closed for user core Jan 14 13:05:56.205563 systemd[1]: sshd@5-10.200.8.12:22-10.200.16.10:39246.service: Deactivated successfully. Jan 14 13:05:56.207505 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 13:05:56.208996 systemd-logind[1704]: Session 8 logged out. Waiting for processes to exit. Jan 14 13:05:56.209988 systemd-logind[1704]: Removed session 8. Jan 14 13:05:56.318930 systemd[1]: Started sshd@6-10.200.8.12:22-10.200.16.10:46184.service - OpenSSH per-connection server daemon (10.200.16.10:46184). 
Jan 14 13:05:56.961134 sshd[2275]: Accepted publickey for core from 10.200.16.10 port 46184 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:05:56.962884 sshd-session[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:05:56.968586 systemd-logind[1704]: New session 9 of user core. Jan 14 13:05:56.974912 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 13:05:57.313480 sudo[2278]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 13:05:57.313956 sudo[2278]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:05:59.029404 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 13:05:59.031267 (dockerd)[2297]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 13:06:00.276510 dockerd[2297]: time="2025-01-14T13:06:00.276440390Z" level=info msg="Starting up" Jan 14 13:06:00.831586 dockerd[2297]: time="2025-01-14T13:06:00.831527082Z" level=info msg="Loading containers: start." Jan 14 13:06:01.039766 kernel: Initializing XFRM netlink socket Jan 14 13:06:01.137312 systemd-networkd[1327]: docker0: Link UP Jan 14 13:06:01.184299 dockerd[2297]: time="2025-01-14T13:06:01.184248150Z" level=info msg="Loading containers: done." 
Jan 14 13:06:01.253348 dockerd[2297]: time="2025-01-14T13:06:01.253288318Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 13:06:01.253569 dockerd[2297]: time="2025-01-14T13:06:01.253433021Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 14 13:06:01.253646 dockerd[2297]: time="2025-01-14T13:06:01.253627024Z" level=info msg="Daemon has completed initialization" Jan 14 13:06:01.315153 dockerd[2297]: time="2025-01-14T13:06:01.314591056Z" level=info msg="API listen on /run/docker.sock" Jan 14 13:06:01.315172 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 13:06:01.575719 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 14 13:06:03.142929 containerd[1722]: time="2025-01-14T13:06:03.142889156Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 14 13:06:03.160300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 13:06:03.168485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:03.206829 update_engine[1705]: I20250114 13:06:03.206749 1705 update_attempter.cc:509] Updating boot flags... Jan 14 13:06:03.401730 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2503) Jan 14 13:06:03.423965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:06:03.433115 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:06:03.891111 kubelet[2518]: E0114 13:06:03.890972 2518 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:06:03.893868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:06:03.894074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:06:03.927751 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2507) Jan 14 13:06:04.070720 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2507) Jan 14 13:06:04.503728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48322786.mount: Deactivated successfully. 
Jan 14 13:06:06.948131 containerd[1722]: time="2025-01-14T13:06:06.948054459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:06.953275 containerd[1722]: time="2025-01-14T13:06:06.953201015Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Jan 14 13:06:06.961261 containerd[1722]: time="2025-01-14T13:06:06.961172602Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:06.971656 containerd[1722]: time="2025-01-14T13:06:06.971581515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:06.973224 containerd[1722]: time="2025-01-14T13:06:06.972713928Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.82977747s" Jan 14 13:06:06.973224 containerd[1722]: time="2025-01-14T13:06:06.972762428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 14 13:06:06.996446 containerd[1722]: time="2025-01-14T13:06:06.996404786Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 14 13:06:09.085712 containerd[1722]: time="2025-01-14T13:06:09.085645866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:06:09.090087 containerd[1722]: time="2025-01-14T13:06:09.090003814Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Jan 14 13:06:09.094475 containerd[1722]: time="2025-01-14T13:06:09.094388062Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:09.105672 containerd[1722]: time="2025-01-14T13:06:09.105586284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:09.107143 containerd[1722]: time="2025-01-14T13:06:09.106588895Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.110140909s" Jan 14 13:06:09.107143 containerd[1722]: time="2025-01-14T13:06:09.106682396Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 14 13:06:09.131034 containerd[1722]: time="2025-01-14T13:06:09.130995461Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 14 13:06:10.492030 containerd[1722]: time="2025-01-14T13:06:10.491970100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:10.496186 containerd[1722]: time="2025-01-14T13:06:10.496112346Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830"
Jan 14 13:06:10.501488 containerd[1722]: time="2025-01-14T13:06:10.501417603Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:10.509041 containerd[1722]: time="2025-01-14T13:06:10.508965986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:10.510119 containerd[1722]: time="2025-01-14T13:06:10.509949296Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.378914135s" Jan 14 13:06:10.510119 containerd[1722]: time="2025-01-14T13:06:10.509995697Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 14 13:06:10.532239 containerd[1722]: time="2025-01-14T13:06:10.532195039Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 14 13:06:11.809209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241648528.mount: Deactivated successfully.
Jan 14 13:06:12.300971 containerd[1722]: time="2025-01-14T13:06:12.300908524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:12.305243 containerd[1722]: time="2025-01-14T13:06:12.305171471Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Jan 14 13:06:12.311827 containerd[1722]: time="2025-01-14T13:06:12.311752343Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:12.318399 containerd[1722]: time="2025-01-14T13:06:12.318332614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:12.319146 containerd[1722]: time="2025-01-14T13:06:12.318949821Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.786707982s" Jan 14 13:06:12.319146 containerd[1722]: time="2025-01-14T13:06:12.319022822Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 14 13:06:12.342221 containerd[1722]: time="2025-01-14T13:06:12.342183675Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 14 13:06:12.913286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730927073.mount: Deactivated successfully. 
Jan 14 13:06:13.911594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 13:06:13.921718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:14.081850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:06:14.084112 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:06:14.151355 kubelet[2806]: E0114 13:06:14.150828 2806 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:06:14.154338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:06:14.154523 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 13:06:14.569966 containerd[1722]: time="2025-01-14T13:06:14.569905834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:06:14.574111 containerd[1722]: time="2025-01-14T13:06:14.574026279Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 14 13:06:14.578779 containerd[1722]: time="2025-01-14T13:06:14.578708930Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:06:14.587210 containerd[1722]: time="2025-01-14T13:06:14.587133421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:06:14.588364 containerd[1722]: time="2025-01-14T13:06:14.588179833Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.245958857s"
Jan 14 13:06:14.588364 containerd[1722]: time="2025-01-14T13:06:14.588219633Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 14 13:06:14.611176 containerd[1722]: time="2025-01-14T13:06:14.611136682Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 14 13:06:15.209889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26426910.mount: Deactivated successfully.
Jan 14 13:06:15.246915 containerd[1722]: time="2025-01-14T13:06:15.246849681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:06:15.250773 containerd[1722]: time="2025-01-14T13:06:15.250679722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jan 14 13:06:15.255425 containerd[1722]: time="2025-01-14T13:06:15.255360473Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:06:15.260909 containerd[1722]: time="2025-01-14T13:06:15.260846233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:06:15.261751 containerd[1722]: time="2025-01-14T13:06:15.261560340Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 650.384558ms"
Jan 14 13:06:15.261751 containerd[1722]: time="2025-01-14T13:06:15.261600341Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 14 13:06:15.283277 containerd[1722]: time="2025-01-14T13:06:15.283197375Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 14 13:06:16.059249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122528390.mount: Deactivated successfully.
Jan 14 13:06:18.509052 containerd[1722]: time="2025-01-14T13:06:18.508992382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:06:18.513100 containerd[1722]: time="2025-01-14T13:06:18.513044126Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Jan 14 13:06:18.522659 containerd[1722]: time="2025-01-14T13:06:18.522613630Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:06:18.529275 containerd[1722]: time="2025-01-14T13:06:18.528930199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:06:18.530147 containerd[1722]: time="2025-01-14T13:06:18.530111912Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.246870336s"
Jan 14 13:06:18.530245 containerd[1722]: time="2025-01-14T13:06:18.530151912Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 14 13:06:22.243203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:06:22.248978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:06:22.278901 systemd[1]: Reloading requested from client PID 2936 ('systemctl') (unit session-9.scope)...
Jan 14 13:06:22.278917 systemd[1]: Reloading...
Jan 14 13:06:22.396720 zram_generator::config[2982]: No configuration found.
Jan 14 13:06:22.528818 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:06:22.616817 systemd[1]: Reloading finished in 337 ms.
Jan 14 13:06:22.675876 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 14 13:06:22.676048 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 14 13:06:22.676359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:06:22.681103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:06:22.867765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:06:22.880066 (kubelet)[3047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 14 13:06:23.508721 kubelet[3047]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 13:06:23.508721 kubelet[3047]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 14 13:06:23.508721 kubelet[3047]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 13:06:23.508721 kubelet[3047]: I0114 13:06:23.507116 3047 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 14 13:06:24.068273 kubelet[3047]: I0114 13:06:24.068234 3047 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 14 13:06:24.068273 kubelet[3047]: I0114 13:06:24.068264 3047 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 14 13:06:24.068532 kubelet[3047]: I0114 13:06:24.068525 3047 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 14 13:06:24.085517 kubelet[3047]: E0114 13:06:24.085465 3047 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:24.086253 kubelet[3047]: I0114 13:06:24.086224 3047 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 13:06:24.095134 kubelet[3047]: I0114 13:06:24.095101 3047 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 14 13:06:24.096435 kubelet[3047]: I0114 13:06:24.096401 3047 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 14 13:06:24.096646 kubelet[3047]: I0114 13:06:24.096621 3047 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 14 13:06:24.097160 kubelet[3047]: I0114 13:06:24.097128 3047 topology_manager.go:138] "Creating topology manager with none policy"
Jan 14 13:06:24.097160 kubelet[3047]: I0114 13:06:24.097158 3047 container_manager_linux.go:301] "Creating device plugin manager"
Jan 14 13:06:24.097318 kubelet[3047]: I0114 13:06:24.097298 3047 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:06:24.097463 kubelet[3047]: I0114 13:06:24.097444 3047 kubelet.go:396] "Attempting to sync node with API server"
Jan 14 13:06:24.097523 kubelet[3047]: I0114 13:06:24.097466 3047 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 13:06:24.097523 kubelet[3047]: I0114 13:06:24.097499 3047 kubelet.go:312] "Adding apiserver pod source"
Jan 14 13:06:24.097523 kubelet[3047]: I0114 13:06:24.097518 3047 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 13:06:24.099348 kubelet[3047]: W0114 13:06:24.098951 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:24.099348 kubelet[3047]: E0114 13:06:24.099007 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:24.099348 kubelet[3047]: W0114 13:06:24.099285 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-f264a924af&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:24.099348 kubelet[3047]: E0114 13:06:24.099326 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-f264a924af&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:24.100230 kubelet[3047]: I0114 13:06:24.099963 3047 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 14 13:06:24.105307 kubelet[3047]: I0114 13:06:24.103821 3047 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 14 13:06:24.105307 kubelet[3047]: W0114 13:06:24.103892 3047 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 14 13:06:24.105307 kubelet[3047]: I0114 13:06:24.104888 3047 server.go:1256] "Started kubelet"
Jan 14 13:06:24.105681 kubelet[3047]: I0114 13:06:24.105661 3047 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 13:06:24.107424 kubelet[3047]: I0114 13:06:24.107243 3047 server.go:461] "Adding debug handlers to kubelet server"
Jan 14 13:06:24.111498 kubelet[3047]: I0114 13:06:24.111470 3047 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 13:06:24.112295 kubelet[3047]: I0114 13:06:24.112275 3047 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 13:06:24.112622 kubelet[3047]: I0114 13:06:24.112603 3047 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 13:06:24.117416 kubelet[3047]: I0114 13:06:24.117382 3047 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 14 13:06:24.118835 kubelet[3047]: E0114 13:06:24.118564 3047 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-f264a924af.181a90f40929130a default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-f264a924af,UID:ci-4186.1.0-a-f264a924af,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-f264a924af,},FirstTimestamp:2025-01-14 13:06:24.104854282 +0000 UTC m=+1.220712860,LastTimestamp:2025-01-14 13:06:24.104854282 +0000 UTC m=+1.220712860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-f264a924af,}"
Jan 14 13:06:24.120559 kubelet[3047]: I0114 13:06:24.120536 3047 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 14 13:06:24.120832 kubelet[3047]: I0114 13:06:24.120818 3047 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 14 13:06:24.122106 kubelet[3047]: E0114 13:06:24.122084 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-f264a924af?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="200ms"
Jan 14 13:06:24.123227 kubelet[3047]: W0114 13:06:24.123177 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:24.123371 kubelet[3047]: E0114 13:06:24.123357 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:24.123643 kubelet[3047]: I0114 13:06:24.123622 3047 factory.go:221] Registration of the systemd container factory successfully
Jan 14 13:06:24.123861 kubelet[3047]: I0114 13:06:24.123841 3047 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 13:06:24.125584 kubelet[3047]: I0114 13:06:24.125563 3047 factory.go:221] Registration of the containerd container factory successfully
Jan 14 13:06:24.142720 kubelet[3047]: E0114 13:06:24.141987 3047 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 13:06:24.151340 kubelet[3047]: I0114 13:06:24.151307 3047 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 14 13:06:24.153787 kubelet[3047]: I0114 13:06:24.153089 3047 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 14 13:06:24.153787 kubelet[3047]: I0114 13:06:24.153128 3047 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 14 13:06:24.153787 kubelet[3047]: I0114 13:06:24.153156 3047 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 14 13:06:24.153787 kubelet[3047]: E0114 13:06:24.153220 3047 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 13:06:24.155181 kubelet[3047]: W0114 13:06:24.155126 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:24.155357 kubelet[3047]: E0114 13:06:24.155343 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:24.175080 kubelet[3047]: I0114 13:06:24.175053 3047 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 14 13:06:24.175231 kubelet[3047]: I0114 13:06:24.175119 3047 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 14 13:06:24.175231 kubelet[3047]: I0114 13:06:24.175140 3047 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:06:24.184481 kubelet[3047]: I0114 13:06:24.184364 3047 policy_none.go:49] "None policy: Start"
Jan 14 13:06:24.185164 kubelet[3047]: I0114 13:06:24.185107 3047 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 14 13:06:24.185164 kubelet[3047]: I0114 13:06:24.185146 3047 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 13:06:24.196288 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 14 13:06:24.208330 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 14 13:06:24.212294 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 14 13:06:24.219329 kubelet[3047]: I0114 13:06:24.218550 3047 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 14 13:06:24.219329 kubelet[3047]: I0114 13:06:24.218856 3047 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 13:06:24.220279 kubelet[3047]: I0114 13:06:24.220258 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.221795 kubelet[3047]: E0114 13:06:24.221614 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.222188 kubelet[3047]: E0114 13:06:24.222172 3047 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-f264a924af\" not found"
Jan 14 13:06:24.253777 kubelet[3047]: I0114 13:06:24.253716 3047 topology_manager.go:215] "Topology Admit Handler" podUID="55d9ee8be0305663743cea831f059744" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.255883 kubelet[3047]: I0114 13:06:24.255842 3047 topology_manager.go:215] "Topology Admit Handler" podUID="ab7cce4e87353c6292f5e3f2a009e8b0" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.257809 kubelet[3047]: I0114 13:06:24.257496 3047 topology_manager.go:215] "Topology Admit Handler" podUID="58dbe791b059620e8ced485a77f8c78e" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.264535 systemd[1]: Created slice kubepods-burstable-pod55d9ee8be0305663743cea831f059744.slice - libcontainer container kubepods-burstable-pod55d9ee8be0305663743cea831f059744.slice.
Jan 14 13:06:24.276367 systemd[1]: Created slice kubepods-burstable-podab7cce4e87353c6292f5e3f2a009e8b0.slice - libcontainer container kubepods-burstable-podab7cce4e87353c6292f5e3f2a009e8b0.slice.
Jan 14 13:06:24.280374 systemd[1]: Created slice kubepods-burstable-pod58dbe791b059620e8ced485a77f8c78e.slice - libcontainer container kubepods-burstable-pod58dbe791b059620e8ced485a77f8c78e.slice.
Jan 14 13:06:24.323048 kubelet[3047]: I0114 13:06:24.322486 3047 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58dbe791b059620e8ced485a77f8c78e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-f264a924af\" (UID: \"58dbe791b059620e8ced485a77f8c78e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.323048 kubelet[3047]: I0114 13:06:24.322548 3047 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ee8be0305663743cea831f059744-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" (UID: \"55d9ee8be0305663743cea831f059744\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.323048 kubelet[3047]: I0114 13:06:24.322581 3047 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ee8be0305663743cea831f059744-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" (UID: \"55d9ee8be0305663743cea831f059744\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.323048 kubelet[3047]: I0114 13:06:24.322613 3047 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab7cce4e87353c6292f5e3f2a009e8b0-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-f264a924af\" (UID: \"ab7cce4e87353c6292f5e3f2a009e8b0\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.323048 kubelet[3047]: I0114 13:06:24.322649 3047 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58dbe791b059620e8ced485a77f8c78e-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-f264a924af\" (UID: \"58dbe791b059620e8ced485a77f8c78e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.323393 kubelet[3047]: I0114 13:06:24.322682 3047 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58dbe791b059620e8ced485a77f8c78e-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-f264a924af\" (UID: \"58dbe791b059620e8ced485a77f8c78e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.323393 kubelet[3047]: I0114 13:06:24.322790 3047 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ee8be0305663743cea831f059744-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" (UID: \"55d9ee8be0305663743cea831f059744\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.323393 kubelet[3047]: E0114 13:06:24.322796 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-f264a924af?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="400ms"
Jan 14 13:06:24.323393 kubelet[3047]: I0114 13:06:24.322825 3047 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ee8be0305663743cea831f059744-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" (UID: \"55d9ee8be0305663743cea831f059744\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.323393 kubelet[3047]: I0114 13:06:24.322886 3047 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ee8be0305663743cea831f059744-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" (UID: \"55d9ee8be0305663743cea831f059744\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.424616 kubelet[3047]: I0114 13:06:24.424572 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.425054 kubelet[3047]: E0114 13:06:24.425028 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.573849 containerd[1722]: time="2025-01-14T13:06:24.573676297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-f264a924af,Uid:55d9ee8be0305663743cea831f059744,Namespace:kube-system,Attempt:0,}"
Jan 14 13:06:24.580392 containerd[1722]: time="2025-01-14T13:06:24.580355569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-f264a924af,Uid:ab7cce4e87353c6292f5e3f2a009e8b0,Namespace:kube-system,Attempt:0,}"
Jan 14 13:06:24.583304 containerd[1722]: time="2025-01-14T13:06:24.583265500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-f264a924af,Uid:58dbe791b059620e8ced485a77f8c78e,Namespace:kube-system,Attempt:0,}"
Jan 14 13:06:24.723805 kubelet[3047]: E0114 13:06:24.723758 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-f264a924af?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="800ms"
Jan 14 13:06:24.827162 kubelet[3047]: I0114 13:06:24.827011 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-f264a924af"
Jan 14 13:06:24.827511 kubelet[3047]: E0114 13:06:24.827471 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4186.1.0-a-f264a924af"
Jan 14 13:06:25.161567 kubelet[3047]: W0114 13:06:25.161513 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:25.161723 kubelet[3047]: E0114 13:06:25.161591 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:25.484329 kubelet[3047]: W0114 13:06:25.484167 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:25.484329 kubelet[3047]: E0114 13:06:25.484222 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:25.525161 kubelet[3047]: E0114 13:06:25.525117 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-f264a924af?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="1.6s"
Jan 14 13:06:25.583961 kubelet[3047]: W0114 13:06:25.583914 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:25.583961 kubelet[3047]: E0114 13:06:25.583964 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:25.603727 kubelet[3047]: W0114 13:06:25.603637 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-f264a924af&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:25.603727 kubelet[3047]: E0114 13:06:25.603734 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-f264a924af&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:25.629988 kubelet[3047]: I0114 13:06:25.629944 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-f264a924af"
Jan 14 13:06:25.630382 kubelet[3047]: E0114 13:06:25.630350 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4186.1.0-a-f264a924af"
Jan 14 13:06:26.262837 kubelet[3047]: E0114 13:06:26.262794 3047 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:26.844572 kubelet[3047]: W0114 13:06:26.844525 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:26.844572 kubelet[3047]: E0114 13:06:26.844571 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:27.087236 kubelet[3047]: W0114 13:06:27.087190 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:27.087236 kubelet[3047]: E0114 13:06:27.087236 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:27.125758 kubelet[3047]: E0114 13:06:27.125578 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-f264a924af?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="3.2s"
Jan 14 13:06:27.232360 kubelet[3047]: I0114 13:06:27.232327 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-f264a924af"
Jan 14 13:06:27.232724 kubelet[3047]: E0114 13:06:27.232702 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4186.1.0-a-f264a924af"
Jan 14 13:06:27.336818 kubelet[3047]: W0114 13:06:27.336771 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:27.336818 kubelet[3047]: E0114 13:06:27.336820 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:27.472117 kubelet[3047]: W0114 13:06:27.472066 3047 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-f264a924af&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:27.472117 kubelet[3047]: E0114 13:06:27.472121 3047 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-f264a924af&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused
Jan 14 13:06:28.982927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1924729813.mount: Deactivated successfully.
Jan 14 13:06:29.029310 containerd[1722]: time="2025-01-14T13:06:29.029241360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 13:06:29.052573 containerd[1722]: time="2025-01-14T13:06:29.052500709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 14 13:06:29.060418 containerd[1722]: time="2025-01-14T13:06:29.060355993Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 13:06:29.070505 containerd[1722]: time="2025-01-14T13:06:29.070447401Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 13:06:29.083568 containerd[1722]: time="2025-01-14T13:06:29.083213738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 14 13:06:29.089177 containerd[1722]: time="2025-01-14T13:06:29.089129701Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 13:06:29.096213 containerd[1722]: time="2025-01-14T13:06:29.096172676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 13:06:29.096972 containerd[1722]:
time="2025-01-14T13:06:29.096941985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.523114786s" Jan 14 13:06:29.099388 containerd[1722]: time="2025-01-14T13:06:29.099303510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:06:29.104371 containerd[1722]: time="2025-01-14T13:06:29.104332464Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.523885695s" Jan 14 13:06:29.116833 containerd[1722]: time="2025-01-14T13:06:29.116793397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.533434396s" Jan 14 13:06:29.823015 kubelet[3047]: E0114 13:06:29.822976 3047 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-f264a924af.181a90f40929130a default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-f264a924af,UID:ci-4186.1.0-a-f264a924af,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-f264a924af,},FirstTimestamp:2025-01-14 13:06:24.104854282 +0000 UTC m=+1.220712860,LastTimestamp:2025-01-14 13:06:24.104854282 +0000 UTC m=+1.220712860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-f264a924af,}" Jan 14 13:06:29.978586 containerd[1722]: time="2025-01-14T13:06:29.975080878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:06:29.978586 containerd[1722]: time="2025-01-14T13:06:29.978330013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:06:29.978586 containerd[1722]: time="2025-01-14T13:06:29.978363014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:29.978586 containerd[1722]: time="2025-01-14T13:06:29.978487015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:29.987293 containerd[1722]: time="2025-01-14T13:06:29.985521890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:06:29.987293 containerd[1722]: time="2025-01-14T13:06:29.986025696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:06:29.987293 containerd[1722]: time="2025-01-14T13:06:29.986110696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:29.987293 containerd[1722]: time="2025-01-14T13:06:29.986288298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:29.988506 containerd[1722]: time="2025-01-14T13:06:29.988232519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:06:29.988506 containerd[1722]: time="2025-01-14T13:06:29.988301720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:06:29.988506 containerd[1722]: time="2025-01-14T13:06:29.988321320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:29.989946 containerd[1722]: time="2025-01-14T13:06:29.988428021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:30.034915 systemd[1]: Started cri-containerd-f5a558e58cf9c3454ff6c69478182c652e0c79834b52fbc2e94faa6156244312.scope - libcontainer container f5a558e58cf9c3454ff6c69478182c652e0c79834b52fbc2e94faa6156244312. Jan 14 13:06:30.046676 systemd[1]: Started cri-containerd-5f9618c6e8aabe2689d2a84805ff89de40f0d435304793838aef1562a6cb9032.scope - libcontainer container 5f9618c6e8aabe2689d2a84805ff89de40f0d435304793838aef1562a6cb9032. Jan 14 13:06:30.049745 systemd[1]: Started cri-containerd-96d792dba43c84c1240207631225d7457e8068cfc582f7c19741dc4c93cfb6d3.scope - libcontainer container 96d792dba43c84c1240207631225d7457e8068cfc582f7c19741dc4c93cfb6d3. 
Jan 14 13:06:30.120278 containerd[1722]: time="2025-01-14T13:06:30.119363957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-f264a924af,Uid:58dbe791b059620e8ced485a77f8c78e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5a558e58cf9c3454ff6c69478182c652e0c79834b52fbc2e94faa6156244312\"" Jan 14 13:06:30.135091 containerd[1722]: time="2025-01-14T13:06:30.135045529Z" level=info msg="CreateContainer within sandbox \"f5a558e58cf9c3454ff6c69478182c652e0c79834b52fbc2e94faa6156244312\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 13:06:30.138617 containerd[1722]: time="2025-01-14T13:06:30.138317465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-f264a924af,Uid:55d9ee8be0305663743cea831f059744,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f9618c6e8aabe2689d2a84805ff89de40f0d435304793838aef1562a6cb9032\"" Jan 14 13:06:30.141121 containerd[1722]: time="2025-01-14T13:06:30.141077795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-f264a924af,Uid:ab7cce4e87353c6292f5e3f2a009e8b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d792dba43c84c1240207631225d7457e8068cfc582f7c19741dc4c93cfb6d3\"" Jan 14 13:06:30.143886 containerd[1722]: time="2025-01-14T13:06:30.143767425Z" level=info msg="CreateContainer within sandbox \"5f9618c6e8aabe2689d2a84805ff89de40f0d435304793838aef1562a6cb9032\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 13:06:30.145216 containerd[1722]: time="2025-01-14T13:06:30.144883437Z" level=info msg="CreateContainer within sandbox \"96d792dba43c84c1240207631225d7457e8068cfc582f7c19741dc4c93cfb6d3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 13:06:30.255313 containerd[1722]: time="2025-01-14T13:06:30.255252849Z" level=info msg="CreateContainer within sandbox 
\"f5a558e58cf9c3454ff6c69478182c652e0c79834b52fbc2e94faa6156244312\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b9079065e4e7a6f9d68d4057faaad8fde3ab116c8465c6b3e9c93d26fe188936\"" Jan 14 13:06:30.256191 containerd[1722]: time="2025-01-14T13:06:30.256135859Z" level=info msg="StartContainer for \"b9079065e4e7a6f9d68d4057faaad8fde3ab116c8465c6b3e9c93d26fe188936\"" Jan 14 13:06:30.263721 containerd[1722]: time="2025-01-14T13:06:30.263547540Z" level=info msg="CreateContainer within sandbox \"5f9618c6e8aabe2689d2a84805ff89de40f0d435304793838aef1562a6cb9032\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9d9bb878cbf49d9d5b14af3b5fdc884589b1cf22332f4edf57259a349d04436f\"" Jan 14 13:06:30.264637 containerd[1722]: time="2025-01-14T13:06:30.264322449Z" level=info msg="StartContainer for \"9d9bb878cbf49d9d5b14af3b5fdc884589b1cf22332f4edf57259a349d04436f\"" Jan 14 13:06:30.267902 containerd[1722]: time="2025-01-14T13:06:30.267864588Z" level=info msg="CreateContainer within sandbox \"96d792dba43c84c1240207631225d7457e8068cfc582f7c19741dc4c93cfb6d3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d51591aa702661aea0fb19c970bb4ebe65ebfa858a1fa8d698b509ac276524aa\"" Jan 14 13:06:30.271778 containerd[1722]: time="2025-01-14T13:06:30.270886621Z" level=info msg="StartContainer for \"d51591aa702661aea0fb19c970bb4ebe65ebfa858a1fa8d698b509ac276524aa\"" Jan 14 13:06:30.297474 systemd[1]: Started cri-containerd-b9079065e4e7a6f9d68d4057faaad8fde3ab116c8465c6b3e9c93d26fe188936.scope - libcontainer container b9079065e4e7a6f9d68d4057faaad8fde3ab116c8465c6b3e9c93d26fe188936. Jan 14 13:06:30.321921 systemd[1]: Started cri-containerd-9d9bb878cbf49d9d5b14af3b5fdc884589b1cf22332f4edf57259a349d04436f.scope - libcontainer container 9d9bb878cbf49d9d5b14af3b5fdc884589b1cf22332f4edf57259a349d04436f. 
Jan 14 13:06:30.329471 kubelet[3047]: E0114 13:06:30.329434 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-f264a924af?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="6.4s" Jan 14 13:06:30.342874 systemd[1]: Started cri-containerd-d51591aa702661aea0fb19c970bb4ebe65ebfa858a1fa8d698b509ac276524aa.scope - libcontainer container d51591aa702661aea0fb19c970bb4ebe65ebfa858a1fa8d698b509ac276524aa. Jan 14 13:06:30.411501 containerd[1722]: time="2025-01-14T13:06:30.410901159Z" level=info msg="StartContainer for \"b9079065e4e7a6f9d68d4057faaad8fde3ab116c8465c6b3e9c93d26fe188936\" returns successfully" Jan 14 13:06:30.424879 containerd[1722]: time="2025-01-14T13:06:30.423950102Z" level=info msg="StartContainer for \"9d9bb878cbf49d9d5b14af3b5fdc884589b1cf22332f4edf57259a349d04436f\" returns successfully" Jan 14 13:06:30.434111 containerd[1722]: time="2025-01-14T13:06:30.434020512Z" level=info msg="StartContainer for \"d51591aa702661aea0fb19c970bb4ebe65ebfa858a1fa8d698b509ac276524aa\" returns successfully" Jan 14 13:06:30.436737 kubelet[3047]: I0114 13:06:30.436387 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-f264a924af" Jan 14 13:06:30.437091 kubelet[3047]: E0114 13:06:30.437057 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4186.1.0-a-f264a924af" Jan 14 13:06:33.151280 kubelet[3047]: E0114 13:06:33.151231 3047 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186.1.0-a-f264a924af" not found Jan 14 13:06:33.502651 kubelet[3047]: E0114 13:06:33.502608 3047 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed 
out waiting for the condition; caused by: nodes "ci-4186.1.0-a-f264a924af" not found Jan 14 13:06:33.928288 kubelet[3047]: E0114 13:06:33.928166 3047 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186.1.0-a-f264a924af" not found Jan 14 13:06:34.104772 kubelet[3047]: I0114 13:06:34.104723 3047 apiserver.go:52] "Watching apiserver" Jan 14 13:06:34.121287 kubelet[3047]: I0114 13:06:34.121138 3047 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 14 13:06:34.223009 kubelet[3047]: E0114 13:06:34.222638 3047 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-f264a924af\" not found" Jan 14 13:06:34.837746 kubelet[3047]: E0114 13:06:34.837706 3047 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186.1.0-a-f264a924af" not found Jan 14 13:06:36.734415 kubelet[3047]: E0114 13:06:36.734363 3047 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-f264a924af\" not found" node="ci-4186.1.0-a-f264a924af" Jan 14 13:06:36.839855 kubelet[3047]: I0114 13:06:36.839803 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-f264a924af" Jan 14 13:06:36.849641 kubelet[3047]: I0114 13:06:36.849591 3047 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-f264a924af" Jan 14 13:06:37.637538 systemd[1]: Reloading requested from client PID 3319 ('systemctl') (unit session-9.scope)... Jan 14 13:06:37.637554 systemd[1]: Reloading... Jan 14 13:06:37.748846 zram_generator::config[3365]: No configuration found. 
Jan 14 13:06:37.848724 kubelet[3047]: W0114 13:06:37.846420 3047 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:06:37.880539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:06:37.980929 systemd[1]: Reloading finished in 342 ms. Jan 14 13:06:38.035231 kubelet[3047]: I0114 13:06:38.035121 3047 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:06:38.035454 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:38.047349 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 13:06:38.047616 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:06:38.047683 systemd[1]: kubelet.service: Consumed 1.084s CPU time, 115.3M memory peak, 0B memory swap peak. Jan 14 13:06:38.054987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:38.265194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:06:38.276101 (kubelet)[3426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:06:38.340373 kubelet[3426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:06:38.340373 kubelet[3426]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 14 13:06:38.340373 kubelet[3426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:06:38.340900 kubelet[3426]: I0114 13:06:38.340448 3426 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:06:38.345283 kubelet[3426]: I0114 13:06:38.345244 3426 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 14 13:06:38.345283 kubelet[3426]: I0114 13:06:38.345280 3426 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:06:38.345572 kubelet[3426]: I0114 13:06:38.345550 3426 server.go:919] "Client rotation is on, will bootstrap in background" Jan 14 13:06:38.347190 kubelet[3426]: I0114 13:06:38.347159 3426 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 13:06:38.351397 kubelet[3426]: I0114 13:06:38.350982 3426 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:06:38.362481 kubelet[3426]: I0114 13:06:38.362447 3426 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:06:38.362940 kubelet[3426]: I0114 13:06:38.362923 3426 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:06:38.363221 kubelet[3426]: I0114 13:06:38.363182 3426 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 13:06:38.363380 kubelet[3426]: I0114 13:06:38.363287 3426 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:06:38.363380 kubelet[3426]: I0114 13:06:38.363305 3426 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 13:06:38.363380 kubelet[3426]: I0114 
13:06:38.363354 3426 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:06:38.363500 kubelet[3426]: I0114 13:06:38.363490 3426 kubelet.go:396] "Attempting to sync node with API server" Jan 14 13:06:38.363539 kubelet[3426]: I0114 13:06:38.363517 3426 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:06:38.363602 kubelet[3426]: I0114 13:06:38.363583 3426 kubelet.go:312] "Adding apiserver pod source" Jan 14 13:06:38.363643 kubelet[3426]: I0114 13:06:38.363606 3426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:06:38.371092 kubelet[3426]: I0114 13:06:38.368566 3426 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 14 13:06:38.371092 kubelet[3426]: I0114 13:06:38.368841 3426 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:06:38.371092 kubelet[3426]: I0114 13:06:38.370031 3426 server.go:1256] "Started kubelet" Jan 14 13:06:38.376358 kubelet[3426]: I0114 13:06:38.376333 3426 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:06:38.380631 kubelet[3426]: I0114 13:06:38.380604 3426 server.go:461] "Adding debug handlers to kubelet server" Jan 14 13:06:38.385434 kubelet[3426]: I0114 13:06:38.384471 3426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:06:38.386568 kubelet[3426]: I0114 13:06:38.385775 3426 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:06:38.392876 kubelet[3426]: E0114 13:06:38.392849 3426 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:06:38.393684 kubelet[3426]: I0114 13:06:38.393653 3426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:06:38.408588 kubelet[3426]: I0114 13:06:38.406918 3426 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 13:06:38.409939 kubelet[3426]: I0114 13:06:38.409912 3426 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 14 13:06:38.410084 kubelet[3426]: I0114 13:06:38.410070 3426 reconciler_new.go:29] "Reconciler: start to sync state" Jan 14 13:06:38.414919 kubelet[3426]: I0114 13:06:38.414897 3426 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:06:38.415073 kubelet[3426]: I0114 13:06:38.415064 3426 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:06:38.415250 kubelet[3426]: I0114 13:06:38.415229 3426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:06:38.415883 kubelet[3426]: I0114 13:06:38.415860 3426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:06:38.418783 kubelet[3426]: I0114 13:06:38.418670 3426 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 14 13:06:38.418783 kubelet[3426]: I0114 13:06:38.418756 3426 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 13:06:38.418783 kubelet[3426]: I0114 13:06:38.418783 3426 kubelet.go:2329] "Starting kubelet main sync loop" Jan 14 13:06:38.418934 kubelet[3426]: E0114 13:06:38.418847 3426 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:06:38.472101 kubelet[3426]: I0114 13:06:38.472069 3426 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 13:06:38.472352 kubelet[3426]: I0114 13:06:38.472331 3426 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 13:06:38.472436 kubelet[3426]: I0114 13:06:38.472359 3426 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:06:38.472550 kubelet[3426]: I0114 13:06:38.472534 3426 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 13:06:38.472601 kubelet[3426]: I0114 13:06:38.472562 3426 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 13:06:38.472601 kubelet[3426]: I0114 13:06:38.472572 3426 policy_none.go:49] "None policy: Start" Jan 14 13:06:38.473310 kubelet[3426]: I0114 13:06:38.473288 3426 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 13:06:38.473416 kubelet[3426]: I0114 13:06:38.473316 3426 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:06:38.473964 kubelet[3426]: I0114 13:06:38.473930 3426 state_mem.go:75] "Updated machine memory state" Jan 14 13:06:38.482333 kubelet[3426]: I0114 13:06:38.482044 3426 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:06:38.482513 kubelet[3426]: I0114 13:06:38.482497 3426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:06:38.510492 kubelet[3426]: I0114 13:06:38.510330 3426 kubelet_node_status.go:73] "Attempting to register 
node" node="ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.519391 kubelet[3426]: I0114 13:06:38.519032 3426 topology_manager.go:215] "Topology Admit Handler" podUID="55d9ee8be0305663743cea831f059744" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.519391 kubelet[3426]: I0114 13:06:38.519251 3426 topology_manager.go:215] "Topology Admit Handler" podUID="ab7cce4e87353c6292f5e3f2a009e8b0" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.522033 kubelet[3426]: I0114 13:06:38.520426 3426 topology_manager.go:215] "Topology Admit Handler" podUID="58dbe791b059620e8ced485a77f8c78e" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.524616 kubelet[3426]: I0114 13:06:38.524583 3426 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.524731 kubelet[3426]: I0114 13:06:38.524664 3426 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.532432 kubelet[3426]: W0114 13:06:38.532073 3426 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:06:38.532432 kubelet[3426]: E0114 13:06:38.532164 3426 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.532432 kubelet[3426]: W0114 13:06:38.532276 3426 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:06:38.532957 kubelet[3426]: W0114 13:06:38.532763 3426 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] 
Jan 14 13:06:38.710828 kubelet[3426]: I0114 13:06:38.710761 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ee8be0305663743cea831f059744-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" (UID: \"55d9ee8be0305663743cea831f059744\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.710828 kubelet[3426]: I0114 13:06:38.710828 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab7cce4e87353c6292f5e3f2a009e8b0-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-f264a924af\" (UID: \"ab7cce4e87353c6292f5e3f2a009e8b0\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.711060 kubelet[3426]: I0114 13:06:38.710856 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58dbe791b059620e8ced485a77f8c78e-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-f264a924af\" (UID: \"58dbe791b059620e8ced485a77f8c78e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.711060 kubelet[3426]: I0114 13:06:38.710880 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ee8be0305663743cea831f059744-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" (UID: \"55d9ee8be0305663743cea831f059744\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.711060 kubelet[3426]: I0114 13:06:38.710907 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ee8be0305663743cea831f059744-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" 
(UID: \"55d9ee8be0305663743cea831f059744\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.711060 kubelet[3426]: I0114 13:06:38.710937 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ee8be0305663743cea831f059744-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" (UID: \"55d9ee8be0305663743cea831f059744\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.711060 kubelet[3426]: I0114 13:06:38.710963 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58dbe791b059620e8ced485a77f8c78e-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-f264a924af\" (UID: \"58dbe791b059620e8ced485a77f8c78e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.711233 kubelet[3426]: I0114 13:06:38.710991 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58dbe791b059620e8ced485a77f8c78e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-f264a924af\" (UID: \"58dbe791b059620e8ced485a77f8c78e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f264a924af" Jan 14 13:06:38.711233 kubelet[3426]: I0114 13:06:38.711018 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ee8be0305663743cea831f059744-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" (UID: \"55d9ee8be0305663743cea831f059744\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af" Jan 14 13:06:39.365749 kubelet[3426]: I0114 13:06:39.364938 3426 apiserver.go:52] "Watching apiserver" Jan 14 13:06:39.412715 
kubelet[3426]: I0114 13:06:39.410985 3426 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 14 13:06:39.491675 kubelet[3426]: W0114 13:06:39.491636 3426 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:06:39.491858 kubelet[3426]: E0114 13:06:39.491744 3426 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.0-a-f264a924af\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af" Jan 14 13:06:39.533889 kubelet[3426]: I0114 13:06:39.533741 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-f264a924af" podStartSLOduration=1.5336770720000001 podStartE2EDuration="1.533677072s" podCreationTimestamp="2025-01-14 13:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:06:39.504160951 +0000 UTC m=+1.222951596" watchObservedRunningTime="2025-01-14 13:06:39.533677072 +0000 UTC m=+1.252467817" Jan 14 13:06:39.551026 kubelet[3426]: I0114 13:06:39.550973 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-f264a924af" podStartSLOduration=1.550259252 podStartE2EDuration="1.550259252s" podCreationTimestamp="2025-01-14 13:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:06:39.534812684 +0000 UTC m=+1.253603429" watchObservedRunningTime="2025-01-14 13:06:39.550259252 +0000 UTC m=+1.269049897" Jan 14 13:06:39.574024 kubelet[3426]: I0114 13:06:39.573982 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f264a924af" 
podStartSLOduration=2.573932809 podStartE2EDuration="2.573932809s" podCreationTimestamp="2025-01-14 13:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:06:39.551370464 +0000 UTC m=+1.270161209" watchObservedRunningTime="2025-01-14 13:06:39.573932809 +0000 UTC m=+1.292723454" Jan 14 13:06:43.572026 sudo[2278]: pam_unix(sudo:session): session closed for user root Jan 14 13:06:43.676006 sshd[2277]: Connection closed by 10.200.16.10 port 46184 Jan 14 13:06:43.676880 sshd-session[2275]: pam_unix(sshd:session): session closed for user core Jan 14 13:06:43.680581 systemd[1]: sshd@6-10.200.8.12:22-10.200.16.10:46184.service: Deactivated successfully. Jan 14 13:06:43.683358 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 13:06:43.683723 systemd[1]: session-9.scope: Consumed 5.433s CPU time, 186.7M memory peak, 0B memory swap peak. Jan 14 13:06:43.685372 systemd-logind[1704]: Session 9 logged out. Waiting for processes to exit. Jan 14 13:06:43.686500 systemd-logind[1704]: Removed session 9. Jan 14 13:06:48.762177 kubelet[3426]: I0114 13:06:48.762132 3426 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 13:06:48.763405 containerd[1722]: time="2025-01-14T13:06:48.763081069Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 14 13:06:48.763968 kubelet[3426]: I0114 13:06:48.763892 3426 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 13:06:49.429553 kubelet[3426]: I0114 13:06:49.427885 3426 topology_manager.go:215] "Topology Admit Handler" podUID="b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6" podNamespace="kube-system" podName="kube-proxy-zjgpd" Jan 14 13:06:49.440674 systemd[1]: Created slice kubepods-besteffort-podb3b2ffbb_2a90_4f97_b3e6_9afcb0cafce6.slice - libcontainer container kubepods-besteffort-podb3b2ffbb_2a90_4f97_b3e6_9afcb0cafce6.slice. Jan 14 13:06:49.473831 kubelet[3426]: I0114 13:06:49.473583 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6-xtables-lock\") pod \"kube-proxy-zjgpd\" (UID: \"b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6\") " pod="kube-system/kube-proxy-zjgpd" Jan 14 13:06:49.473831 kubelet[3426]: I0114 13:06:49.473643 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6-lib-modules\") pod \"kube-proxy-zjgpd\" (UID: \"b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6\") " pod="kube-system/kube-proxy-zjgpd" Jan 14 13:06:49.473831 kubelet[3426]: I0114 13:06:49.473678 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlzbl\" (UniqueName: \"kubernetes.io/projected/b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6-kube-api-access-rlzbl\") pod \"kube-proxy-zjgpd\" (UID: \"b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6\") " pod="kube-system/kube-proxy-zjgpd" Jan 14 13:06:49.473831 kubelet[3426]: I0114 13:06:49.473717 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6-kube-proxy\") pod 
\"kube-proxy-zjgpd\" (UID: \"b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6\") " pod="kube-system/kube-proxy-zjgpd" Jan 14 13:06:49.755076 containerd[1722]: time="2025-01-14T13:06:49.754950274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zjgpd,Uid:b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6,Namespace:kube-system,Attempt:0,}" Jan 14 13:06:49.821258 containerd[1722]: time="2025-01-14T13:06:49.820798091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:06:49.821258 containerd[1722]: time="2025-01-14T13:06:49.820961193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:06:49.821258 containerd[1722]: time="2025-01-14T13:06:49.820985193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:49.821258 containerd[1722]: time="2025-01-14T13:06:49.821109694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:49.842769 systemd[1]: run-containerd-runc-k8s.io-298e394f2110ab8b784d7d24f2e4430d0c2443df2b1179356c7a6bd3820e4095-runc.lLCgN0.mount: Deactivated successfully. Jan 14 13:06:49.851912 systemd[1]: Started cri-containerd-298e394f2110ab8b784d7d24f2e4430d0c2443df2b1179356c7a6bd3820e4095.scope - libcontainer container 298e394f2110ab8b784d7d24f2e4430d0c2443df2b1179356c7a6bd3820e4095. 
Jan 14 13:06:49.888129 containerd[1722]: time="2025-01-14T13:06:49.888083824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zjgpd,Uid:b3b2ffbb-2a90-4f97-b3e6-9afcb0cafce6,Namespace:kube-system,Attempt:0,} returns sandbox id \"298e394f2110ab8b784d7d24f2e4430d0c2443df2b1179356c7a6bd3820e4095\"" Jan 14 13:06:49.892365 containerd[1722]: time="2025-01-14T13:06:49.892321170Z" level=info msg="CreateContainer within sandbox \"298e394f2110ab8b784d7d24f2e4430d0c2443df2b1179356c7a6bd3820e4095\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 13:06:49.917988 kubelet[3426]: I0114 13:06:49.917946 3426 topology_manager.go:215] "Topology Admit Handler" podUID="8e9bd377-a7c3-4603-bbae-5ac1de1e55c6" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-xkfjw" Jan 14 13:06:49.932244 systemd[1]: Created slice kubepods-besteffort-pod8e9bd377_a7c3_4603_bbae_5ac1de1e55c6.slice - libcontainer container kubepods-besteffort-pod8e9bd377_a7c3_4603_bbae_5ac1de1e55c6.slice. 
Jan 14 13:06:49.954448 containerd[1722]: time="2025-01-14T13:06:49.954385946Z" level=info msg="CreateContainer within sandbox \"298e394f2110ab8b784d7d24f2e4430d0c2443df2b1179356c7a6bd3820e4095\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"22984f5c5ef4d41a5b7ebdb9d490375fa48765109718f18771476c62bf9d2ea9\"" Jan 14 13:06:49.955323 containerd[1722]: time="2025-01-14T13:06:49.955275656Z" level=info msg="StartContainer for \"22984f5c5ef4d41a5b7ebdb9d490375fa48765109718f18771476c62bf9d2ea9\"" Jan 14 13:06:49.981083 kubelet[3426]: I0114 13:06:49.980961 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65c5l\" (UniqueName: \"kubernetes.io/projected/8e9bd377-a7c3-4603-bbae-5ac1de1e55c6-kube-api-access-65c5l\") pod \"tigera-operator-c7ccbd65-xkfjw\" (UID: \"8e9bd377-a7c3-4603-bbae-5ac1de1e55c6\") " pod="tigera-operator/tigera-operator-c7ccbd65-xkfjw" Jan 14 13:06:49.981083 kubelet[3426]: I0114 13:06:49.981024 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8e9bd377-a7c3-4603-bbae-5ac1de1e55c6-var-lib-calico\") pod \"tigera-operator-c7ccbd65-xkfjw\" (UID: \"8e9bd377-a7c3-4603-bbae-5ac1de1e55c6\") " pod="tigera-operator/tigera-operator-c7ccbd65-xkfjw" Jan 14 13:06:49.983924 systemd[1]: Started cri-containerd-22984f5c5ef4d41a5b7ebdb9d490375fa48765109718f18771476c62bf9d2ea9.scope - libcontainer container 22984f5c5ef4d41a5b7ebdb9d490375fa48765109718f18771476c62bf9d2ea9. 
Jan 14 13:06:50.017592 containerd[1722]: time="2025-01-14T13:06:50.017461233Z" level=info msg="StartContainer for \"22984f5c5ef4d41a5b7ebdb9d490375fa48765109718f18771476c62bf9d2ea9\" returns successfully" Jan 14 13:06:50.239388 containerd[1722]: time="2025-01-14T13:06:50.238906946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-xkfjw,Uid:8e9bd377-a7c3-4603-bbae-5ac1de1e55c6,Namespace:tigera-operator,Attempt:0,}" Jan 14 13:06:50.304985 containerd[1722]: time="2025-01-14T13:06:50.303522749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:06:50.304985 containerd[1722]: time="2025-01-14T13:06:50.304815764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:06:50.304985 containerd[1722]: time="2025-01-14T13:06:50.304835564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:50.305908 containerd[1722]: time="2025-01-14T13:06:50.305232768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:50.329904 systemd[1]: Started cri-containerd-57c5f1f5b8890889a557426e88b1eea73b48679bc4fcd4a6d5570e63e3de53a5.scope - libcontainer container 57c5f1f5b8890889a557426e88b1eea73b48679bc4fcd4a6d5570e63e3de53a5. 
Jan 14 13:06:50.381804 containerd[1722]: time="2025-01-14T13:06:50.381745202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-xkfjw,Uid:8e9bd377-a7c3-4603-bbae-5ac1de1e55c6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"57c5f1f5b8890889a557426e88b1eea73b48679bc4fcd4a6d5570e63e3de53a5\"" Jan 14 13:06:50.384152 containerd[1722]: time="2025-01-14T13:06:50.384091027Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 14 13:06:50.494986 kubelet[3426]: I0114 13:06:50.494671 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zjgpd" podStartSLOduration=1.494624431 podStartE2EDuration="1.494624431s" podCreationTimestamp="2025-01-14 13:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:06:50.494380529 +0000 UTC m=+12.213171174" watchObservedRunningTime="2025-01-14 13:06:50.494624431 +0000 UTC m=+12.213415076" Jan 14 13:06:52.143002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247148907.mount: Deactivated successfully. 
Jan 14 13:06:52.892202 containerd[1722]: time="2025-01-14T13:06:52.892146648Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:52.896384 containerd[1722]: time="2025-01-14T13:06:52.896318494Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764337" Jan 14 13:06:52.901782 containerd[1722]: time="2025-01-14T13:06:52.901719653Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:52.908640 containerd[1722]: time="2025-01-14T13:06:52.908585228Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:52.909375 containerd[1722]: time="2025-01-14T13:06:52.909337836Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.525182708s" Jan 14 13:06:52.909457 containerd[1722]: time="2025-01-14T13:06:52.909379336Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 14 13:06:52.911498 containerd[1722]: time="2025-01-14T13:06:52.911341358Z" level=info msg="CreateContainer within sandbox \"57c5f1f5b8890889a557426e88b1eea73b48679bc4fcd4a6d5570e63e3de53a5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 13:06:52.962424 containerd[1722]: time="2025-01-14T13:06:52.962377114Z" level=info msg="CreateContainer within sandbox 
\"57c5f1f5b8890889a557426e88b1eea73b48679bc4fcd4a6d5570e63e3de53a5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9ff7e3a97b83b46f9f6f39fbbf6e596f6ca7d3b9e8b6189db744f367c02df84a\"" Jan 14 13:06:52.963122 containerd[1722]: time="2025-01-14T13:06:52.962944720Z" level=info msg="StartContainer for \"9ff7e3a97b83b46f9f6f39fbbf6e596f6ca7d3b9e8b6189db744f367c02df84a\"" Jan 14 13:06:52.991840 systemd[1]: Started cri-containerd-9ff7e3a97b83b46f9f6f39fbbf6e596f6ca7d3b9e8b6189db744f367c02df84a.scope - libcontainer container 9ff7e3a97b83b46f9f6f39fbbf6e596f6ca7d3b9e8b6189db744f367c02df84a. Jan 14 13:06:53.019821 containerd[1722]: time="2025-01-14T13:06:53.019775139Z" level=info msg="StartContainer for \"9ff7e3a97b83b46f9f6f39fbbf6e596f6ca7d3b9e8b6189db744f367c02df84a\" returns successfully" Jan 14 13:06:56.082948 kubelet[3426]: I0114 13:06:56.082664 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-xkfjw" podStartSLOduration=4.556009664 podStartE2EDuration="7.082601687s" podCreationTimestamp="2025-01-14 13:06:49 +0000 UTC" firstStartedPulling="2025-01-14 13:06:50.383118517 +0000 UTC m=+12.101909162" lastFinishedPulling="2025-01-14 13:06:52.90971054 +0000 UTC m=+14.628501185" observedRunningTime="2025-01-14 13:06:53.508833766 +0000 UTC m=+15.227624411" watchObservedRunningTime="2025-01-14 13:06:56.082601687 +0000 UTC m=+17.801392332" Jan 14 13:06:56.085711 kubelet[3426]: I0114 13:06:56.083953 3426 topology_manager.go:215] "Topology Admit Handler" podUID="b78683bf-40ca-4875-a6e8-7ab4314b62b5" podNamespace="calico-system" podName="calico-typha-76bd89d686-b9s8l" Jan 14 13:06:56.094357 systemd[1]: Created slice kubepods-besteffort-podb78683bf_40ca_4875_a6e8_7ab4314b62b5.slice - libcontainer container kubepods-besteffort-podb78683bf_40ca_4875_a6e8_7ab4314b62b5.slice. 
Jan 14 13:06:56.122172 kubelet[3426]: I0114 13:06:56.122126 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpj8m\" (UniqueName: \"kubernetes.io/projected/b78683bf-40ca-4875-a6e8-7ab4314b62b5-kube-api-access-zpj8m\") pod \"calico-typha-76bd89d686-b9s8l\" (UID: \"b78683bf-40ca-4875-a6e8-7ab4314b62b5\") " pod="calico-system/calico-typha-76bd89d686-b9s8l" Jan 14 13:06:56.123034 kubelet[3426]: I0114 13:06:56.123007 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b78683bf-40ca-4875-a6e8-7ab4314b62b5-typha-certs\") pod \"calico-typha-76bd89d686-b9s8l\" (UID: \"b78683bf-40ca-4875-a6e8-7ab4314b62b5\") " pod="calico-system/calico-typha-76bd89d686-b9s8l" Jan 14 13:06:56.123168 kubelet[3426]: I0114 13:06:56.123079 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78683bf-40ca-4875-a6e8-7ab4314b62b5-tigera-ca-bundle\") pod \"calico-typha-76bd89d686-b9s8l\" (UID: \"b78683bf-40ca-4875-a6e8-7ab4314b62b5\") " pod="calico-system/calico-typha-76bd89d686-b9s8l" Jan 14 13:06:56.356382 kubelet[3426]: I0114 13:06:56.355161 3426 topology_manager.go:215] "Topology Admit Handler" podUID="4727a238-71f1-4fbe-a95a-3040c1779a02" podNamespace="calico-system" podName="calico-node-hkwbz" Jan 14 13:06:56.366685 systemd[1]: Created slice kubepods-besteffort-pod4727a238_71f1_4fbe_a95a_3040c1779a02.slice - libcontainer container kubepods-besteffort-pod4727a238_71f1_4fbe_a95a_3040c1779a02.slice. 
Jan 14 13:06:56.399987 containerd[1722]: time="2025-01-14T13:06:56.399589983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76bd89d686-b9s8l,Uid:b78683bf-40ca-4875-a6e8-7ab4314b62b5,Namespace:calico-system,Attempt:0,}" Jan 14 13:06:56.426678 kubelet[3426]: I0114 13:06:56.426638 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4727a238-71f1-4fbe-a95a-3040c1779a02-node-certs\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.426678 kubelet[3426]: I0114 13:06:56.426683 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4727a238-71f1-4fbe-a95a-3040c1779a02-var-lib-calico\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.426678 kubelet[3426]: I0114 13:06:56.426722 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4727a238-71f1-4fbe-a95a-3040c1779a02-cni-net-dir\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.426967 kubelet[3426]: I0114 13:06:56.426751 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4727a238-71f1-4fbe-a95a-3040c1779a02-lib-modules\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.426967 kubelet[3426]: I0114 13:06:56.426775 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/4727a238-71f1-4fbe-a95a-3040c1779a02-xtables-lock\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.426967 kubelet[3426]: I0114 13:06:56.426804 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4727a238-71f1-4fbe-a95a-3040c1779a02-tigera-ca-bundle\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.426967 kubelet[3426]: I0114 13:06:56.426830 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4727a238-71f1-4fbe-a95a-3040c1779a02-var-run-calico\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.426967 kubelet[3426]: I0114 13:06:56.426856 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtq27\" (UniqueName: \"kubernetes.io/projected/4727a238-71f1-4fbe-a95a-3040c1779a02-kube-api-access-jtq27\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.427159 kubelet[3426]: I0114 13:06:56.426899 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4727a238-71f1-4fbe-a95a-3040c1779a02-flexvol-driver-host\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.427159 kubelet[3426]: I0114 13:06:56.426940 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/4727a238-71f1-4fbe-a95a-3040c1779a02-cni-bin-dir\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.427159 kubelet[3426]: I0114 13:06:56.426967 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4727a238-71f1-4fbe-a95a-3040c1779a02-policysync\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.427159 kubelet[3426]: I0114 13:06:56.426998 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4727a238-71f1-4fbe-a95a-3040c1779a02-cni-log-dir\") pod \"calico-node-hkwbz\" (UID: \"4727a238-71f1-4fbe-a95a-3040c1779a02\") " pod="calico-system/calico-node-hkwbz" Jan 14 13:06:56.474720 containerd[1722]: time="2025-01-14T13:06:56.473430618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:06:56.474720 containerd[1722]: time="2025-01-14T13:06:56.473493618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:06:56.474720 containerd[1722]: time="2025-01-14T13:06:56.473525419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:56.474720 containerd[1722]: time="2025-01-14T13:06:56.473610019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:56.511895 systemd[1]: Started cri-containerd-d541ff0bbae7bff6ab8cd79f648efb21edcac883e26346803f1a82a514887690.scope - libcontainer container d541ff0bbae7bff6ab8cd79f648efb21edcac883e26346803f1a82a514887690. Jan 14 13:06:56.539448 kubelet[3426]: I0114 13:06:56.539074 3426 topology_manager.go:215] "Topology Admit Handler" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" podNamespace="calico-system" podName="csi-node-driver-z6l9p" Jan 14 13:06:56.542974 kubelet[3426]: E0114 13:06:56.542947 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:06:56.559439 kubelet[3426]: E0114 13:06:56.559410 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.559664 kubelet[3426]: W0114 13:06:56.559587 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.559664 kubelet[3426]: E0114 13:06:56.559627 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.615185 containerd[1722]: time="2025-01-14T13:06:56.615043143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76bd89d686-b9s8l,Uid:b78683bf-40ca-4875-a6e8-7ab4314b62b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"d541ff0bbae7bff6ab8cd79f648efb21edcac883e26346803f1a82a514887690\"" Jan 14 13:06:56.618880 kubelet[3426]: E0114 13:06:56.617935 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.618880 kubelet[3426]: W0114 13:06:56.617955 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.618880 kubelet[3426]: E0114 13:06:56.617979 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.619794 kubelet[3426]: E0114 13:06:56.619773 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.619979 kubelet[3426]: W0114 13:06:56.619901 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.619979 kubelet[3426]: E0114 13:06:56.619928 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.620580 kubelet[3426]: E0114 13:06:56.620413 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.620580 kubelet[3426]: W0114 13:06:56.620427 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.620580 kubelet[3426]: E0114 13:06:56.620442 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.620921 kubelet[3426]: E0114 13:06:56.620635 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.620921 kubelet[3426]: W0114 13:06:56.620646 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.620921 kubelet[3426]: E0114 13:06:56.620661 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.622370 kubelet[3426]: E0114 13:06:56.621312 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.622370 kubelet[3426]: W0114 13:06:56.621327 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.622370 kubelet[3426]: E0114 13:06:56.621342 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.622370 kubelet[3426]: E0114 13:06:56.621528 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.622370 kubelet[3426]: W0114 13:06:56.621542 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.622370 kubelet[3426]: E0114 13:06:56.621558 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.622994 containerd[1722]: time="2025-01-14T13:06:56.621754392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 14 13:06:56.623189 kubelet[3426]: E0114 13:06:56.622392 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.623189 kubelet[3426]: W0114 13:06:56.622403 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.623189 kubelet[3426]: E0114 13:06:56.622422 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.623189 kubelet[3426]: E0114 13:06:56.622650 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.623189 kubelet[3426]: W0114 13:06:56.622660 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.623189 kubelet[3426]: E0114 13:06:56.622676 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.623189 kubelet[3426]: E0114 13:06:56.622911 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.623189 kubelet[3426]: W0114 13:06:56.622922 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.623189 kubelet[3426]: E0114 13:06:56.622938 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.624977 kubelet[3426]: E0114 13:06:56.623801 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.624977 kubelet[3426]: W0114 13:06:56.623817 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.624977 kubelet[3426]: E0114 13:06:56.623833 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.624977 kubelet[3426]: E0114 13:06:56.624027 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.624977 kubelet[3426]: W0114 13:06:56.624038 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.624977 kubelet[3426]: E0114 13:06:56.624054 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.624977 kubelet[3426]: E0114 13:06:56.624256 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.624977 kubelet[3426]: W0114 13:06:56.624266 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.624977 kubelet[3426]: E0114 13:06:56.624282 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.624977 kubelet[3426]: E0114 13:06:56.624485 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.625460 kubelet[3426]: W0114 13:06:56.624497 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.625460 kubelet[3426]: E0114 13:06:56.624512 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.625460 kubelet[3426]: E0114 13:06:56.624943 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.625460 kubelet[3426]: W0114 13:06:56.624955 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.625460 kubelet[3426]: E0114 13:06:56.624971 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.625936 kubelet[3426]: E0114 13:06:56.625916 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.625936 kubelet[3426]: W0114 13:06:56.625931 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.626059 kubelet[3426]: E0114 13:06:56.625949 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.626334 kubelet[3426]: E0114 13:06:56.626138 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.626334 kubelet[3426]: W0114 13:06:56.626150 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.626334 kubelet[3426]: E0114 13:06:56.626165 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.626551 kubelet[3426]: E0114 13:06:56.626374 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.626551 kubelet[3426]: W0114 13:06:56.626385 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.626551 kubelet[3426]: E0114 13:06:56.626400 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.627630 kubelet[3426]: E0114 13:06:56.626586 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.627630 kubelet[3426]: W0114 13:06:56.626596 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.627630 kubelet[3426]: E0114 13:06:56.626611 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.627903 kubelet[3426]: E0114 13:06:56.627884 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.627903 kubelet[3426]: W0114 13:06:56.627904 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.628110 kubelet[3426]: E0114 13:06:56.627920 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.628183 kubelet[3426]: E0114 13:06:56.628119 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.628183 kubelet[3426]: W0114 13:06:56.628129 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.628183 kubelet[3426]: E0114 13:06:56.628145 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.629494 kubelet[3426]: E0114 13:06:56.629475 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.629494 kubelet[3426]: W0114 13:06:56.629494 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.629623 kubelet[3426]: E0114 13:06:56.629511 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.629623 kubelet[3426]: I0114 13:06:56.629547 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a334eebb-fcba-4d16-8280-bef7ba8849b0-socket-dir\") pod \"csi-node-driver-z6l9p\" (UID: \"a334eebb-fcba-4d16-8280-bef7ba8849b0\") " pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:06:56.629863 kubelet[3426]: E0114 13:06:56.629843 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.629863 kubelet[3426]: W0114 13:06:56.629863 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.629980 kubelet[3426]: E0114 13:06:56.629892 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.629980 kubelet[3426]: I0114 13:06:56.629921 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a334eebb-fcba-4d16-8280-bef7ba8849b0-registration-dir\") pod \"csi-node-driver-z6l9p\" (UID: \"a334eebb-fcba-4d16-8280-bef7ba8849b0\") " pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:06:56.631042 kubelet[3426]: E0114 13:06:56.630527 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.631042 kubelet[3426]: W0114 13:06:56.630545 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.631042 kubelet[3426]: E0114 13:06:56.630671 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.632067 kubelet[3426]: E0114 13:06:56.632047 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.632067 kubelet[3426]: W0114 13:06:56.632066 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.632578 kubelet[3426]: E0114 13:06:56.632556 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.634791 kubelet[3426]: E0114 13:06:56.634771 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.634791 kubelet[3426]: W0114 13:06:56.634789 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.634997 kubelet[3426]: E0114 13:06:56.634979 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.635065 kubelet[3426]: I0114 13:06:56.635018 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kqwg\" (UniqueName: \"kubernetes.io/projected/a334eebb-fcba-4d16-8280-bef7ba8849b0-kube-api-access-2kqwg\") pod \"csi-node-driver-z6l9p\" (UID: \"a334eebb-fcba-4d16-8280-bef7ba8849b0\") " pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:06:56.635114 kubelet[3426]: E0114 13:06:56.635097 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.635114 kubelet[3426]: W0114 13:06:56.635105 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.635196 kubelet[3426]: E0114 13:06:56.635133 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.636725 kubelet[3426]: E0114 13:06:56.635356 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.636725 kubelet[3426]: W0114 13:06:56.635370 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.636725 kubelet[3426]: E0114 13:06:56.635400 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.636725 kubelet[3426]: E0114 13:06:56.635630 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.636725 kubelet[3426]: W0114 13:06:56.635640 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.636725 kubelet[3426]: E0114 13:06:56.635664 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.636725 kubelet[3426]: I0114 13:06:56.635792 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a334eebb-fcba-4d16-8280-bef7ba8849b0-varrun\") pod \"csi-node-driver-z6l9p\" (UID: \"a334eebb-fcba-4d16-8280-bef7ba8849b0\") " pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:06:56.636725 kubelet[3426]: E0114 13:06:56.635936 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.636725 kubelet[3426]: W0114 13:06:56.635948 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.637250 kubelet[3426]: E0114 13:06:56.635968 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.637250 kubelet[3426]: E0114 13:06:56.636183 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.637250 kubelet[3426]: W0114 13:06:56.636197 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.637250 kubelet[3426]: E0114 13:06:56.636224 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.637250 kubelet[3426]: E0114 13:06:56.636429 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.637250 kubelet[3426]: W0114 13:06:56.636439 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.637250 kubelet[3426]: E0114 13:06:56.636469 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.637250 kubelet[3426]: I0114 13:06:56.636494 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a334eebb-fcba-4d16-8280-bef7ba8849b0-kubelet-dir\") pod \"csi-node-driver-z6l9p\" (UID: \"a334eebb-fcba-4d16-8280-bef7ba8849b0\") " pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:06:56.637250 kubelet[3426]: E0114 13:06:56.636779 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.637685 kubelet[3426]: W0114 13:06:56.636792 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.637685 kubelet[3426]: E0114 13:06:56.636812 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.637685 kubelet[3426]: E0114 13:06:56.637016 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.637685 kubelet[3426]: W0114 13:06:56.637025 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.637685 kubelet[3426]: E0114 13:06:56.637045 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.637685 kubelet[3426]: E0114 13:06:56.637250 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.637685 kubelet[3426]: W0114 13:06:56.637260 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.637685 kubelet[3426]: E0114 13:06:56.637277 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.637685 kubelet[3426]: E0114 13:06:56.637458 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.637685 kubelet[3426]: W0114 13:06:56.637468 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.638173 kubelet[3426]: E0114 13:06:56.637483 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.671639 containerd[1722]: time="2025-01-14T13:06:56.671588253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkwbz,Uid:4727a238-71f1-4fbe-a95a-3040c1779a02,Namespace:calico-system,Attempt:0,}" Jan 14 13:06:56.737463 containerd[1722]: time="2025-01-14T13:06:56.736975627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:06:56.738454 containerd[1722]: time="2025-01-14T13:06:56.737241728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:06:56.738454 containerd[1722]: time="2025-01-14T13:06:56.737383630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:56.738604 kubelet[3426]: E0114 13:06:56.738179 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.738604 kubelet[3426]: W0114 13:06:56.738201 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.738604 kubelet[3426]: E0114 13:06:56.738251 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.739198 containerd[1722]: time="2025-01-14T13:06:56.738385737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:56.739265 kubelet[3426]: E0114 13:06:56.738720 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.739265 kubelet[3426]: W0114 13:06:56.738732 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.739265 kubelet[3426]: E0114 13:06:56.738772 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.739265 kubelet[3426]: E0114 13:06:56.739042 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.739265 kubelet[3426]: W0114 13:06:56.739054 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.739468 kubelet[3426]: E0114 13:06:56.739284 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.739468 kubelet[3426]: E0114 13:06:56.739387 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.739468 kubelet[3426]: W0114 13:06:56.739396 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.739468 kubelet[3426]: E0114 13:06:56.739423 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.741211 kubelet[3426]: E0114 13:06:56.739723 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.741211 kubelet[3426]: W0114 13:06:56.739739 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.741211 kubelet[3426]: E0114 13:06:56.739786 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.741211 kubelet[3426]: E0114 13:06:56.740041 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.741211 kubelet[3426]: W0114 13:06:56.740053 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.741211 kubelet[3426]: E0114 13:06:56.740107 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.741211 kubelet[3426]: E0114 13:06:56.740468 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.741211 kubelet[3426]: W0114 13:06:56.740480 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.741211 kubelet[3426]: E0114 13:06:56.740606 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.741211 kubelet[3426]: E0114 13:06:56.741144 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.742313 kubelet[3426]: W0114 13:06:56.741157 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.742313 kubelet[3426]: E0114 13:06:56.741884 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.742313 kubelet[3426]: E0114 13:06:56.742184 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.742313 kubelet[3426]: W0114 13:06:56.742198 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.742850 kubelet[3426]: E0114 13:06:56.742489 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.743118 kubelet[3426]: E0114 13:06:56.743033 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.743118 kubelet[3426]: W0114 13:06:56.743046 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.743315 kubelet[3426]: E0114 13:06:56.743232 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.743434 kubelet[3426]: E0114 13:06:56.743403 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.743434 kubelet[3426]: W0114 13:06:56.743412 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.743686 kubelet[3426]: E0114 13:06:56.743676 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.743953 kubelet[3426]: E0114 13:06:56.743851 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.743953 kubelet[3426]: W0114 13:06:56.743861 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.743953 kubelet[3426]: E0114 13:06:56.743901 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.744392 kubelet[3426]: E0114 13:06:56.744305 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.744392 kubelet[3426]: W0114 13:06:56.744334 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.744392 kubelet[3426]: E0114 13:06:56.744356 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.744806 kubelet[3426]: E0114 13:06:56.744763 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.744806 kubelet[3426]: W0114 13:06:56.744785 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.745065 kubelet[3426]: E0114 13:06:56.745042 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.745503 kubelet[3426]: E0114 13:06:56.745383 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.745503 kubelet[3426]: W0114 13:06:56.745397 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.745756 kubelet[3426]: E0114 13:06:56.745626 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.745988 kubelet[3426]: E0114 13:06:56.745963 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.746233 kubelet[3426]: W0114 13:06:56.746151 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.746600 kubelet[3426]: E0114 13:06:56.746391 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.746913 kubelet[3426]: E0114 13:06:56.746899 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.747085 kubelet[3426]: W0114 13:06:56.747010 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.747231 kubelet[3426]: E0114 13:06:56.747219 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.747559 kubelet[3426]: E0114 13:06:56.747477 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.747559 kubelet[3426]: W0114 13:06:56.747509 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.747792 kubelet[3426]: E0114 13:06:56.747674 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.749948 kubelet[3426]: E0114 13:06:56.749933 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.750254 kubelet[3426]: W0114 13:06:56.750175 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.750989 kubelet[3426]: E0114 13:06:56.750922 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.750989 kubelet[3426]: W0114 13:06:56.750937 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.751798 kubelet[3426]: E0114 13:06:56.751782 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.751923 kubelet[3426]: W0114 13:06:56.751908 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.753522 kubelet[3426]: E0114 13:06:56.753498 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.753522 kubelet[3426]: W0114 13:06:56.753515 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.753658 kubelet[3426]: E0114 13:06:56.753533 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.753658 kubelet[3426]: E0114 13:06:56.753564 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.754628 kubelet[3426]: E0114 13:06:56.754600 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.754628 kubelet[3426]: W0114 13:06:56.754626 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.754628 kubelet[3426]: E0114 13:06:56.754643 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.755264 kubelet[3426]: E0114 13:06:56.755244 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.755355 kubelet[3426]: E0114 13:06:56.755280 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.756285 kubelet[3426]: E0114 13:06:56.756263 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.756285 kubelet[3426]: W0114 13:06:56.756283 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.756446 kubelet[3426]: E0114 13:06:56.756301 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.757647 kubelet[3426]: E0114 13:06:56.757626 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.757647 kubelet[3426]: W0114 13:06:56.757646 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.757864 kubelet[3426]: E0114 13:06:56.757663 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:06:56.761720 kubelet[3426]: E0114 13:06:56.761671 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:06:56.761720 kubelet[3426]: W0114 13:06:56.761702 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:06:56.761886 kubelet[3426]: E0114 13:06:56.761728 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:06:56.779054 systemd[1]: Started cri-containerd-0efe0ec1d3205b8a66cd7c5f2b3c0f43ffa26139d371bacea5fb35b2d4ca88fc.scope - libcontainer container 0efe0ec1d3205b8a66cd7c5f2b3c0f43ffa26139d371bacea5fb35b2d4ca88fc. Jan 14 13:06:56.817183 containerd[1722]: time="2025-01-14T13:06:56.817098307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkwbz,Uid:4727a238-71f1-4fbe-a95a-3040c1779a02,Namespace:calico-system,Attempt:0,} returns sandbox id \"0efe0ec1d3205b8a66cd7c5f2b3c0f43ffa26139d371bacea5fb35b2d4ca88fc\"" Jan 14 13:06:58.420627 kubelet[3426]: E0114 13:06:58.420494 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:06:59.011083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount14599282.mount: Deactivated successfully. 
Jan 14 13:06:59.793039 containerd[1722]: time="2025-01-14T13:06:59.792985655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:59.797685 containerd[1722]: time="2025-01-14T13:06:59.797617204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 14 13:06:59.802248 containerd[1722]: time="2025-01-14T13:06:59.802212153Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:59.809026 containerd[1722]: time="2025-01-14T13:06:59.808959925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:59.810138 containerd[1722]: time="2025-01-14T13:06:59.809645432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.18785504s" Jan 14 13:06:59.810138 containerd[1722]: time="2025-01-14T13:06:59.809682532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 14 13:06:59.810803 containerd[1722]: time="2025-01-14T13:06:59.810554042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 14 13:06:59.828273 containerd[1722]: time="2025-01-14T13:06:59.828082928Z" level=info msg="CreateContainer within sandbox \"d541ff0bbae7bff6ab8cd79f648efb21edcac883e26346803f1a82a514887690\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 13:06:59.882263 containerd[1722]: time="2025-01-14T13:06:59.882212604Z" level=info msg="CreateContainer within sandbox \"d541ff0bbae7bff6ab8cd79f648efb21edcac883e26346803f1a82a514887690\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"650f4f19c6b9626838c82935f7211da132e43bb4f7b1966d8f34b0264ea230ac\"" Jan 14 13:06:59.883070 containerd[1722]: time="2025-01-14T13:06:59.882865311Z" level=info msg="StartContainer for \"650f4f19c6b9626838c82935f7211da132e43bb4f7b1966d8f34b0264ea230ac\"" Jan 14 13:06:59.912873 systemd[1]: Started cri-containerd-650f4f19c6b9626838c82935f7211da132e43bb4f7b1966d8f34b0264ea230ac.scope - libcontainer container 650f4f19c6b9626838c82935f7211da132e43bb4f7b1966d8f34b0264ea230ac. Jan 14 13:06:59.958503 containerd[1722]: time="2025-01-14T13:06:59.958343414Z" level=info msg="StartContainer for \"650f4f19c6b9626838c82935f7211da132e43bb4f7b1966d8f34b0264ea230ac\" returns successfully" Jan 14 13:07:00.421047 kubelet[3426]: E0114 13:07:00.420024 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:07:00.557099 kubelet[3426]: E0114 13:07:00.557059 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.557301 kubelet[3426]: W0114 13:07:00.557283 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.557427 kubelet[3426]: E0114 13:07:00.557413 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.558721 kubelet[3426]: E0114 13:07:00.557898 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.558927 kubelet[3426]: W0114 13:07:00.558860 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.558927 kubelet[3426]: E0114 13:07:00.558891 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.559317 kubelet[3426]: E0114 13:07:00.559251 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.559317 kubelet[3426]: W0114 13:07:00.559265 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.559317 kubelet[3426]: E0114 13:07:00.559282 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.559810 kubelet[3426]: E0114 13:07:00.559674 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.559810 kubelet[3426]: W0114 13:07:00.559702 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.559810 kubelet[3426]: E0114 13:07:00.559728 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.562146 kubelet[3426]: E0114 13:07:00.562033 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.562146 kubelet[3426]: W0114 13:07:00.562055 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.562146 kubelet[3426]: E0114 13:07:00.562073 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.562545 kubelet[3426]: E0114 13:07:00.562451 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.562545 kubelet[3426]: W0114 13:07:00.562477 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.562545 kubelet[3426]: E0114 13:07:00.562494 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.562988 kubelet[3426]: E0114 13:07:00.562899 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.562988 kubelet[3426]: W0114 13:07:00.562925 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.562988 kubelet[3426]: E0114 13:07:00.562941 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.563459 kubelet[3426]: E0114 13:07:00.563335 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.563459 kubelet[3426]: W0114 13:07:00.563348 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.563459 kubelet[3426]: E0114 13:07:00.563363 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.563835 kubelet[3426]: E0114 13:07:00.563771 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.563835 kubelet[3426]: W0114 13:07:00.563787 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.563835 kubelet[3426]: E0114 13:07:00.563803 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.565237 kubelet[3426]: E0114 13:07:00.565141 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.565237 kubelet[3426]: W0114 13:07:00.565155 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.565237 kubelet[3426]: E0114 13:07:00.565172 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.565665 kubelet[3426]: E0114 13:07:00.565542 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.565665 kubelet[3426]: W0114 13:07:00.565556 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.565665 kubelet[3426]: E0114 13:07:00.565572 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.566563 kubelet[3426]: E0114 13:07:00.565890 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.566563 kubelet[3426]: W0114 13:07:00.565903 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.566563 kubelet[3426]: E0114 13:07:00.565918 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.566563 kubelet[3426]: E0114 13:07:00.566156 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.566563 kubelet[3426]: W0114 13:07:00.566167 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.566563 kubelet[3426]: E0114 13:07:00.566183 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.566563 kubelet[3426]: E0114 13:07:00.566366 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.566563 kubelet[3426]: W0114 13:07:00.566376 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.566563 kubelet[3426]: E0114 13:07:00.566390 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.566563 kubelet[3426]: E0114 13:07:00.566569 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.567076 kubelet[3426]: W0114 13:07:00.566579 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.567076 kubelet[3426]: E0114 13:07:00.566594 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.569674 kubelet[3426]: E0114 13:07:00.569195 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.569674 kubelet[3426]: W0114 13:07:00.569223 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.569674 kubelet[3426]: E0114 13:07:00.569239 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.570301 kubelet[3426]: E0114 13:07:00.570046 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.570301 kubelet[3426]: W0114 13:07:00.570061 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.570301 kubelet[3426]: E0114 13:07:00.570094 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.570988 kubelet[3426]: E0114 13:07:00.570706 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.570988 kubelet[3426]: W0114 13:07:00.570721 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.570988 kubelet[3426]: E0114 13:07:00.570775 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.571471 kubelet[3426]: E0114 13:07:00.571375 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.571471 kubelet[3426]: W0114 13:07:00.571388 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.571902 kubelet[3426]: E0114 13:07:00.571615 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.572156 kubelet[3426]: E0114 13:07:00.572018 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.572156 kubelet[3426]: W0114 13:07:00.572030 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.572302 kubelet[3426]: E0114 13:07:00.572247 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.572595 kubelet[3426]: E0114 13:07:00.572506 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.572595 kubelet[3426]: W0114 13:07:00.572518 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.572782 kubelet[3426]: E0114 13:07:00.572683 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.573017 kubelet[3426]: E0114 13:07:00.572893 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.573017 kubelet[3426]: W0114 13:07:00.572904 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.573017 kubelet[3426]: E0114 13:07:00.572921 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.573353 kubelet[3426]: E0114 13:07:00.573255 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.573353 kubelet[3426]: W0114 13:07:00.573268 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.573353 kubelet[3426]: E0114 13:07:00.573306 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.573762 kubelet[3426]: E0114 13:07:00.573680 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.573762 kubelet[3426]: W0114 13:07:00.573704 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.574064 kubelet[3426]: E0114 13:07:00.573799 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.574276 kubelet[3426]: E0114 13:07:00.574264 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.574363 kubelet[3426]: W0114 13:07:00.574351 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.574543 kubelet[3426]: E0114 13:07:00.574513 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.574673 kubelet[3426]: E0114 13:07:00.574620 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.574673 kubelet[3426]: W0114 13:07:00.574629 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.574922 kubelet[3426]: E0114 13:07:00.574867 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.575132 kubelet[3426]: E0114 13:07:00.575103 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.575132 kubelet[3426]: W0114 13:07:00.575116 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.575475 kubelet[3426]: E0114 13:07:00.575410 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.576009 kubelet[3426]: E0114 13:07:00.575863 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.576009 kubelet[3426]: W0114 13:07:00.575880 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.576009 kubelet[3426]: E0114 13:07:00.575905 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.576199 kubelet[3426]: E0114 13:07:00.576117 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.576199 kubelet[3426]: W0114 13:07:00.576128 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.576199 kubelet[3426]: E0114 13:07:00.576144 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.576740 kubelet[3426]: E0114 13:07:00.576515 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.576740 kubelet[3426]: W0114 13:07:00.576528 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.576740 kubelet[3426]: E0114 13:07:00.576609 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.576900 kubelet[3426]: E0114 13:07:00.576789 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.576900 kubelet[3426]: W0114 13:07:00.576799 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.576900 kubelet[3426]: E0114 13:07:00.576814 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:07:00.577039 kubelet[3426]: E0114 13:07:00.577016 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.577039 kubelet[3426]: W0114 13:07:00.577025 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.577039 kubelet[3426]: E0114 13:07:00.577040 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:07:00.578424 kubelet[3426]: E0114 13:07:00.578022 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:07:00.578424 kubelet[3426]: W0114 13:07:00.578037 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:07:00.578424 kubelet[3426]: E0114 13:07:00.578053 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 14 13:07:00.579848 kubelet[3426]: I0114 13:07:00.579365 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-76bd89d686-b9s8l" podStartSLOduration=1.3896300639999999 podStartE2EDuration="4.579319719s" podCreationTimestamp="2025-01-14 13:06:56 +0000 UTC" firstStartedPulling="2025-01-14 13:06:56.620377982 +0000 UTC m=+18.339168627" lastFinishedPulling="2025-01-14 13:06:59.810067637 +0000 UTC m=+21.528858282" observedRunningTime="2025-01-14 13:07:00.550638614 +0000 UTC m=+22.269429259" watchObservedRunningTime="2025-01-14 13:07:00.579319719 +0000 UTC m=+22.298110364"
Jan 14 13:07:01.510462 containerd[1722]: time="2025-01-14T13:07:01.510166921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:07:01.516066 containerd[1722]: time="2025-01-14T13:07:01.515950283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 14 13:07:01.525262 containerd[1722]: time="2025-01-14T13:07:01.525203281Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:07:01.533352 containerd[1722]: time="2025-01-14T13:07:01.533294767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:07:01.534063 containerd[1722]: time="2025-01-14T13:07:01.533919474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.723326932s"
Jan 14 13:07:01.534063 containerd[1722]: time="2025-01-14T13:07:01.533959074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 14 13:07:01.535800 containerd[1722]: time="2025-01-14T13:07:01.535685193Z" level=info msg="CreateContainer within sandbox \"0efe0ec1d3205b8a66cd7c5f2b3c0f43ffa26139d371bacea5fb35b2d4ca88fc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 14 13:07:01.574655 kubelet[3426]: E0114 13:07:01.574626 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:07:01.574655 kubelet[3426]: W0114 13:07:01.574649 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:07:01.575413 kubelet[3426]: E0114 13:07:01.574677 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:07:01.575413 kubelet[3426]: E0114 13:07:01.574941 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:07:01.575413 kubelet[3426]: W0114 13:07:01.574955 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:07:01.575413 kubelet[3426]: E0114 13:07:01.574981 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 14 13:07:01.582759 kubelet[3426]: E0114 13:07:01.582708 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:07:01.582759 kubelet[3426]: W0114 13:07:01.582719 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:07:01.582759 kubelet[3426]: E0114 13:07:01.582734 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 14 13:07:01.583241 kubelet[3426]: E0114 13:07:01.583225 3426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:07:01.583241 kubelet[3426]: W0114 13:07:01.583237 3426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:07:01.583381 kubelet[3426]: E0114 13:07:01.583253 3426 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:07:01.583532 containerd[1722]: time="2025-01-14T13:07:01.583495901Z" level=info msg="CreateContainer within sandbox \"0efe0ec1d3205b8a66cd7c5f2b3c0f43ffa26139d371bacea5fb35b2d4ca88fc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6ace47daf1f29b9f32b263501481ebf6526f082a4a4d5468db11440b82559b74\""
Jan 14 13:07:01.584160 containerd[1722]: time="2025-01-14T13:07:01.584131608Z" level=info msg="StartContainer for \"6ace47daf1f29b9f32b263501481ebf6526f082a4a4d5468db11440b82559b74\""
Jan 14 13:07:01.620106 systemd[1]: run-containerd-runc-k8s.io-6ace47daf1f29b9f32b263501481ebf6526f082a4a4d5468db11440b82559b74-runc.9xzRsK.mount: Deactivated successfully.
Jan 14 13:07:01.631844 systemd[1]: Started cri-containerd-6ace47daf1f29b9f32b263501481ebf6526f082a4a4d5468db11440b82559b74.scope - libcontainer container 6ace47daf1f29b9f32b263501481ebf6526f082a4a4d5468db11440b82559b74.
Jan 14 13:07:01.670876 containerd[1722]: time="2025-01-14T13:07:01.670662228Z" level=info msg="StartContainer for \"6ace47daf1f29b9f32b263501481ebf6526f082a4a4d5468db11440b82559b74\" returns successfully"
Jan 14 13:07:01.678158 systemd[1]: cri-containerd-6ace47daf1f29b9f32b263501481ebf6526f082a4a4d5468db11440b82559b74.scope: Deactivated successfully.
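The repeated driver-call failures above all stem from one condition: the kubelet execs the FlexVolume driver binary and JSON-decodes its stdout, and because the `nodeagent~uds/uds` executable is missing, stdout is empty. A minimal sketch of that call path, assuming nothing beyond what the log shows (the helper name is hypothetical, and Python's JSON error text differs from Go's "unexpected end of JSON input"):

```python
import json
import subprocess

def call_flexvolume_driver(path: str, op: str) -> dict:
    """Approximation of the kubelet's driver-call: exec the driver binary
    with the operation as argv[1] and JSON-decode whatever it prints."""
    try:
        out = subprocess.run([path, op], capture_output=True, text=True).stdout
    except FileNotFoundError:
        # Corresponds to: "executable file not found in $PATH, output: \"\""
        out = ""
    # Decoding empty output fails -- this is the failure the log repeats.
    return json.loads(out)

try:
    call_flexvolume_driver(
        "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
        "init",
    )
except json.JSONDecodeError as e:
    print("driver call failed:", e.msg)
```

A working driver would answer `init` with a JSON status object on stdout, which is why the errors are triggered on every plugin-probe pass rather than once.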
Jan 14 13:07:02.421334 kubelet[3426]: E0114 13:07:02.419826 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0"
Jan 14 13:07:02.561948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ace47daf1f29b9f32b263501481ebf6526f082a4a4d5468db11440b82559b74-rootfs.mount: Deactivated successfully.
Jan 14 13:07:02.960600 containerd[1722]: time="2025-01-14T13:07:02.960529770Z" level=info msg="shim disconnected" id=6ace47daf1f29b9f32b263501481ebf6526f082a4a4d5468db11440b82559b74 namespace=k8s.io
Jan 14 13:07:02.960600 containerd[1722]: time="2025-01-14T13:07:02.960594371Z" level=warning msg="cleaning up after shim disconnected" id=6ace47daf1f29b9f32b263501481ebf6526f082a4a4d5468db11440b82559b74 namespace=k8s.io
Jan 14 13:07:02.960600 containerd[1722]: time="2025-01-14T13:07:02.960605471Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:07:03.533295 containerd[1722]: time="2025-01-14T13:07:03.533253306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 14 13:07:04.421117 kubelet[3426]: E0114 13:07:04.419778 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0"
Jan 14 13:07:06.421184 kubelet[3426]: E0114 13:07:06.419847 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6l9p"
podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0"
Jan 14 13:07:08.420187 kubelet[3426]: E0114 13:07:08.420154 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0"
Jan 14 13:07:09.009563 containerd[1722]: time="2025-01-14T13:07:09.009519505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:07:09.017871 containerd[1722]: time="2025-01-14T13:07:09.017827095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 14 13:07:09.021713 containerd[1722]: time="2025-01-14T13:07:09.021619936Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:07:09.027485 containerd[1722]: time="2025-01-14T13:07:09.027409398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:07:09.028555 containerd[1722]: time="2025-01-14T13:07:09.028080605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.494781698s"
Jan 14 13:07:09.028555 containerd[1722]: time="2025-01-14T13:07:09.028116306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 14 13:07:09.030485 containerd[1722]: time="2025-01-14T13:07:09.030455631Z" level=info msg="CreateContainer within sandbox \"0efe0ec1d3205b8a66cd7c5f2b3c0f43ffa26139d371bacea5fb35b2d4ca88fc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 14 13:07:09.080615 containerd[1722]: time="2025-01-14T13:07:09.080565571Z" level=info msg="CreateContainer within sandbox \"0efe0ec1d3205b8a66cd7c5f2b3c0f43ffa26139d371bacea5fb35b2d4ca88fc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ab14f1833f0490926e31775de01b431a3ffb327ca39ffe5bedd4b652181f7d60\""
Jan 14 13:07:09.081249 containerd[1722]: time="2025-01-14T13:07:09.081221778Z" level=info msg="StartContainer for \"ab14f1833f0490926e31775de01b431a3ffb327ca39ffe5bedd4b652181f7d60\""
Jan 14 13:07:09.115009 systemd[1]: run-containerd-runc-k8s.io-ab14f1833f0490926e31775de01b431a3ffb327ca39ffe5bedd4b652181f7d60-runc.uKc9sZ.mount: Deactivated successfully.
Jan 14 13:07:09.122849 systemd[1]: Started cri-containerd-ab14f1833f0490926e31775de01b431a3ffb327ca39ffe5bedd4b652181f7d60.scope - libcontainer container ab14f1833f0490926e31775de01b431a3ffb327ca39ffe5bedd4b652181f7d60.
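The 5.49 s pull time containerd reports for the cni image can be cross-checked against the event timestamps in the log: the PullImage request was logged at 13:07:03.533253306 and the "Pulled image" event at 13:07:09.028080605. A small sketch of that arithmetic (the tens-of-microseconds slack between pull completion and log emission is an assumption):

```python
from datetime import datetime

def ts(s: str) -> datetime:
    # containerd timestamps carry nanoseconds; Python datetimes keep
    # microseconds, so truncate to 26 characters before parsing.
    return datetime.strptime(s[:26], "%Y-%m-%dT%H:%M:%S.%f")

pull_start = ts("2025-01-14T13:07:03.533253306")  # PullImage request logged
pulled     = ts("2025-01-14T13:07:09.028080605")  # "Pulled image ..." event

elapsed = (pulled - pull_start).total_seconds()
print(f"elapsed between events: {elapsed:.6f}s")  # close to the reported 5.494781698s
```

The residual difference of under a millisecond is the time between the pull finishing and the log line being emitted.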
Jan 14 13:07:09.156450 containerd[1722]: time="2025-01-14T13:07:09.156395288Z" level=info msg="StartContainer for \"ab14f1833f0490926e31775de01b431a3ffb327ca39ffe5bedd4b652181f7d60\" returns successfully" Jan 14 13:07:10.422992 kubelet[3426]: E0114 13:07:10.421889 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:07:10.600175 containerd[1722]: time="2025-01-14T13:07:10.600112158Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:07:10.602536 systemd[1]: cri-containerd-ab14f1833f0490926e31775de01b431a3ffb327ca39ffe5bedd4b652181f7d60.scope: Deactivated successfully. Jan 14 13:07:10.628345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab14f1833f0490926e31775de01b431a3ffb327ca39ffe5bedd4b652181f7d60-rootfs.mount: Deactivated successfully. 
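The reload error above is an ordering artifact: the install-cni container writes /etc/cni/net.d/calico-kubeconfig before any network config file, so the WRITE event triggers a reload while the directory still contains no *.conf or *.conflist. Once a conflist appears, the runtime stops reporting "cni plugin not initialized". A sketch of the general shape of such a file, written to a temporary stand-in directory (field values are illustrative, not Calico's actual generated config):

```python
import json
import os
import tempfile

conf_dir = tempfile.mkdtemp()  # stand-in for /etc/cni/net.d

# Minimal CNI network-list of the kind install-cni eventually writes.
conflist = {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [{"type": "calico"}],
}

path = os.path.join(conf_dir, "10-calico.conflist")
with open(path, "w") as f:
    json.dump(conflist, f, indent=2)

# The runtime loads *.conf / *.conflist files from this directory in
# lexical order, hence the conventional numeric prefix.
print(sorted(os.listdir(conf_dir)))
```

Until that file lands, every pod needing pod networking (here csi-node-driver-z6l9p) stays in the "network is not ready" retry loop seen throughout this section.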
Jan 14 13:07:10.666257 kubelet[3426]: I0114 13:07:10.666023 3426 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 14 13:07:11.131622 kubelet[3426]: I0114 13:07:10.713302 3426 topology_manager.go:215] "Topology Admit Handler" podUID="400f3e16-4883-45bf-811c-322b770038b8" podNamespace="kube-system" podName="coredns-76f75df574-txqcv" Jan 14 13:07:11.131622 kubelet[3426]: I0114 13:07:10.715548 3426 topology_manager.go:215] "Topology Admit Handler" podUID="5de7d327-5d34-4f9c-b581-c52c0a00d0b7" podNamespace="kube-system" podName="coredns-76f75df574-g846c" Jan 14 13:07:11.131622 kubelet[3426]: I0114 13:07:10.717249 3426 topology_manager.go:215] "Topology Admit Handler" podUID="1b81bba4-2ff3-462c-8b85-47035070eff8" podNamespace="calico-system" podName="calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:11.131622 kubelet[3426]: I0114 13:07:10.717618 3426 topology_manager.go:215] "Topology Admit Handler" podUID="f24a6417-91e6-4261-aa71-8c79526d4ae0" podNamespace="calico-apiserver" podName="calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:11.131622 kubelet[3426]: I0114 13:07:10.719753 3426 topology_manager.go:215] "Topology Admit Handler" podUID="1798181d-c5b0-4589-8e4b-80c339c21d34" podNamespace="calico-apiserver" podName="calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:11.131622 kubelet[3426]: W0114 13:07:10.721388 3426 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.1.0-a-f264a924af" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-a-f264a924af' and this object Jan 14 13:07:11.131622 kubelet[3426]: E0114 13:07:10.721419 3426 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.1.0-a-f264a924af" cannot list resource "configmaps" in API group "" in 
the namespace "kube-system": no relationship found between node 'ci-4186.1.0-a-f264a924af' and this object Jan 14 13:07:10.730184 systemd[1]: Created slice kubepods-burstable-pod400f3e16_4883_45bf_811c_322b770038b8.slice - libcontainer container kubepods-burstable-pod400f3e16_4883_45bf_811c_322b770038b8.slice. Jan 14 13:07:11.132286 kubelet[3426]: I0114 13:07:10.845234 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb4jd\" (UniqueName: \"kubernetes.io/projected/1b81bba4-2ff3-462c-8b85-47035070eff8-kube-api-access-lb4jd\") pod \"calico-kube-controllers-fddf9dc45-phpck\" (UID: \"1b81bba4-2ff3-462c-8b85-47035070eff8\") " pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:11.132286 kubelet[3426]: I0114 13:07:10.845350 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de7d327-5d34-4f9c-b581-c52c0a00d0b7-config-volume\") pod \"coredns-76f75df574-g846c\" (UID: \"5de7d327-5d34-4f9c-b581-c52c0a00d0b7\") " pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:11.132286 kubelet[3426]: I0114 13:07:10.845396 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/400f3e16-4883-45bf-811c-322b770038b8-config-volume\") pod \"coredns-76f75df574-txqcv\" (UID: \"400f3e16-4883-45bf-811c-322b770038b8\") " pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:11.132286 kubelet[3426]: I0114 13:07:10.845422 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f24a6417-91e6-4261-aa71-8c79526d4ae0-calico-apiserver-certs\") pod \"calico-apiserver-6dcf9d67d5-qgrsw\" (UID: \"f24a6417-91e6-4261-aa71-8c79526d4ae0\") " pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 
13:07:11.132286 kubelet[3426]: I0114 13:07:10.845457 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmtll\" (UniqueName: \"kubernetes.io/projected/1798181d-c5b0-4589-8e4b-80c339c21d34-kube-api-access-bmtll\") pod \"calico-apiserver-6dcf9d67d5-mfw8m\" (UID: \"1798181d-c5b0-4589-8e4b-80c339c21d34\") " pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:10.739316 systemd[1]: Created slice kubepods-burstable-pod5de7d327_5d34_4f9c_b581_c52c0a00d0b7.slice - libcontainer container kubepods-burstable-pod5de7d327_5d34_4f9c_b581_c52c0a00d0b7.slice. Jan 14 13:07:11.132656 kubelet[3426]: I0114 13:07:10.845492 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb9nd\" (UniqueName: \"kubernetes.io/projected/5de7d327-5d34-4f9c-b581-c52c0a00d0b7-kube-api-access-lb9nd\") pod \"coredns-76f75df574-g846c\" (UID: \"5de7d327-5d34-4f9c-b581-c52c0a00d0b7\") " pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:11.132656 kubelet[3426]: I0114 13:07:10.845613 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsztl\" (UniqueName: \"kubernetes.io/projected/400f3e16-4883-45bf-811c-322b770038b8-kube-api-access-zsztl\") pod \"coredns-76f75df574-txqcv\" (UID: \"400f3e16-4883-45bf-811c-322b770038b8\") " pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:11.132656 kubelet[3426]: I0114 13:07:10.845669 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b81bba4-2ff3-462c-8b85-47035070eff8-tigera-ca-bundle\") pod \"calico-kube-controllers-fddf9dc45-phpck\" (UID: \"1b81bba4-2ff3-462c-8b85-47035070eff8\") " pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:11.132656 kubelet[3426]: I0114 13:07:10.845719 3426 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcffh\" (UniqueName: \"kubernetes.io/projected/f24a6417-91e6-4261-aa71-8c79526d4ae0-kube-api-access-gcffh\") pod \"calico-apiserver-6dcf9d67d5-qgrsw\" (UID: \"f24a6417-91e6-4261-aa71-8c79526d4ae0\") " pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:11.132656 kubelet[3426]: I0114 13:07:10.845758 3426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1798181d-c5b0-4589-8e4b-80c339c21d34-calico-apiserver-certs\") pod \"calico-apiserver-6dcf9d67d5-mfw8m\" (UID: \"1798181d-c5b0-4589-8e4b-80c339c21d34\") " pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:10.750833 systemd[1]: Created slice kubepods-besteffort-pod1b81bba4_2ff3_462c_8b85_47035070eff8.slice - libcontainer container kubepods-besteffort-pod1b81bba4_2ff3_462c_8b85_47035070eff8.slice. Jan 14 13:07:10.759576 systemd[1]: Created slice kubepods-besteffort-podf24a6417_91e6_4261_aa71_8c79526d4ae0.slice - libcontainer container kubepods-besteffort-podf24a6417_91e6_4261_aa71_8c79526d4ae0.slice. Jan 14 13:07:10.773205 systemd[1]: Created slice kubepods-besteffort-pod1798181d_c5b0_4589_8e4b_80c339c21d34.slice - libcontainer container kubepods-besteffort-pod1798181d_c5b0_4589_8e4b_80c339c21d34.slice. 
Jan 14 13:07:11.438827 containerd[1722]: time="2025-01-14T13:07:11.438682716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:0,}" Jan 14 13:07:11.454786 containerd[1722]: time="2025-01-14T13:07:11.454737512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:07:11.471950 containerd[1722]: time="2025-01-14T13:07:11.471896022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:07:11.734372 containerd[1722]: time="2025-01-14T13:07:11.734224932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:0,}" Jan 14 13:07:11.754295 containerd[1722]: time="2025-01-14T13:07:11.754246476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:0,}" Jan 14 13:07:12.249535 containerd[1722]: time="2025-01-14T13:07:12.249454834Z" level=info msg="shim disconnected" id=ab14f1833f0490926e31775de01b431a3ffb327ca39ffe5bedd4b652181f7d60 namespace=k8s.io Jan 14 13:07:12.249535 containerd[1722]: time="2025-01-14T13:07:12.249515335Z" level=warning msg="cleaning up after shim disconnected" id=ab14f1833f0490926e31775de01b431a3ffb327ca39ffe5bedd4b652181f7d60 namespace=k8s.io Jan 14 13:07:12.249535 containerd[1722]: time="2025-01-14T13:07:12.249527235Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:07:12.426285 systemd[1]: Created slice kubepods-besteffort-poda334eebb_fcba_4d16_8280_bef7ba8849b0.slice - libcontainer container 
kubepods-besteffort-poda334eebb_fcba_4d16_8280_bef7ba8849b0.slice. Jan 14 13:07:12.431569 containerd[1722]: time="2025-01-14T13:07:12.430956755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:0,}" Jan 14 13:07:12.559893 containerd[1722]: time="2025-01-14T13:07:12.559463927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 14 13:07:12.667965 containerd[1722]: time="2025-01-14T13:07:12.667910953Z" level=error msg="Failed to destroy network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.672859 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705-shm.mount: Deactivated successfully. 
Jan 14 13:07:12.674858 containerd[1722]: time="2025-01-14T13:07:12.673009516Z" level=error msg="encountered an error cleaning up failed sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.675067 containerd[1722]: time="2025-01-14T13:07:12.673371020Z" level=error msg="Failed to destroy network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.681170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125-shm.mount: Deactivated successfully. 
Jan 14 13:07:12.683307 containerd[1722]: time="2025-01-14T13:07:12.683240141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.683952 kubelet[3426]: E0114 13:07:12.683927 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.684709 kubelet[3426]: E0114 13:07:12.684536 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:12.684709 kubelet[3426]: E0114 13:07:12.684589 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:12.685100 kubelet[3426]: E0114 13:07:12.684898 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" podUID="1798181d-c5b0-4589-8e4b-80c339c21d34" Jan 14 13:07:12.685974 containerd[1722]: time="2025-01-14T13:07:12.685262266Z" level=error msg="encountered an error cleaning up failed sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.686367 containerd[1722]: time="2025-01-14T13:07:12.686333879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.687718 kubelet[3426]: E0114 13:07:12.687680 3426 remote_runtime.go:193] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.687808 kubelet[3426]: E0114 13:07:12.687740 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:12.687808 kubelet[3426]: E0114 13:07:12.687767 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:12.688795 kubelet[3426]: E0114 13:07:12.688771 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-g846c" podUID="5de7d327-5d34-4f9c-b581-c52c0a00d0b7" Jan 14 13:07:12.694895 containerd[1722]: time="2025-01-14T13:07:12.694826483Z" level=error msg="Failed to destroy network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.695725 containerd[1722]: time="2025-01-14T13:07:12.695193287Z" level=error msg="encountered an error cleaning up failed sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.695725 containerd[1722]: time="2025-01-14T13:07:12.695267488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.696761 kubelet[3426]: E0114 13:07:12.695477 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.696761 kubelet[3426]: E0114 
13:07:12.695526 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:12.696761 kubelet[3426]: E0114 13:07:12.695553 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:12.696926 kubelet[3426]: E0114 13:07:12.695609 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-txqcv" podUID="400f3e16-4883-45bf-811c-322b770038b8" Jan 14 13:07:12.699563 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6-shm.mount: Deactivated successfully. 
Jan 14 13:07:12.714760 containerd[1722]: time="2025-01-14T13:07:12.714706126Z" level=error msg="Failed to destroy network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.715736 containerd[1722]: time="2025-01-14T13:07:12.715076330Z" level=error msg="encountered an error cleaning up failed sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.715736 containerd[1722]: time="2025-01-14T13:07:12.715152231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.717960 kubelet[3426]: E0114 13:07:12.715416 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.717960 kubelet[3426]: E0114 13:07:12.715479 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:12.717960 kubelet[3426]: E0114 13:07:12.715516 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:12.718110 kubelet[3426]: E0114 13:07:12.715600 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" podUID="1b81bba4-2ff3-462c-8b85-47035070eff8" Jan 14 13:07:12.720419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa-shm.mount: Deactivated successfully. 
Jan 14 13:07:12.726377 containerd[1722]: time="2025-01-14T13:07:12.726316168Z" level=error msg="Failed to destroy network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.726804 containerd[1722]: time="2025-01-14T13:07:12.726763273Z" level=error msg="encountered an error cleaning up failed sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.726894 containerd[1722]: time="2025-01-14T13:07:12.726850674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.727184 kubelet[3426]: E0114 13:07:12.727161 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.727282 kubelet[3426]: E0114 13:07:12.727253 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:12.727432 kubelet[3426]: E0114 13:07:12.727293 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:12.728196 kubelet[3426]: E0114 13:07:12.728162 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" podUID="f24a6417-91e6-4261-aa71-8c79526d4ae0" Jan 14 13:07:12.732070 containerd[1722]: time="2025-01-14T13:07:12.732032838Z" level=error msg="Failed to destroy network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.732356 containerd[1722]: time="2025-01-14T13:07:12.732325041Z" level=error msg="encountered an error cleaning up failed sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.732456 containerd[1722]: time="2025-01-14T13:07:12.732417843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.732677 kubelet[3426]: E0114 13:07:12.732656 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:12.732784 kubelet[3426]: E0114 13:07:12.732722 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:12.732784 kubelet[3426]: E0114 13:07:12.732752 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:12.732892 kubelet[3426]: E0114 13:07:12.732874 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:07:13.556546 kubelet[3426]: I0114 13:07:13.556481 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4" Jan 14 13:07:13.557708 containerd[1722]: time="2025-01-14T13:07:13.557319734Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" Jan 14 13:07:13.557708 containerd[1722]: time="2025-01-14T13:07:13.557539536Z" level=info msg="Ensure that sandbox 93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4 in task-service has been cleanup successfully" Jan 14 13:07:13.558256 containerd[1722]: 
time="2025-01-14T13:07:13.558214045Z" level=info msg="TearDown network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" successfully" Jan 14 13:07:13.558405 containerd[1722]: time="2025-01-14T13:07:13.558316746Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" returns successfully" Jan 14 13:07:13.559496 containerd[1722]: time="2025-01-14T13:07:13.558864653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:1,}" Jan 14 13:07:13.560348 kubelet[3426]: I0114 13:07:13.560093 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa" Jan 14 13:07:13.560632 containerd[1722]: time="2025-01-14T13:07:13.560601174Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\"" Jan 14 13:07:13.560865 containerd[1722]: time="2025-01-14T13:07:13.560840277Z" level=info msg="Ensure that sandbox 6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa in task-service has been cleanup successfully" Jan 14 13:07:13.561941 kubelet[3426]: I0114 13:07:13.561892 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4" Jan 14 13:07:13.562161 containerd[1722]: time="2025-01-14T13:07:13.562135193Z" level=info msg="TearDown network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" successfully" Jan 14 13:07:13.562250 containerd[1722]: time="2025-01-14T13:07:13.562160893Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" returns successfully" Jan 14 13:07:13.564788 containerd[1722]: time="2025-01-14T13:07:13.562572198Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:1,}" Jan 14 13:07:13.564788 containerd[1722]: time="2025-01-14T13:07:13.564015116Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:13.564788 containerd[1722]: time="2025-01-14T13:07:13.564215618Z" level=info msg="Ensure that sandbox 7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125 in task-service has been cleanup successfully" Jan 14 13:07:13.565250 kubelet[3426]: I0114 13:07:13.563533 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125" Jan 14 13:07:13.565321 containerd[1722]: time="2025-01-14T13:07:13.564788825Z" level=info msg="TearDown network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" successfully" Jan 14 13:07:13.565321 containerd[1722]: time="2025-01-14T13:07:13.564806425Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" returns successfully" Jan 14 13:07:13.565822 containerd[1722]: time="2025-01-14T13:07:13.565796637Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\"" Jan 14 13:07:13.566031 containerd[1722]: time="2025-01-14T13:07:13.566001740Z" level=info msg="Ensure that sandbox 7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4 in task-service has been cleanup successfully" Jan 14 13:07:13.567718 containerd[1722]: time="2025-01-14T13:07:13.567496158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:1,}" Jan 14 13:07:13.568195 containerd[1722]: time="2025-01-14T13:07:13.568169166Z" level=info msg="TearDown network for sandbox 
\"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" successfully" Jan 14 13:07:13.568266 containerd[1722]: time="2025-01-14T13:07:13.568196367Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" returns successfully" Jan 14 13:07:13.568316 kubelet[3426]: I0114 13:07:13.568230 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705" Jan 14 13:07:13.569373 containerd[1722]: time="2025-01-14T13:07:13.568672073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:1,}" Jan 14 13:07:13.569937 containerd[1722]: time="2025-01-14T13:07:13.569597084Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\"" Jan 14 13:07:13.569937 containerd[1722]: time="2025-01-14T13:07:13.569802186Z" level=info msg="Ensure that sandbox b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705 in task-service has been cleanup successfully" Jan 14 13:07:13.570112 containerd[1722]: time="2025-01-14T13:07:13.570091190Z" level=info msg="TearDown network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" successfully" Jan 14 13:07:13.570198 containerd[1722]: time="2025-01-14T13:07:13.570176691Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" returns successfully" Jan 14 13:07:13.570751 containerd[1722]: time="2025-01-14T13:07:13.570727498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:1,}" Jan 14 13:07:13.572562 kubelet[3426]: I0114 13:07:13.572063 3426 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6" Jan 14 13:07:13.573385 containerd[1722]: time="2025-01-14T13:07:13.573361230Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" Jan 14 13:07:13.573808 containerd[1722]: time="2025-01-14T13:07:13.573668934Z" level=info msg="Ensure that sandbox 893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6 in task-service has been cleanup successfully" Jan 14 13:07:13.574113 containerd[1722]: time="2025-01-14T13:07:13.574029638Z" level=info msg="TearDown network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" successfully" Jan 14 13:07:13.574113 containerd[1722]: time="2025-01-14T13:07:13.574050238Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" returns successfully" Jan 14 13:07:13.575327 containerd[1722]: time="2025-01-14T13:07:13.574541844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:1,}" Jan 14 13:07:13.626984 systemd[1]: run-netns-cni\x2d2fc91a85\x2d709d\x2d84ef\x2de342\x2dd08efa71d56e.mount: Deactivated successfully. Jan 14 13:07:13.627144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4-shm.mount: Deactivated successfully. Jan 14 13:07:13.627237 systemd[1]: run-netns-cni\x2d66711d45\x2dbe6e\x2df3d6\x2de2ff\x2d24a38b96067d.mount: Deactivated successfully. Jan 14 13:07:13.627312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4-shm.mount: Deactivated successfully. Jan 14 13:07:13.627389 systemd[1]: run-netns-cni\x2df9b3154e\x2d44bc\x2d73b6\x2dc3b4\x2da1745484ece2.mount: Deactivated successfully. 
Jan 14 13:07:13.627467 systemd[1]: run-netns-cni\x2da8cdc893\x2d8780\x2d75da\x2dd1f3\x2d39f0f54b4424.mount: Deactivated successfully. Jan 14 13:07:13.627545 systemd[1]: run-netns-cni\x2d307b96ad\x2dbde2\x2d0537\x2df461\x2d5d2a1ca02bdf.mount: Deactivated successfully. Jan 14 13:07:13.627623 systemd[1]: run-netns-cni\x2da01ea21b\x2d607b\x2dcc05\x2d7206\x2d66579b17f9d2.mount: Deactivated successfully. Jan 14 13:07:13.997070 containerd[1722]: time="2025-01-14T13:07:13.996934612Z" level=error msg="Failed to destroy network for sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:13.998045 containerd[1722]: time="2025-01-14T13:07:13.997493419Z" level=error msg="encountered an error cleaning up failed sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:13.998045 containerd[1722]: time="2025-01-14T13:07:13.997665721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:13.998869 kubelet[3426]: E0114 13:07:13.998411 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:13.998869 kubelet[3426]: E0114 13:07:13.998478 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:13.998869 kubelet[3426]: E0114 13:07:13.998516 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:13.999407 kubelet[3426]: E0114 13:07:13.998585 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" podUID="f24a6417-91e6-4261-aa71-8c79526d4ae0" Jan 14 13:07:14.101136 containerd[1722]: time="2025-01-14T13:07:14.100545579Z" level=error msg="Failed to destroy network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.101136 containerd[1722]: time="2025-01-14T13:07:14.100940984Z" level=error msg="encountered an error cleaning up failed sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.101136 containerd[1722]: time="2025-01-14T13:07:14.101019385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.101416 kubelet[3426]: E0114 13:07:14.101268 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.101416 kubelet[3426]: E0114 
13:07:14.101328 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:14.101416 kubelet[3426]: E0114 13:07:14.101354 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:14.101676 kubelet[3426]: E0114 13:07:14.101419 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" podUID="1798181d-c5b0-4589-8e4b-80c339c21d34" Jan 14 13:07:14.109876 containerd[1722]: time="2025-01-14T13:07:14.109337387Z" level=error msg="Failed to destroy network for sandbox 
\"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.109876 containerd[1722]: time="2025-01-14T13:07:14.109680291Z" level=error msg="encountered an error cleaning up failed sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.109876 containerd[1722]: time="2025-01-14T13:07:14.109757392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.110120 kubelet[3426]: E0114 13:07:14.110001 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.111241 kubelet[3426]: E0114 13:07:14.110933 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:14.111241 kubelet[3426]: E0114 13:07:14.110971 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:14.111837 kubelet[3426]: E0114 13:07:14.111766 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:07:14.117015 containerd[1722]: time="2025-01-14T13:07:14.116899079Z" level=error msg="Failed to destroy network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.117487 containerd[1722]: time="2025-01-14T13:07:14.117316884Z" level=error msg="encountered an error cleaning up 
failed sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.117487 containerd[1722]: time="2025-01-14T13:07:14.117384985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.117672 kubelet[3426]: E0114 13:07:14.117585 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.117672 kubelet[3426]: E0114 13:07:14.117637 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:14.117672 kubelet[3426]: E0114 13:07:14.117666 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:14.118411 kubelet[3426]: E0114 13:07:14.117849 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" podUID="1b81bba4-2ff3-462c-8b85-47035070eff8" Jan 14 13:07:14.127860 containerd[1722]: time="2025-01-14T13:07:14.127528709Z" level=error msg="Failed to destroy network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.128235 containerd[1722]: time="2025-01-14T13:07:14.128064116Z" level=error msg="encountered an error cleaning up failed sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 14 13:07:14.128235 containerd[1722]: time="2025-01-14T13:07:14.128137017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.128437 kubelet[3426]: E0114 13:07:14.128357 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.128437 kubelet[3426]: E0114 13:07:14.128413 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:14.129009 kubelet[3426]: E0114 13:07:14.128441 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:14.129009 kubelet[3426]: E0114 13:07:14.128511 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-txqcv" podUID="400f3e16-4883-45bf-811c-322b770038b8" Jan 14 13:07:14.132648 containerd[1722]: time="2025-01-14T13:07:14.132528070Z" level=error msg="Failed to destroy network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.133011 containerd[1722]: time="2025-01-14T13:07:14.132985076Z" level=error msg="encountered an error cleaning up failed sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.133566 containerd[1722]: time="2025-01-14T13:07:14.133122878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.133650 kubelet[3426]: E0114 13:07:14.133355 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.133650 kubelet[3426]: E0114 13:07:14.133403 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:14.133650 kubelet[3426]: E0114 13:07:14.133436 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:14.133809 kubelet[3426]: E0114 13:07:14.133511 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-g846c" podUID="5de7d327-5d34-4f9c-b581-c52c0a00d0b7" Jan 14 13:07:14.575990 kubelet[3426]: I0114 13:07:14.575954 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3" Jan 14 13:07:14.578051 containerd[1722]: time="2025-01-14T13:07:14.576721904Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\"" Jan 14 13:07:14.578051 containerd[1722]: time="2025-01-14T13:07:14.576991808Z" level=info msg="Ensure that sandbox f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3 in task-service has been cleanup successfully" Jan 14 13:07:14.580460 containerd[1722]: time="2025-01-14T13:07:14.579777842Z" level=info msg="TearDown network for sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" successfully" Jan 14 13:07:14.580460 containerd[1722]: time="2025-01-14T13:07:14.579816642Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" returns successfully" Jan 14 13:07:14.580812 containerd[1722]: time="2025-01-14T13:07:14.580776954Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" Jan 14 13:07:14.580901 containerd[1722]: time="2025-01-14T13:07:14.580871655Z" level=info msg="TearDown network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" successfully" Jan 14 13:07:14.580901 containerd[1722]: 
time="2025-01-14T13:07:14.580888355Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" returns successfully" Jan 14 13:07:14.581436 containerd[1722]: time="2025-01-14T13:07:14.581398962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:2,}" Jan 14 13:07:14.582057 kubelet[3426]: I0114 13:07:14.581548 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6" Jan 14 13:07:14.582675 containerd[1722]: time="2025-01-14T13:07:14.582647977Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\"" Jan 14 13:07:14.582977 containerd[1722]: time="2025-01-14T13:07:14.582952181Z" level=info msg="Ensure that sandbox 868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6 in task-service has been cleanup successfully" Jan 14 13:07:14.585160 containerd[1722]: time="2025-01-14T13:07:14.583167183Z" level=info msg="TearDown network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" successfully" Jan 14 13:07:14.585160 containerd[1722]: time="2025-01-14T13:07:14.583187483Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" returns successfully" Jan 14 13:07:14.585160 containerd[1722]: time="2025-01-14T13:07:14.583459787Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\"" Jan 14 13:07:14.585160 containerd[1722]: time="2025-01-14T13:07:14.583586688Z" level=info msg="TearDown network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" successfully" Jan 14 13:07:14.585160 containerd[1722]: time="2025-01-14T13:07:14.583618489Z" level=info msg="StopPodSandbox for 
\"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" returns successfully" Jan 14 13:07:14.585160 containerd[1722]: time="2025-01-14T13:07:14.584145495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:2,}" Jan 14 13:07:14.585423 kubelet[3426]: I0114 13:07:14.584834 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898" Jan 14 13:07:14.585482 containerd[1722]: time="2025-01-14T13:07:14.585325710Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\"" Jan 14 13:07:14.587315 containerd[1722]: time="2025-01-14T13:07:14.585602513Z" level=info msg="Ensure that sandbox 2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898 in task-service has been cleanup successfully" Jan 14 13:07:14.587315 containerd[1722]: time="2025-01-14T13:07:14.585862116Z" level=info msg="TearDown network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" successfully" Jan 14 13:07:14.587315 containerd[1722]: time="2025-01-14T13:07:14.585880116Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" returns successfully" Jan 14 13:07:14.587930 containerd[1722]: time="2025-01-14T13:07:14.587831640Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\"" Jan 14 13:07:14.588515 containerd[1722]: time="2025-01-14T13:07:14.588396047Z" level=info msg="TearDown network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" successfully" Jan 14 13:07:14.589893 containerd[1722]: time="2025-01-14T13:07:14.588626150Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" returns successfully" Jan 14 13:07:14.591051 
containerd[1722]: time="2025-01-14T13:07:14.590596274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:2,}" Jan 14 13:07:14.592417 kubelet[3426]: I0114 13:07:14.592371 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a" Jan 14 13:07:14.597729 containerd[1722]: time="2025-01-14T13:07:14.595981340Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" Jan 14 13:07:14.597729 containerd[1722]: time="2025-01-14T13:07:14.596199243Z" level=info msg="Ensure that sandbox 440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a in task-service has been cleanup successfully" Jan 14 13:07:14.598020 containerd[1722]: time="2025-01-14T13:07:14.597992065Z" level=info msg="TearDown network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" successfully" Jan 14 13:07:14.598092 containerd[1722]: time="2025-01-14T13:07:14.598021765Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" returns successfully" Jan 14 13:07:14.601069 containerd[1722]: time="2025-01-14T13:07:14.601042802Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:14.601161 containerd[1722]: time="2025-01-14T13:07:14.601134503Z" level=info msg="TearDown network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" successfully" Jan 14 13:07:14.601161 containerd[1722]: time="2025-01-14T13:07:14.601149703Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" returns successfully" Jan 14 13:07:14.602490 containerd[1722]: time="2025-01-14T13:07:14.602462219Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:2,}" Jan 14 13:07:14.608392 kubelet[3426]: I0114 13:07:14.608369 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd" Jan 14 13:07:14.612093 containerd[1722]: time="2025-01-14T13:07:14.611917835Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\"" Jan 14 13:07:14.612300 containerd[1722]: time="2025-01-14T13:07:14.612240339Z" level=info msg="Ensure that sandbox b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd in task-service has been cleanup successfully" Jan 14 13:07:14.613195 containerd[1722]: time="2025-01-14T13:07:14.613094049Z" level=info msg="TearDown network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" successfully" Jan 14 13:07:14.613195 containerd[1722]: time="2025-01-14T13:07:14.613123550Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" returns successfully" Jan 14 13:07:14.617213 containerd[1722]: time="2025-01-14T13:07:14.617169899Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\"" Jan 14 13:07:14.618370 containerd[1722]: time="2025-01-14T13:07:14.618327913Z" level=info msg="TearDown network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" successfully" Jan 14 13:07:14.618370 containerd[1722]: time="2025-01-14T13:07:14.618368314Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" returns successfully" Jan 14 13:07:14.619399 containerd[1722]: time="2025-01-14T13:07:14.619372726Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:2,}" Jan 14 13:07:14.623184 kubelet[3426]: I0114 13:07:14.622572 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592" Jan 14 13:07:14.625118 containerd[1722]: time="2025-01-14T13:07:14.625094696Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\"" Jan 14 13:07:14.625746 containerd[1722]: time="2025-01-14T13:07:14.625721704Z" level=info msg="Ensure that sandbox 88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592 in task-service has been cleanup successfully" Jan 14 13:07:14.626205 containerd[1722]: time="2025-01-14T13:07:14.626183509Z" level=info msg="TearDown network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" successfully" Jan 14 13:07:14.626887 containerd[1722]: time="2025-01-14T13:07:14.626863718Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" returns successfully" Jan 14 13:07:14.629732 containerd[1722]: time="2025-01-14T13:07:14.628324836Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" Jan 14 13:07:14.629732 containerd[1722]: time="2025-01-14T13:07:14.628415037Z" level=info msg="TearDown network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" successfully" Jan 14 13:07:14.629732 containerd[1722]: time="2025-01-14T13:07:14.628427637Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" returns successfully" Jan 14 13:07:14.630404 containerd[1722]: time="2025-01-14T13:07:14.630348960Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:2,}" Jan 14 13:07:14.630900 systemd[1]: run-netns-cni\x2dd0d3893c\x2dfd60\x2df088\x2db117\x2d8ea869fbc402.mount: Deactivated successfully. Jan 14 13:07:14.631039 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a-shm.mount: Deactivated successfully. Jan 14 13:07:14.631132 systemd[1]: run-netns-cni\x2dfe638f19\x2d2cc3\x2d87e8\x2d0aea\x2d687ae2528c6d.mount: Deactivated successfully. Jan 14 13:07:14.631208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898-shm.mount: Deactivated successfully. Jan 14 13:07:14.631323 systemd[1]: run-netns-cni\x2d341b7c55\x2de0c5\x2dd2c5\x2d1c8e\x2d4e3eb1f06be5.mount: Deactivated successfully. Jan 14 13:07:14.631407 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3-shm.mount: Deactivated successfully. 
Jan 14 13:07:14.909173 containerd[1722]: time="2025-01-14T13:07:14.908198959Z" level=error msg="Failed to destroy network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.910818 containerd[1722]: time="2025-01-14T13:07:14.909974781Z" level=error msg="encountered an error cleaning up failed sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.913383 containerd[1722]: time="2025-01-14T13:07:14.913337222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.913999 kubelet[3426]: E0114 13:07:14.913973 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.914104 kubelet[3426]: E0114 13:07:14.914047 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:14.914104 kubelet[3426]: E0114 13:07:14.914076 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:14.914187 kubelet[3426]: E0114 13:07:14.914149 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" podUID="f24a6417-91e6-4261-aa71-8c79526d4ae0" Jan 14 13:07:14.986186 containerd[1722]: time="2025-01-14T13:07:14.985954911Z" level=error msg="Failed to destroy network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.987268 containerd[1722]: time="2025-01-14T13:07:14.987071224Z" level=error msg="encountered an error cleaning up failed sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.987585 containerd[1722]: time="2025-01-14T13:07:14.987547130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.989134 kubelet[3426]: E0114 13:07:14.989090 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:14.989269 kubelet[3426]: E0114 13:07:14.989163 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:14.989269 kubelet[3426]: E0114 13:07:14.989194 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:14.989503 kubelet[3426]: E0114 13:07:14.989272 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:07:15.040396 containerd[1722]: time="2025-01-14T13:07:15.040343876Z" level=error msg="Failed to destroy network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.041816 containerd[1722]: time="2025-01-14T13:07:15.041592991Z" level=error msg="encountered an error cleaning up failed sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.042491 containerd[1722]: time="2025-01-14T13:07:15.042352100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.043951 kubelet[3426]: E0114 13:07:15.042973 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.043951 kubelet[3426]: E0114 13:07:15.043041 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:15.043951 kubelet[3426]: E0114 13:07:15.043070 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:15.044544 kubelet[3426]: E0114 13:07:15.043146 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-txqcv" podUID="400f3e16-4883-45bf-811c-322b770038b8" Jan 14 13:07:15.047904 containerd[1722]: time="2025-01-14T13:07:15.047619365Z" level=error msg="Failed to destroy network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.049103 containerd[1722]: time="2025-01-14T13:07:15.048976581Z" level=error msg="encountered an error cleaning up failed sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.049375 containerd[1722]: time="2025-01-14T13:07:15.049246585Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.050213 kubelet[3426]: E0114 13:07:15.049948 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.050213 kubelet[3426]: E0114 13:07:15.050041 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:15.050213 kubelet[3426]: E0114 13:07:15.050069 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:15.050402 kubelet[3426]: E0114 13:07:15.050175 3426 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" podUID="1b81bba4-2ff3-462c-8b85-47035070eff8" Jan 14 13:07:15.051793 containerd[1722]: time="2025-01-14T13:07:15.050652702Z" level=error msg="Failed to destroy network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.051793 containerd[1722]: time="2025-01-14T13:07:15.050988706Z" level=error msg="encountered an error cleaning up failed sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.051793 containerd[1722]: time="2025-01-14T13:07:15.051041407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.052150 kubelet[3426]: E0114 13:07:15.051233 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.052150 kubelet[3426]: E0114 13:07:15.051279 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:15.052150 kubelet[3426]: E0114 13:07:15.051308 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:15.052286 kubelet[3426]: E0114 13:07:15.051384 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-g846c" podUID="5de7d327-5d34-4f9c-b581-c52c0a00d0b7" Jan 14 13:07:15.070775 containerd[1722]: time="2025-01-14T13:07:15.068473520Z" level=error msg="Failed to destroy network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.070775 containerd[1722]: time="2025-01-14T13:07:15.068841625Z" level=error msg="encountered an error cleaning up failed sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.070775 containerd[1722]: time="2025-01-14T13:07:15.068911625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.071072 kubelet[3426]: E0114 13:07:15.069129 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.071072 kubelet[3426]: E0114 13:07:15.069183 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:15.071072 kubelet[3426]: E0114 13:07:15.069217 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:15.071245 kubelet[3426]: E0114 13:07:15.069280 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" podUID="1798181d-c5b0-4589-8e4b-80c339c21d34" Jan 14 13:07:15.626366 kubelet[3426]: I0114 13:07:15.625603 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513" Jan 14 13:07:15.627707 containerd[1722]: time="2025-01-14T13:07:15.627651761Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\"" Jan 14 13:07:15.630281 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e-shm.mount: Deactivated successfully. Jan 14 13:07:15.630778 containerd[1722]: time="2025-01-14T13:07:15.630357794Z" level=info msg="Ensure that sandbox 88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513 in task-service has been cleanup successfully" Jan 14 13:07:15.630408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba-shm.mount: Deactivated successfully. 
Jan 14 13:07:15.632996 containerd[1722]: time="2025-01-14T13:07:15.632257117Z" level=info msg="TearDown network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" successfully" Jan 14 13:07:15.632996 containerd[1722]: time="2025-01-14T13:07:15.632279817Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" returns successfully" Jan 14 13:07:15.632996 containerd[1722]: time="2025-01-14T13:07:15.632838524Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\"" Jan 14 13:07:15.632996 containerd[1722]: time="2025-01-14T13:07:15.632932725Z" level=info msg="TearDown network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" successfully" Jan 14 13:07:15.632996 containerd[1722]: time="2025-01-14T13:07:15.632947325Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" returns successfully" Jan 14 13:07:15.635231 kubelet[3426]: I0114 13:07:15.633403 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba" Jan 14 13:07:15.635309 containerd[1722]: time="2025-01-14T13:07:15.634491544Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" Jan 14 13:07:15.635309 containerd[1722]: time="2025-01-14T13:07:15.634592645Z" level=info msg="TearDown network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" successfully" Jan 14 13:07:15.635309 containerd[1722]: time="2025-01-14T13:07:15.634608446Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" returns successfully" Jan 14 13:07:15.635502 containerd[1722]: time="2025-01-14T13:07:15.635480756Z" level=info msg="StopPodSandbox for 
\"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\"" Jan 14 13:07:15.636255 containerd[1722]: time="2025-01-14T13:07:15.636229965Z" level=info msg="Ensure that sandbox 2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba in task-service has been cleanup successfully" Jan 14 13:07:15.637480 containerd[1722]: time="2025-01-14T13:07:15.637453380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:3,}" Jan 14 13:07:15.639630 systemd[1]: run-netns-cni\x2d41c876ed\x2dcb73\x2d29b4\x2d2d36\x2dae8042da13d2.mount: Deactivated successfully. Jan 14 13:07:15.640379 containerd[1722]: time="2025-01-14T13:07:15.640102513Z" level=info msg="TearDown network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" successfully" Jan 14 13:07:15.640379 containerd[1722]: time="2025-01-14T13:07:15.640123913Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" returns successfully" Jan 14 13:07:15.641138 containerd[1722]: time="2025-01-14T13:07:15.641003924Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\"" Jan 14 13:07:15.641358 containerd[1722]: time="2025-01-14T13:07:15.641276927Z" level=info msg="TearDown network for sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" successfully" Jan 14 13:07:15.641358 containerd[1722]: time="2025-01-14T13:07:15.641298427Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" returns successfully" Jan 14 13:07:15.642673 kubelet[3426]: I0114 13:07:15.641672 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e" Jan 14 13:07:15.642787 containerd[1722]: time="2025-01-14T13:07:15.642320540Z" level=info 
msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" Jan 14 13:07:15.642787 containerd[1722]: time="2025-01-14T13:07:15.642282540Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\"" Jan 14 13:07:15.642787 containerd[1722]: time="2025-01-14T13:07:15.642543943Z" level=info msg="Ensure that sandbox a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e in task-service has been cleanup successfully" Jan 14 13:07:15.642991 containerd[1722]: time="2025-01-14T13:07:15.642970548Z" level=info msg="TearDown network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" successfully" Jan 14 13:07:15.643084 containerd[1722]: time="2025-01-14T13:07:15.643068449Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" returns successfully" Jan 14 13:07:15.643899 containerd[1722]: time="2025-01-14T13:07:15.643877159Z" level=info msg="TearDown network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" successfully" Jan 14 13:07:15.644044 containerd[1722]: time="2025-01-14T13:07:15.644023061Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" returns successfully" Jan 14 13:07:15.645192 containerd[1722]: time="2025-01-14T13:07:15.645076874Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\"" Jan 14 13:07:15.645192 containerd[1722]: time="2025-01-14T13:07:15.645123874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:3,}" Jan 14 13:07:15.646363 containerd[1722]: time="2025-01-14T13:07:15.646127987Z" level=info msg="TearDown network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" successfully" Jan 14 13:07:15.646363 
containerd[1722]: time="2025-01-14T13:07:15.646152187Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" returns successfully" Jan 14 13:07:15.647426 systemd[1]: run-netns-cni\x2d3ae5406e\x2d27e3\x2d5e77\x2d9a8f\x2d91cd659b505d.mount: Deactivated successfully. Jan 14 13:07:15.648193 systemd[1]: run-netns-cni\x2d2f238bac\x2dd606\x2da1a7\x2d4207\x2d401966a7e723.mount: Deactivated successfully. Jan 14 13:07:15.650947 containerd[1722]: time="2025-01-14T13:07:15.650565541Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\"" Jan 14 13:07:15.652725 containerd[1722]: time="2025-01-14T13:07:15.652479664Z" level=info msg="TearDown network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" successfully" Jan 14 13:07:15.652725 containerd[1722]: time="2025-01-14T13:07:15.652506265Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" returns successfully" Jan 14 13:07:15.654406 containerd[1722]: time="2025-01-14T13:07:15.654377887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:3,}" Jan 14 13:07:15.655017 kubelet[3426]: I0114 13:07:15.654981 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00" Jan 14 13:07:15.658342 containerd[1722]: time="2025-01-14T13:07:15.657855130Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\"" Jan 14 13:07:15.658971 containerd[1722]: time="2025-01-14T13:07:15.658943743Z" level=info msg="Ensure that sandbox 297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00 in task-service has been cleanup successfully" Jan 14 13:07:15.664590 containerd[1722]: time="2025-01-14T13:07:15.660158258Z" level=info 
msg="TearDown network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" successfully" Jan 14 13:07:15.664590 containerd[1722]: time="2025-01-14T13:07:15.660188259Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" returns successfully" Jan 14 13:07:15.664590 containerd[1722]: time="2025-01-14T13:07:15.661051869Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\"" Jan 14 13:07:15.664590 containerd[1722]: time="2025-01-14T13:07:15.661158770Z" level=info msg="TearDown network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" successfully" Jan 14 13:07:15.664590 containerd[1722]: time="2025-01-14T13:07:15.661172571Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" returns successfully" Jan 14 13:07:15.664590 containerd[1722]: time="2025-01-14T13:07:15.662856491Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\"" Jan 14 13:07:15.664590 containerd[1722]: time="2025-01-14T13:07:15.662938092Z" level=info msg="TearDown network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" successfully" Jan 14 13:07:15.664590 containerd[1722]: time="2025-01-14T13:07:15.663592900Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" returns successfully" Jan 14 13:07:15.666542 systemd[1]: run-netns-cni\x2de1ed5459\x2df154\x2db179\x2d5d17\x2d96a876343607.mount: Deactivated successfully. 
Jan 14 13:07:15.667546 kubelet[3426]: I0114 13:07:15.666601 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a" Jan 14 13:07:15.668037 containerd[1722]: time="2025-01-14T13:07:15.667993154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:3,}" Jan 14 13:07:15.669504 containerd[1722]: time="2025-01-14T13:07:15.668887665Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\"" Jan 14 13:07:15.669916 containerd[1722]: time="2025-01-14T13:07:15.669831877Z" level=info msg="Ensure that sandbox 79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a in task-service has been cleanup successfully" Jan 14 13:07:15.670581 containerd[1722]: time="2025-01-14T13:07:15.670523685Z" level=info msg="TearDown network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" successfully" Jan 14 13:07:15.670581 containerd[1722]: time="2025-01-14T13:07:15.670580186Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" returns successfully" Jan 14 13:07:15.677261 containerd[1722]: time="2025-01-14T13:07:15.674894838Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" Jan 14 13:07:15.677261 containerd[1722]: time="2025-01-14T13:07:15.674994040Z" level=info msg="TearDown network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" successfully" Jan 14 13:07:15.677261 containerd[1722]: time="2025-01-14T13:07:15.675008040Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" returns successfully" Jan 14 13:07:15.677261 containerd[1722]: time="2025-01-14T13:07:15.676282155Z" level=info msg="StopPodSandbox 
for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:15.677261 containerd[1722]: time="2025-01-14T13:07:15.676682360Z" level=info msg="TearDown network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" successfully" Jan 14 13:07:15.677261 containerd[1722]: time="2025-01-14T13:07:15.676715461Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" returns successfully" Jan 14 13:07:15.678260 kubelet[3426]: I0114 13:07:15.675525 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3" Jan 14 13:07:15.675287 systemd[1]: run-netns-cni\x2de6b34cfe\x2de979\x2d5193\x2dec63\x2d075c6ed8fd97.mount: Deactivated successfully. Jan 14 13:07:15.678412 containerd[1722]: time="2025-01-14T13:07:15.678236279Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\"" Jan 14 13:07:15.678456 containerd[1722]: time="2025-01-14T13:07:15.678441882Z" level=info msg="Ensure that sandbox 908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3 in task-service has been cleanup successfully" Jan 14 13:07:15.680297 containerd[1722]: time="2025-01-14T13:07:15.680269004Z" level=info msg="TearDown network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" successfully" Jan 14 13:07:15.680387 containerd[1722]: time="2025-01-14T13:07:15.680295205Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" returns successfully" Jan 14 13:07:15.681721 containerd[1722]: time="2025-01-14T13:07:15.680480107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:3,}" Jan 14 13:07:15.681721 containerd[1722]: time="2025-01-14T13:07:15.681177715Z" level=info 
msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\"" Jan 14 13:07:15.681721 containerd[1722]: time="2025-01-14T13:07:15.681286017Z" level=info msg="TearDown network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" successfully" Jan 14 13:07:15.681721 containerd[1722]: time="2025-01-14T13:07:15.681299317Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" returns successfully" Jan 14 13:07:15.681721 containerd[1722]: time="2025-01-14T13:07:15.681649821Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\"" Jan 14 13:07:15.682219 containerd[1722]: time="2025-01-14T13:07:15.682195928Z" level=info msg="TearDown network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" successfully" Jan 14 13:07:15.682219 containerd[1722]: time="2025-01-14T13:07:15.682219028Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" returns successfully" Jan 14 13:07:15.683171 containerd[1722]: time="2025-01-14T13:07:15.683142639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:3,}" Jan 14 13:07:15.868632 containerd[1722]: time="2025-01-14T13:07:15.868579708Z" level=error msg="Failed to destroy network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.868940 containerd[1722]: time="2025-01-14T13:07:15.868908312Z" level=error msg="encountered an error cleaning up failed sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.869019 containerd[1722]: time="2025-01-14T13:07:15.868975213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.869298 kubelet[3426]: E0114 13:07:15.869264 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:15.869392 kubelet[3426]: E0114 13:07:15.869329 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:15.869392 kubelet[3426]: E0114 13:07:15.869357 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:15.869552 kubelet[3426]: E0114 13:07:15.869518 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-txqcv" podUID="400f3e16-4883-45bf-811c-322b770038b8" Jan 14 13:07:16.001184 containerd[1722]: time="2025-01-14T13:07:16.001124229Z" level=error msg="Failed to destroy network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.002135 containerd[1722]: time="2025-01-14T13:07:16.002091441Z" level=error msg="encountered an error cleaning up failed sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.002227 containerd[1722]: time="2025-01-14T13:07:16.002183842Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.002571 kubelet[3426]: E0114 13:07:16.002456 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.002571 kubelet[3426]: E0114 13:07:16.002517 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:16.002571 kubelet[3426]: E0114 13:07:16.002550 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:16.002885 kubelet[3426]: E0114 13:07:16.002619 3426 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" podUID="f24a6417-91e6-4261-aa71-8c79526d4ae0" Jan 14 13:07:16.071108 containerd[1722]: time="2025-01-14T13:07:16.071026584Z" level=error msg="Failed to destroy network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.071641 containerd[1722]: time="2025-01-14T13:07:16.071387589Z" level=error msg="encountered an error cleaning up failed sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.071641 containerd[1722]: time="2025-01-14T13:07:16.071501890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.071845 kubelet[3426]: E0114 13:07:16.071821 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.074534 kubelet[3426]: E0114 13:07:16.071884 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:16.074534 kubelet[3426]: E0114 13:07:16.071915 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:16.074534 kubelet[3426]: E0114 13:07:16.071991 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" podUID="1798181d-c5b0-4589-8e4b-80c339c21d34" Jan 14 13:07:16.088166 containerd[1722]: time="2025-01-14T13:07:16.088114993Z" level=error msg="Failed to destroy network for sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.088507 containerd[1722]: time="2025-01-14T13:07:16.088469298Z" level=error msg="encountered an error cleaning up failed sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.088611 containerd[1722]: time="2025-01-14T13:07:16.088549899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.089035 kubelet[3426]: E0114 13:07:16.089004 3426 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.089151 kubelet[3426]: E0114 13:07:16.089072 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:16.089151 kubelet[3426]: E0114 13:07:16.089100 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:16.089252 kubelet[3426]: E0114 13:07:16.089166 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:07:16.115273 containerd[1722]: time="2025-01-14T13:07:16.115212825Z" level=error msg="Failed to destroy network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.116020 containerd[1722]: time="2025-01-14T13:07:16.115810632Z" level=error msg="encountered an error cleaning up failed sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.116020 containerd[1722]: time="2025-01-14T13:07:16.115899933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.116270 kubelet[3426]: E0114 13:07:16.116160 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 
13:07:16.116270 kubelet[3426]: E0114 13:07:16.116221 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:16.116270 kubelet[3426]: E0114 13:07:16.116253 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:16.116461 kubelet[3426]: E0114 13:07:16.116321 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" podUID="1b81bba4-2ff3-462c-8b85-47035070eff8" Jan 14 13:07:16.124665 containerd[1722]: time="2025-01-14T13:07:16.124621340Z" level=error msg="Failed to destroy network for sandbox 
\"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.125181 containerd[1722]: time="2025-01-14T13:07:16.125143846Z" level=error msg="encountered an error cleaning up failed sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.126163 containerd[1722]: time="2025-01-14T13:07:16.125332249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.126253 kubelet[3426]: E0114 13:07:16.125558 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:16.126253 kubelet[3426]: E0114 13:07:16.125654 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:16.126253 kubelet[3426]: E0114 13:07:16.125727 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:16.126384 kubelet[3426]: E0114 13:07:16.125805 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-g846c" podUID="5de7d327-5d34-4f9c-b581-c52c0a00d0b7" Jan 14 13:07:16.628765 systemd[1]: run-netns-cni\x2d046373a8\x2d720d\x2dadcb\x2de25e\x2d968ec667be78.mount: Deactivated successfully. 
Jan 14 13:07:16.680453 kubelet[3426]: I0114 13:07:16.680417 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60" Jan 14 13:07:16.681668 containerd[1722]: time="2025-01-14T13:07:16.681631254Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\"" Jan 14 13:07:16.682164 containerd[1722]: time="2025-01-14T13:07:16.681927058Z" level=info msg="Ensure that sandbox c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60 in task-service has been cleanup successfully" Jan 14 13:07:16.683046 containerd[1722]: time="2025-01-14T13:07:16.682974971Z" level=info msg="TearDown network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" successfully" Jan 14 13:07:16.683046 containerd[1722]: time="2025-01-14T13:07:16.683008171Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" returns successfully" Jan 14 13:07:16.685196 containerd[1722]: time="2025-01-14T13:07:16.683828881Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\"" Jan 14 13:07:16.685196 containerd[1722]: time="2025-01-14T13:07:16.683915482Z" level=info msg="TearDown network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" successfully" Jan 14 13:07:16.685196 containerd[1722]: time="2025-01-14T13:07:16.683928382Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" returns successfully" Jan 14 13:07:16.685196 containerd[1722]: time="2025-01-14T13:07:16.684488789Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\"" Jan 14 13:07:16.685196 containerd[1722]: time="2025-01-14T13:07:16.684573390Z" level=info msg="TearDown network for sandbox 
\"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" successfully" Jan 14 13:07:16.685196 containerd[1722]: time="2025-01-14T13:07:16.684586390Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" returns successfully" Jan 14 13:07:16.685534 kubelet[3426]: I0114 13:07:16.684724 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1" Jan 14 13:07:16.687824 containerd[1722]: time="2025-01-14T13:07:16.685614103Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\"" Jan 14 13:07:16.687824 containerd[1722]: time="2025-01-14T13:07:16.686052908Z" level=info msg="TearDown network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" successfully" Jan 14 13:07:16.687824 containerd[1722]: time="2025-01-14T13:07:16.686066908Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" returns successfully" Jan 14 13:07:16.688128 containerd[1722]: time="2025-01-14T13:07:16.687997832Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\"" Jan 14 13:07:16.689727 containerd[1722]: time="2025-01-14T13:07:16.688213435Z" level=info msg="Ensure that sandbox 32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1 in task-service has been cleanup successfully" Jan 14 13:07:16.689727 containerd[1722]: time="2025-01-14T13:07:16.688880443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:4,}" Jan 14 13:07:16.689727 containerd[1722]: time="2025-01-14T13:07:16.689402149Z" level=info msg="TearDown network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" successfully" Jan 14 13:07:16.689727 
containerd[1722]: time="2025-01-14T13:07:16.689419549Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" returns successfully" Jan 14 13:07:16.690772 containerd[1722]: time="2025-01-14T13:07:16.690024257Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\"" Jan 14 13:07:16.690772 containerd[1722]: time="2025-01-14T13:07:16.690117058Z" level=info msg="TearDown network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" successfully" Jan 14 13:07:16.690772 containerd[1722]: time="2025-01-14T13:07:16.690133758Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" returns successfully" Jan 14 13:07:16.690610 systemd[1]: run-netns-cni\x2d4b530ae1\x2dc4c1\x2dbac1\x2d44cf\x2d0818021db82f.mount: Deactivated successfully. Jan 14 13:07:16.692756 containerd[1722]: time="2025-01-14T13:07:16.692734590Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" Jan 14 13:07:16.692845 containerd[1722]: time="2025-01-14T13:07:16.692823691Z" level=info msg="TearDown network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" successfully" Jan 14 13:07:16.692845 containerd[1722]: time="2025-01-14T13:07:16.692839291Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" returns successfully" Jan 14 13:07:16.694922 kubelet[3426]: I0114 13:07:16.693051 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db" Jan 14 13:07:16.695071 containerd[1722]: time="2025-01-14T13:07:16.694524912Z" level=info msg="StopPodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\"" Jan 14 13:07:16.695071 containerd[1722]: time="2025-01-14T13:07:16.694779915Z" level=info 
msg="Ensure that sandbox ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db in task-service has been cleanup successfully" Jan 14 13:07:16.696804 containerd[1722]: time="2025-01-14T13:07:16.695413823Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:16.696804 containerd[1722]: time="2025-01-14T13:07:16.695510024Z" level=info msg="TearDown network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" successfully" Jan 14 13:07:16.696804 containerd[1722]: time="2025-01-14T13:07:16.695523924Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" returns successfully" Jan 14 13:07:16.696804 containerd[1722]: time="2025-01-14T13:07:16.696110431Z" level=info msg="TearDown network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" successfully" Jan 14 13:07:16.696804 containerd[1722]: time="2025-01-14T13:07:16.696130931Z" level=info msg="StopPodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" returns successfully" Jan 14 13:07:16.696804 containerd[1722]: time="2025-01-14T13:07:16.696499236Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\"" Jan 14 13:07:16.696804 containerd[1722]: time="2025-01-14T13:07:16.696582637Z" level=info msg="TearDown network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" successfully" Jan 14 13:07:16.696804 containerd[1722]: time="2025-01-14T13:07:16.696595537Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" returns successfully" Jan 14 13:07:16.699682 containerd[1722]: time="2025-01-14T13:07:16.696681338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:4,}" Jan 14 
13:07:16.699682 containerd[1722]: time="2025-01-14T13:07:16.698945166Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\"" Jan 14 13:07:16.699682 containerd[1722]: time="2025-01-14T13:07:16.699030367Z" level=info msg="TearDown network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" successfully" Jan 14 13:07:16.699682 containerd[1722]: time="2025-01-14T13:07:16.699044567Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" returns successfully" Jan 14 13:07:16.701126 containerd[1722]: time="2025-01-14T13:07:16.701088892Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\"" Jan 14 13:07:16.702605 kubelet[3426]: I0114 13:07:16.701451 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447" Jan 14 13:07:16.702682 containerd[1722]: time="2025-01-14T13:07:16.702416908Z" level=info msg="TearDown network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" successfully" Jan 14 13:07:16.702682 containerd[1722]: time="2025-01-14T13:07:16.702536710Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" returns successfully" Jan 14 13:07:16.701570 systemd[1]: run-netns-cni\x2d09cbedc7\x2d66e8\x2df01b\x2dfa42\x2d1217c0949346.mount: Deactivated successfully. Jan 14 13:07:16.701818 systemd[1]: run-netns-cni\x2dd342c675\x2dc555\x2d963b\x2d73ab\x2d9cadde9f2d27.mount: Deactivated successfully. 
Jan 14 13:07:16.706472 containerd[1722]: time="2025-01-14T13:07:16.705413445Z" level=info msg="StopPodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\"" Jan 14 13:07:16.706472 containerd[1722]: time="2025-01-14T13:07:16.705648148Z" level=info msg="Ensure that sandbox 72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447 in task-service has been cleanup successfully" Jan 14 13:07:16.707455 containerd[1722]: time="2025-01-14T13:07:16.706624460Z" level=info msg="TearDown network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" successfully" Jan 14 13:07:16.707455 containerd[1722]: time="2025-01-14T13:07:16.706648260Z" level=info msg="StopPodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" returns successfully" Jan 14 13:07:16.709148 containerd[1722]: time="2025-01-14T13:07:16.707940276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:4,}" Jan 14 13:07:16.709431 containerd[1722]: time="2025-01-14T13:07:16.709403594Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\"" Jan 14 13:07:16.709869 containerd[1722]: time="2025-01-14T13:07:16.709621396Z" level=info msg="TearDown network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" successfully" Jan 14 13:07:16.709869 containerd[1722]: time="2025-01-14T13:07:16.709641497Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" returns successfully" Jan 14 13:07:16.710833 containerd[1722]: time="2025-01-14T13:07:16.710634909Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\"" Jan 14 13:07:16.711314 containerd[1722]: time="2025-01-14T13:07:16.711195816Z" level=info msg="TearDown network for sandbox 
\"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" successfully" Jan 14 13:07:16.711314 containerd[1722]: time="2025-01-14T13:07:16.711216116Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" returns successfully" Jan 14 13:07:16.712471 containerd[1722]: time="2025-01-14T13:07:16.712210728Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" Jan 14 13:07:16.713041 containerd[1722]: time="2025-01-14T13:07:16.712890736Z" level=info msg="TearDown network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" successfully" Jan 14 13:07:16.713041 containerd[1722]: time="2025-01-14T13:07:16.712912837Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" returns successfully" Jan 14 13:07:16.713669 kubelet[3426]: I0114 13:07:16.713440 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833" Jan 14 13:07:16.713840 systemd[1]: run-netns-cni\x2d2c67987e\x2d910f\x2d669d\x2ddea6\x2dfba9ae04f940.mount: Deactivated successfully. 
Jan 14 13:07:16.716112 containerd[1722]: time="2025-01-14T13:07:16.715984674Z" level=info msg="StopPodSandbox for \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\"" Jan 14 13:07:16.716394 containerd[1722]: time="2025-01-14T13:07:16.716371479Z" level=info msg="Ensure that sandbox e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833 in task-service has been cleanup successfully" Jan 14 13:07:16.716750 containerd[1722]: time="2025-01-14T13:07:16.716726183Z" level=info msg="TearDown network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" successfully" Jan 14 13:07:16.717217 containerd[1722]: time="2025-01-14T13:07:16.716829385Z" level=info msg="StopPodSandbox for \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" returns successfully" Jan 14 13:07:16.717217 containerd[1722]: time="2025-01-14T13:07:16.716930086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:4,}" Jan 14 13:07:16.717898 containerd[1722]: time="2025-01-14T13:07:16.717863697Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\"" Jan 14 13:07:16.718109 containerd[1722]: time="2025-01-14T13:07:16.718089700Z" level=info msg="TearDown network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" successfully" Jan 14 13:07:16.718306 containerd[1722]: time="2025-01-14T13:07:16.718191301Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" returns successfully" Jan 14 13:07:16.719054 containerd[1722]: time="2025-01-14T13:07:16.719031812Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\"" Jan 14 13:07:16.719298 containerd[1722]: time="2025-01-14T13:07:16.719223414Z" level=info msg="TearDown network for sandbox 
\"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" successfully" Jan 14 13:07:16.719298 containerd[1722]: time="2025-01-14T13:07:16.719243814Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" returns successfully" Jan 14 13:07:16.719788 containerd[1722]: time="2025-01-14T13:07:16.719625619Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" Jan 14 13:07:16.719788 containerd[1722]: time="2025-01-14T13:07:16.719724520Z" level=info msg="TearDown network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" successfully" Jan 14 13:07:16.719788 containerd[1722]: time="2025-01-14T13:07:16.719740520Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" returns successfully" Jan 14 13:07:16.720209 containerd[1722]: time="2025-01-14T13:07:16.720159225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:4,}" Jan 14 13:07:16.720406 kubelet[3426]: I0114 13:07:16.720313 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3" Jan 14 13:07:16.721026 containerd[1722]: time="2025-01-14T13:07:16.720943135Z" level=info msg="StopPodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\"" Jan 14 13:07:16.721268 containerd[1722]: time="2025-01-14T13:07:16.721214338Z" level=info msg="Ensure that sandbox 52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3 in task-service has been cleanup successfully" Jan 14 13:07:16.721599 containerd[1722]: time="2025-01-14T13:07:16.721566743Z" level=info msg="TearDown network for sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" successfully" Jan 14 13:07:16.721811 
containerd[1722]: time="2025-01-14T13:07:16.721710944Z" level=info msg="StopPodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" returns successfully" Jan 14 13:07:16.722215 containerd[1722]: time="2025-01-14T13:07:16.722106549Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\"" Jan 14 13:07:16.722493 containerd[1722]: time="2025-01-14T13:07:16.722474654Z" level=info msg="TearDown network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" successfully" Jan 14 13:07:16.722662 containerd[1722]: time="2025-01-14T13:07:16.722548555Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" returns successfully" Jan 14 13:07:16.722847 containerd[1722]: time="2025-01-14T13:07:16.722823558Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\"" Jan 14 13:07:16.722923 containerd[1722]: time="2025-01-14T13:07:16.722909359Z" level=info msg="TearDown network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" successfully" Jan 14 13:07:16.722966 containerd[1722]: time="2025-01-14T13:07:16.722923759Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" returns successfully" Jan 14 13:07:16.723572 containerd[1722]: time="2025-01-14T13:07:16.723400665Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\"" Jan 14 13:07:16.723572 containerd[1722]: time="2025-01-14T13:07:16.723502266Z" level=info msg="TearDown network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" successfully" Jan 14 13:07:16.723572 containerd[1722]: time="2025-01-14T13:07:16.723518066Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" returns successfully" Jan 14 13:07:16.724200 
containerd[1722]: time="2025-01-14T13:07:16.724176675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:4,}" Jan 14 13:07:17.627219 systemd[1]: run-netns-cni\x2d83967d5f\x2df2d8\x2d2043\x2dbfad\x2d39c1da68b255.mount: Deactivated successfully. Jan 14 13:07:17.627345 systemd[1]: run-netns-cni\x2d62a856da\x2d23b5\x2d48c0\x2db656\x2d447ce5eb6127.mount: Deactivated successfully. Jan 14 13:07:22.927140 containerd[1722]: time="2025-01-14T13:07:22.926991051Z" level=error msg="Failed to destroy network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:22.930786 containerd[1722]: time="2025-01-14T13:07:22.930072385Z" level=error msg="encountered an error cleaning up failed sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:22.931818 containerd[1722]: time="2025-01-14T13:07:22.931001495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:22.933546 kubelet[3426]: E0114 13:07:22.932146 3426 remote_runtime.go:193] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:22.933546 kubelet[3426]: E0114 13:07:22.932226 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:22.933546 kubelet[3426]: E0114 13:07:22.932258 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:22.932474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254-shm.mount: Deactivated successfully. 
Jan 14 13:07:22.934235 kubelet[3426]: E0114 13:07:22.932328 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" podUID="1b81bba4-2ff3-462c-8b85-47035070eff8" Jan 14 13:07:23.198650 containerd[1722]: time="2025-01-14T13:07:23.197296330Z" level=error msg="Failed to destroy network for sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.198650 containerd[1722]: time="2025-01-14T13:07:23.197905437Z" level=error msg="encountered an error cleaning up failed sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.198650 containerd[1722]: time="2025-01-14T13:07:23.198001538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.199489 kubelet[3426]: E0114 13:07:23.198262 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.199489 kubelet[3426]: E0114 13:07:23.198323 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:23.199489 kubelet[3426]: E0114 13:07:23.198350 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:23.199781 kubelet[3426]: E0114 13:07:23.198418 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-g846c" podUID="5de7d327-5d34-4f9c-b581-c52c0a00d0b7" Jan 14 13:07:23.306394 containerd[1722]: time="2025-01-14T13:07:23.306342932Z" level=error msg="Failed to destroy network for sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.307148 containerd[1722]: time="2025-01-14T13:07:23.306960739Z" level=error msg="encountered an error cleaning up failed sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.307148 containerd[1722]: time="2025-01-14T13:07:23.307040240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.307347 kubelet[3426]: E0114 13:07:23.307308 3426 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.307410 kubelet[3426]: E0114 13:07:23.307369 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:23.307410 kubelet[3426]: E0114 13:07:23.307397 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:23.307495 kubelet[3426]: E0114 13:07:23.307464 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" podUID="1798181d-c5b0-4589-8e4b-80c339c21d34" Jan 14 13:07:23.320837 containerd[1722]: time="2025-01-14T13:07:23.320783391Z" level=error msg="Failed to destroy network for sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.321267 containerd[1722]: time="2025-01-14T13:07:23.321146195Z" level=error msg="encountered an error cleaning up failed sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.321267 containerd[1722]: time="2025-01-14T13:07:23.321225996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.321522 kubelet[3426]: E0114 13:07:23.321469 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 14 13:07:23.321592 kubelet[3426]: E0114 13:07:23.321555 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:23.321592 kubelet[3426]: E0114 13:07:23.321582 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:23.321673 kubelet[3426]: E0114 13:07:23.321651 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-txqcv" podUID="400f3e16-4883-45bf-811c-322b770038b8" Jan 14 13:07:23.378965 containerd[1722]: time="2025-01-14T13:07:23.378739630Z" level=error msg="Failed to destroy network for sandbox 
\"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.379525 containerd[1722]: time="2025-01-14T13:07:23.379485438Z" level=error msg="encountered an error cleaning up failed sandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.379886 containerd[1722]: time="2025-01-14T13:07:23.379775741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.381489 kubelet[3426]: E0114 13:07:23.381015 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.381489 kubelet[3426]: E0114 13:07:23.381076 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:23.381489 kubelet[3426]: E0114 13:07:23.381103 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:23.381682 kubelet[3426]: E0114 13:07:23.381175 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" podUID="f24a6417-91e6-4261-aa71-8c79526d4ae0" Jan 14 13:07:23.415615 containerd[1722]: time="2025-01-14T13:07:23.415145231Z" level=error msg="Failed to destroy network for sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.415615 
containerd[1722]: time="2025-01-14T13:07:23.415482035Z" level=error msg="encountered an error cleaning up failed sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.415615 containerd[1722]: time="2025-01-14T13:07:23.415549536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.416143 kubelet[3426]: E0114 13:07:23.416118 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:23.416720 kubelet[3426]: E0114 13:07:23.416303 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:23.416720 kubelet[3426]: E0114 13:07:23.416339 3426 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:23.416720 kubelet[3426]: E0114 13:07:23.416415 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:07:23.499170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba-shm.mount: Deactivated successfully. 
Jan 14 13:07:23.739918 kubelet[3426]: I0114 13:07:23.739120 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254" Jan 14 13:07:23.743720 containerd[1722]: time="2025-01-14T13:07:23.741049123Z" level=info msg="StopPodSandbox for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\"" Jan 14 13:07:23.743720 containerd[1722]: time="2025-01-14T13:07:23.741290726Z" level=info msg="Ensure that sandbox dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254 in task-service has been cleanup successfully" Jan 14 13:07:23.744325 containerd[1722]: time="2025-01-14T13:07:23.744296559Z" level=info msg="TearDown network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" successfully" Jan 14 13:07:23.744515 containerd[1722]: time="2025-01-14T13:07:23.744441861Z" level=info msg="StopPodSandbox for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" returns successfully" Jan 14 13:07:23.745356 containerd[1722]: time="2025-01-14T13:07:23.745313870Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\"" Jan 14 13:07:23.745438 containerd[1722]: time="2025-01-14T13:07:23.745405971Z" level=info msg="TearDown network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" successfully" Jan 14 13:07:23.745438 containerd[1722]: time="2025-01-14T13:07:23.745420072Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" returns successfully" Jan 14 13:07:23.746314 systemd[1]: run-netns-cni\x2d4b456a11\x2d210e\x2d004b\x2d332c\x2d02ffcd105629.mount: Deactivated successfully. 
Jan 14 13:07:23.749006 containerd[1722]: time="2025-01-14T13:07:23.748562006Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\"" Jan 14 13:07:23.749006 containerd[1722]: time="2025-01-14T13:07:23.748680707Z" level=info msg="TearDown network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" successfully" Jan 14 13:07:23.749006 containerd[1722]: time="2025-01-14T13:07:23.748728708Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" returns successfully" Jan 14 13:07:23.749837 containerd[1722]: time="2025-01-14T13:07:23.749560417Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\"" Jan 14 13:07:23.750265 containerd[1722]: time="2025-01-14T13:07:23.750191324Z" level=info msg="TearDown network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" successfully" Jan 14 13:07:23.750265 containerd[1722]: time="2025-01-14T13:07:23.750212724Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" returns successfully" Jan 14 13:07:23.752169 containerd[1722]: time="2025-01-14T13:07:23.752144046Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\"" Jan 14 13:07:23.752425 containerd[1722]: time="2025-01-14T13:07:23.752327748Z" level=info msg="TearDown network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" successfully" Jan 14 13:07:23.752425 containerd[1722]: time="2025-01-14T13:07:23.752345748Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" returns successfully" Jan 14 13:07:23.753683 containerd[1722]: time="2025-01-14T13:07:23.753321759Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:5,}" Jan 14 13:07:23.755232 kubelet[3426]: I0114 13:07:23.755211 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba" Jan 14 13:07:23.759174 containerd[1722]: time="2025-01-14T13:07:23.756785897Z" level=info msg="StopPodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\"" Jan 14 13:07:23.759174 containerd[1722]: time="2025-01-14T13:07:23.757013299Z" level=info msg="Ensure that sandbox 13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba in task-service has been cleanup successfully" Jan 14 13:07:23.760328 containerd[1722]: time="2025-01-14T13:07:23.760299036Z" level=info msg="TearDown network for sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" successfully" Jan 14 13:07:23.760415 containerd[1722]: time="2025-01-14T13:07:23.760327236Z" level=info msg="StopPodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" returns successfully" Jan 14 13:07:23.762432 systemd[1]: run-netns-cni\x2d77f6cfeb\x2d7d3b\x2ddd73\x2d0f0d\x2d161ceb030082.mount: Deactivated successfully. 
Jan 14 13:07:23.765357 containerd[1722]: time="2025-01-14T13:07:23.765307191Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\"" Jan 14 13:07:23.765640 containerd[1722]: time="2025-01-14T13:07:23.765583694Z" level=info msg="TearDown network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" successfully" Jan 14 13:07:23.765640 containerd[1722]: time="2025-01-14T13:07:23.765603794Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" returns successfully" Jan 14 13:07:23.766835 containerd[1722]: time="2025-01-14T13:07:23.766449503Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\"" Jan 14 13:07:23.766835 containerd[1722]: time="2025-01-14T13:07:23.766721606Z" level=info msg="TearDown network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" successfully" Jan 14 13:07:23.766835 containerd[1722]: time="2025-01-14T13:07:23.766736806Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" returns successfully" Jan 14 13:07:23.767895 containerd[1722]: time="2025-01-14T13:07:23.767711617Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" Jan 14 13:07:23.767895 containerd[1722]: time="2025-01-14T13:07:23.767809318Z" level=info msg="TearDown network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" successfully" Jan 14 13:07:23.767895 containerd[1722]: time="2025-01-14T13:07:23.767823418Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" returns successfully" Jan 14 13:07:23.768517 kubelet[3426]: I0114 13:07:23.768495 3426 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad" Jan 14 13:07:23.769193 containerd[1722]: time="2025-01-14T13:07:23.769006031Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:23.769626 containerd[1722]: time="2025-01-14T13:07:23.769605338Z" level=info msg="TearDown network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" successfully" Jan 14 13:07:23.770375 containerd[1722]: time="2025-01-14T13:07:23.770345946Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" returns successfully" Jan 14 13:07:23.770831 containerd[1722]: time="2025-01-14T13:07:23.770811051Z" level=info msg="StopPodSandbox for \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\"" Jan 14 13:07:23.771181 containerd[1722]: time="2025-01-14T13:07:23.771159355Z" level=info msg="Ensure that sandbox 60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad in task-service has been cleanup successfully" Jan 14 13:07:23.774891 containerd[1722]: time="2025-01-14T13:07:23.774863696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:5,}" Jan 14 13:07:23.775182 systemd[1]: run-netns-cni\x2d84d5735d\x2d5caf\x2d7a04\x2d1e7d\x2ded4f70df2d5e.mount: Deactivated successfully. 
Jan 14 13:07:23.776424 containerd[1722]: time="2025-01-14T13:07:23.775504803Z" level=info msg="TearDown network for sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\" successfully" Jan 14 13:07:23.776424 containerd[1722]: time="2025-01-14T13:07:23.775529003Z" level=info msg="StopPodSandbox for \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\" returns successfully" Jan 14 13:07:23.777776 containerd[1722]: time="2025-01-14T13:07:23.777597426Z" level=info msg="StopPodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\"" Jan 14 13:07:23.778312 containerd[1722]: time="2025-01-14T13:07:23.778216233Z" level=info msg="TearDown network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" successfully" Jan 14 13:07:23.778312 containerd[1722]: time="2025-01-14T13:07:23.778260033Z" level=info msg="StopPodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" returns successfully" Jan 14 13:07:23.779051 containerd[1722]: time="2025-01-14T13:07:23.778900341Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\"" Jan 14 13:07:23.779299 containerd[1722]: time="2025-01-14T13:07:23.779086243Z" level=info msg="TearDown network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" successfully" Jan 14 13:07:23.779359 containerd[1722]: time="2025-01-14T13:07:23.779300545Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" returns successfully" Jan 14 13:07:23.780991 containerd[1722]: time="2025-01-14T13:07:23.780650860Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\"" Jan 14 13:07:23.780991 containerd[1722]: time="2025-01-14T13:07:23.780784161Z" level=info msg="TearDown network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" successfully" Jan 
14 13:07:23.780991 containerd[1722]: time="2025-01-14T13:07:23.780798961Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" returns successfully" Jan 14 13:07:23.781426 containerd[1722]: time="2025-01-14T13:07:23.781402168Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\"" Jan 14 13:07:23.781967 containerd[1722]: time="2025-01-14T13:07:23.781893274Z" level=info msg="TearDown network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" successfully" Jan 14 13:07:23.781967 containerd[1722]: time="2025-01-14T13:07:23.781913174Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" returns successfully" Jan 14 13:07:23.782178 kubelet[3426]: I0114 13:07:23.781914 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3" Jan 14 13:07:23.783174 containerd[1722]: time="2025-01-14T13:07:23.783055486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:5,}" Jan 14 13:07:23.785031 containerd[1722]: time="2025-01-14T13:07:23.785010908Z" level=info msg="StopPodSandbox for \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\"" Jan 14 13:07:23.785322 containerd[1722]: time="2025-01-14T13:07:23.785291011Z" level=info msg="Ensure that sandbox 35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3 in task-service has been cleanup successfully" Jan 14 13:07:23.785626 containerd[1722]: time="2025-01-14T13:07:23.785604714Z" level=info msg="TearDown network for sandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\" successfully" Jan 14 13:07:23.786948 containerd[1722]: time="2025-01-14T13:07:23.786926729Z" level=info msg="StopPodSandbox for 
\"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\" returns successfully" Jan 14 13:07:23.788058 kubelet[3426]: I0114 13:07:23.788031 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90" Jan 14 13:07:23.791720 containerd[1722]: time="2025-01-14T13:07:23.788481646Z" level=info msg="StopPodSandbox for \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\"" Jan 14 13:07:23.791720 containerd[1722]: time="2025-01-14T13:07:23.791378378Z" level=info msg="Ensure that sandbox 94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90 in task-service has been cleanup successfully" Jan 14 13:07:23.792823 systemd[1]: run-netns-cni\x2d23673862\x2d5b9a\x2de602\x2d5c8f\x2db9694ac626e9.mount: Deactivated successfully. Jan 14 13:07:23.793213 containerd[1722]: time="2025-01-14T13:07:23.793019296Z" level=info msg="TearDown network for sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\" successfully" Jan 14 13:07:23.793213 containerd[1722]: time="2025-01-14T13:07:23.793055097Z" level=info msg="StopPodSandbox for \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\" returns successfully" Jan 14 13:07:23.793480 containerd[1722]: time="2025-01-14T13:07:23.793419201Z" level=info msg="StopPodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\"" Jan 14 13:07:23.794724 containerd[1722]: time="2025-01-14T13:07:23.793784405Z" level=info msg="TearDown network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" successfully" Jan 14 13:07:23.794724 containerd[1722]: time="2025-01-14T13:07:23.793805305Z" level=info msg="StopPodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" returns successfully" Jan 14 13:07:23.796139 containerd[1722]: time="2025-01-14T13:07:23.795305521Z" level=info msg="StopPodSandbox for 
\"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\"" Jan 14 13:07:23.796139 containerd[1722]: time="2025-01-14T13:07:23.795406222Z" level=info msg="TearDown network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" successfully" Jan 14 13:07:23.796139 containerd[1722]: time="2025-01-14T13:07:23.795420723Z" level=info msg="StopPodSandbox for \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" returns successfully" Jan 14 13:07:23.796139 containerd[1722]: time="2025-01-14T13:07:23.795546224Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\"" Jan 14 13:07:23.796139 containerd[1722]: time="2025-01-14T13:07:23.795624525Z" level=info msg="TearDown network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" successfully" Jan 14 13:07:23.796139 containerd[1722]: time="2025-01-14T13:07:23.795636825Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" returns successfully" Jan 14 13:07:23.797680 containerd[1722]: time="2025-01-14T13:07:23.797657947Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\"" Jan 14 13:07:23.797935 containerd[1722]: time="2025-01-14T13:07:23.797913850Z" level=info msg="TearDown network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" successfully" Jan 14 13:07:23.798380 containerd[1722]: time="2025-01-14T13:07:23.798264954Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" returns successfully" Jan 14 13:07:23.798838 containerd[1722]: time="2025-01-14T13:07:23.798817960Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" Jan 14 13:07:23.799172 containerd[1722]: time="2025-01-14T13:07:23.799014762Z" level=info msg="TearDown network for sandbox 
\"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" successfully" Jan 14 13:07:23.799172 containerd[1722]: time="2025-01-14T13:07:23.799033162Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" returns successfully" Jan 14 13:07:23.799559 containerd[1722]: time="2025-01-14T13:07:23.797787849Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\"" Jan 14 13:07:23.799879 containerd[1722]: time="2025-01-14T13:07:23.799804971Z" level=info msg="TearDown network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" successfully" Jan 14 13:07:23.799879 containerd[1722]: time="2025-01-14T13:07:23.799827271Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" returns successfully" Jan 14 13:07:23.801231 containerd[1722]: time="2025-01-14T13:07:23.800981284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:5,}" Jan 14 13:07:23.802016 containerd[1722]: time="2025-01-14T13:07:23.801992895Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\"" Jan 14 13:07:23.802383 containerd[1722]: time="2025-01-14T13:07:23.802346899Z" level=info msg="TearDown network for sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" successfully" Jan 14 13:07:23.802471 kubelet[3426]: I0114 13:07:23.802389 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15" Jan 14 13:07:23.802610 containerd[1722]: time="2025-01-14T13:07:23.802559801Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" returns successfully" Jan 14 13:07:23.804723 containerd[1722]: 
time="2025-01-14T13:07:23.804284620Z" level=info msg="StopPodSandbox for \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\"" Jan 14 13:07:23.804723 containerd[1722]: time="2025-01-14T13:07:23.804508223Z" level=info msg="Ensure that sandbox 00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15 in task-service has been cleanup successfully" Jan 14 13:07:23.804723 containerd[1722]: time="2025-01-14T13:07:23.804605724Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" Jan 14 13:07:23.804938 containerd[1722]: time="2025-01-14T13:07:23.804681925Z" level=info msg="TearDown network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" successfully" Jan 14 13:07:23.804999 containerd[1722]: time="2025-01-14T13:07:23.804937528Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" returns successfully" Jan 14 13:07:23.805196 containerd[1722]: time="2025-01-14T13:07:23.805076829Z" level=info msg="TearDown network for sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\" successfully" Jan 14 13:07:23.805196 containerd[1722]: time="2025-01-14T13:07:23.805096029Z" level=info msg="StopPodSandbox for \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\" returns successfully" Jan 14 13:07:23.805684 containerd[1722]: time="2025-01-14T13:07:23.805661836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:5,}" Jan 14 13:07:23.806719 containerd[1722]: time="2025-01-14T13:07:23.805827737Z" level=info msg="StopPodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\"" Jan 14 13:07:23.806719 containerd[1722]: time="2025-01-14T13:07:23.806304343Z" level=info msg="TearDown network for sandbox 
\"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" successfully" Jan 14 13:07:23.806719 containerd[1722]: time="2025-01-14T13:07:23.806349143Z" level=info msg="StopPodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" returns successfully" Jan 14 13:07:23.806719 containerd[1722]: time="2025-01-14T13:07:23.806643846Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\"" Jan 14 13:07:23.806914 containerd[1722]: time="2025-01-14T13:07:23.806792848Z" level=info msg="TearDown network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" successfully" Jan 14 13:07:23.806914 containerd[1722]: time="2025-01-14T13:07:23.806865649Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" returns successfully" Jan 14 13:07:23.808031 containerd[1722]: time="2025-01-14T13:07:23.807436255Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\"" Jan 14 13:07:23.808031 containerd[1722]: time="2025-01-14T13:07:23.807545856Z" level=info msg="TearDown network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" successfully" Jan 14 13:07:23.808031 containerd[1722]: time="2025-01-14T13:07:23.807560956Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" returns successfully" Jan 14 13:07:23.808862 containerd[1722]: time="2025-01-14T13:07:23.808535967Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\"" Jan 14 13:07:23.809157 containerd[1722]: time="2025-01-14T13:07:23.808968272Z" level=info msg="TearDown network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" successfully" Jan 14 13:07:23.809157 containerd[1722]: time="2025-01-14T13:07:23.808989272Z" level=info msg="StopPodSandbox for 
\"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" returns successfully" Jan 14 13:07:23.810821 containerd[1722]: time="2025-01-14T13:07:23.810786392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:5,}" Jan 14 13:07:24.496208 systemd[1]: run-netns-cni\x2d429b486d\x2d4984\x2d6b9b\x2d5f07\x2d1416245c47c6.mount: Deactivated successfully. Jan 14 13:07:24.496761 systemd[1]: run-netns-cni\x2d19e6061b\x2d1511\x2df151\x2d2fd8\x2de079d7cf2241.mount: Deactivated successfully. Jan 14 13:07:25.426067 containerd[1722]: time="2025-01-14T13:07:25.426004795Z" level=error msg="Failed to destroy network for sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.426521 containerd[1722]: time="2025-01-14T13:07:25.426383299Z" level=error msg="encountered an error cleaning up failed sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.426521 containerd[1722]: time="2025-01-14T13:07:25.426458100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 
13:07:25.426785 kubelet[3426]: E0114 13:07:25.426755 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.427119 kubelet[3426]: E0114 13:07:25.426828 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:25.427119 kubelet[3426]: E0114 13:07:25.426860 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:25.427119 kubelet[3426]: E0114 13:07:25.426939 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" podUID="1b81bba4-2ff3-462c-8b85-47035070eff8" Jan 14 13:07:25.498632 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5-shm.mount: Deactivated successfully. Jan 14 13:07:25.499176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329945739.mount: Deactivated successfully. Jan 14 13:07:25.533939 containerd[1722]: time="2025-01-14T13:07:25.533882784Z" level=error msg="Failed to destroy network for sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.534599 containerd[1722]: time="2025-01-14T13:07:25.534236988Z" level=error msg="encountered an error cleaning up failed sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.534599 containerd[1722]: time="2025-01-14T13:07:25.534314489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.536979 kubelet[3426]: E0114 13:07:25.536538 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.536979 kubelet[3426]: E0114 13:07:25.536610 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:25.536979 kubelet[3426]: E0114 13:07:25.536642 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:25.537177 kubelet[3426]: E0114 13:07:25.536719 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" podUID="1798181d-c5b0-4589-8e4b-80c339c21d34" Jan 14 13:07:25.537365 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5-shm.mount: Deactivated successfully. Jan 14 13:07:25.682304 containerd[1722]: time="2025-01-14T13:07:25.682188819Z" level=error msg="Failed to destroy network for sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.686806 containerd[1722]: time="2025-01-14T13:07:25.683414533Z" level=error msg="encountered an error cleaning up failed sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.686806 containerd[1722]: time="2025-01-14T13:07:25.683500533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.686980 kubelet[3426]: E0114 
13:07:25.686136 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.686980 kubelet[3426]: E0114 13:07:25.686287 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:25.686980 kubelet[3426]: E0114 13:07:25.686383 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:25.687126 kubelet[3426]: E0114 13:07:25.686509 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-g846c" podUID="5de7d327-5d34-4f9c-b581-c52c0a00d0b7" Jan 14 13:07:25.687511 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0-shm.mount: Deactivated successfully. Jan 14 13:07:25.812314 kubelet[3426]: I0114 13:07:25.812270 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5" Jan 14 13:07:25.814638 containerd[1722]: time="2025-01-14T13:07:25.814589178Z" level=info msg="StopPodSandbox for \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\"" Jan 14 13:07:25.816649 containerd[1722]: time="2025-01-14T13:07:25.816555500Z" level=info msg="Ensure that sandbox 319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5 in task-service has been cleanup successfully" Jan 14 13:07:25.817013 containerd[1722]: time="2025-01-14T13:07:25.816880104Z" level=info msg="TearDown network for sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\" successfully" Jan 14 13:07:25.817013 containerd[1722]: time="2025-01-14T13:07:25.816903804Z" level=info msg="StopPodSandbox for \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\" returns successfully" Jan 14 13:07:25.818268 containerd[1722]: time="2025-01-14T13:07:25.818051817Z" level=info msg="StopPodSandbox for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\"" Jan 14 13:07:25.818268 containerd[1722]: time="2025-01-14T13:07:25.818141017Z" level=info msg="TearDown network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" successfully" Jan 14 13:07:25.818268 containerd[1722]: time="2025-01-14T13:07:25.818156418Z" level=info msg="StopPodSandbox for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" returns 
successfully" Jan 14 13:07:25.820712 containerd[1722]: time="2025-01-14T13:07:25.818796025Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\"" Jan 14 13:07:25.820712 containerd[1722]: time="2025-01-14T13:07:25.818881926Z" level=info msg="TearDown network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" successfully" Jan 14 13:07:25.820712 containerd[1722]: time="2025-01-14T13:07:25.818895726Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" returns successfully" Jan 14 13:07:25.822167 containerd[1722]: time="2025-01-14T13:07:25.822106761Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\"" Jan 14 13:07:25.822281 containerd[1722]: time="2025-01-14T13:07:25.822262563Z" level=info msg="TearDown network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" successfully" Jan 14 13:07:25.822336 containerd[1722]: time="2025-01-14T13:07:25.822282263Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" returns successfully" Jan 14 13:07:25.823257 containerd[1722]: time="2025-01-14T13:07:25.823233474Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\"" Jan 14 13:07:25.823351 containerd[1722]: time="2025-01-14T13:07:25.823322875Z" level=info msg="TearDown network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" successfully" Jan 14 13:07:25.823351 containerd[1722]: time="2025-01-14T13:07:25.823343575Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" returns successfully" Jan 14 13:07:25.823742 kubelet[3426]: I0114 13:07:25.823719 3426 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0" Jan 14 13:07:25.824645 containerd[1722]: time="2025-01-14T13:07:25.824621289Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\"" Jan 14 13:07:25.825362 containerd[1722]: time="2025-01-14T13:07:25.824891992Z" level=info msg="StopPodSandbox for \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\"" Jan 14 13:07:25.825362 containerd[1722]: time="2025-01-14T13:07:25.825111494Z" level=info msg="Ensure that sandbox 6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0 in task-service has been cleanup successfully" Jan 14 13:07:25.825362 containerd[1722]: time="2025-01-14T13:07:25.825311997Z" level=info msg="TearDown network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" successfully" Jan 14 13:07:25.825362 containerd[1722]: time="2025-01-14T13:07:25.825329097Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" returns successfully" Jan 14 13:07:25.825564 containerd[1722]: time="2025-01-14T13:07:25.825421898Z" level=info msg="TearDown network for sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\" successfully" Jan 14 13:07:25.825564 containerd[1722]: time="2025-01-14T13:07:25.825436398Z" level=info msg="StopPodSandbox for \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\" returns successfully" Jan 14 13:07:25.826377 containerd[1722]: time="2025-01-14T13:07:25.826305407Z" level=info msg="StopPodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\"" Jan 14 13:07:25.826665 containerd[1722]: time="2025-01-14T13:07:25.826424509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:6,}" Jan 14 13:07:25.826872 containerd[1722]: 
time="2025-01-14T13:07:25.826493510Z" level=info msg="TearDown network for sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" successfully" Jan 14 13:07:25.826872 containerd[1722]: time="2025-01-14T13:07:25.826732112Z" level=info msg="StopPodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" returns successfully" Jan 14 13:07:25.827377 containerd[1722]: time="2025-01-14T13:07:25.827340719Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\"" Jan 14 13:07:25.827468 containerd[1722]: time="2025-01-14T13:07:25.827432920Z" level=info msg="TearDown network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" successfully" Jan 14 13:07:25.827468 containerd[1722]: time="2025-01-14T13:07:25.827450420Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" returns successfully" Jan 14 13:07:25.828728 containerd[1722]: time="2025-01-14T13:07:25.828210628Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\"" Jan 14 13:07:25.828728 containerd[1722]: time="2025-01-14T13:07:25.828317130Z" level=info msg="TearDown network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" successfully" Jan 14 13:07:25.828728 containerd[1722]: time="2025-01-14T13:07:25.828331930Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" returns successfully" Jan 14 13:07:25.828903 containerd[1722]: time="2025-01-14T13:07:25.828826935Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" Jan 14 13:07:25.828944 containerd[1722]: time="2025-01-14T13:07:25.828914436Z" level=info msg="TearDown network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" successfully" Jan 14 13:07:25.828944 containerd[1722]: 
time="2025-01-14T13:07:25.828928836Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" returns successfully" Jan 14 13:07:25.829399 containerd[1722]: time="2025-01-14T13:07:25.829325641Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:25.829736 containerd[1722]: time="2025-01-14T13:07:25.829576344Z" level=info msg="TearDown network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" successfully" Jan 14 13:07:25.829736 containerd[1722]: time="2025-01-14T13:07:25.829611644Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" returns successfully" Jan 14 13:07:25.832107 containerd[1722]: time="2025-01-14T13:07:25.830416553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:6,}" Jan 14 13:07:25.832107 containerd[1722]: time="2025-01-14T13:07:25.831041160Z" level=info msg="StopPodSandbox for \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\"" Jan 14 13:07:25.832107 containerd[1722]: time="2025-01-14T13:07:25.831267862Z" level=info msg="Ensure that sandbox 401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5 in task-service has been cleanup successfully" Jan 14 13:07:25.832107 containerd[1722]: time="2025-01-14T13:07:25.831445464Z" level=info msg="TearDown network for sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\" successfully" Jan 14 13:07:25.832107 containerd[1722]: time="2025-01-14T13:07:25.831462864Z" level=info msg="StopPodSandbox for \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\" returns successfully" Jan 14 13:07:25.832107 containerd[1722]: time="2025-01-14T13:07:25.831711467Z" level=info msg="StopPodSandbox for 
\"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\"" Jan 14 13:07:25.832107 containerd[1722]: time="2025-01-14T13:07:25.831791168Z" level=info msg="TearDown network for sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\" successfully" Jan 14 13:07:25.832107 containerd[1722]: time="2025-01-14T13:07:25.831803268Z" level=info msg="StopPodSandbox for \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\" returns successfully" Jan 14 13:07:25.832107 containerd[1722]: time="2025-01-14T13:07:25.832024871Z" level=info msg="StopPodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\"" Jan 14 13:07:25.832107 containerd[1722]: time="2025-01-14T13:07:25.832108071Z" level=info msg="TearDown network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" successfully" Jan 14 13:07:25.832522 kubelet[3426]: I0114 13:07:25.830532 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5" Jan 14 13:07:25.832590 containerd[1722]: time="2025-01-14T13:07:25.832121672Z" level=info msg="StopPodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" returns successfully" Jan 14 13:07:25.832590 containerd[1722]: time="2025-01-14T13:07:25.832330474Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\"" Jan 14 13:07:25.832590 containerd[1722]: time="2025-01-14T13:07:25.832412875Z" level=info msg="TearDown network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" successfully" Jan 14 13:07:25.832590 containerd[1722]: time="2025-01-14T13:07:25.832426775Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" returns successfully" Jan 14 13:07:25.833113 containerd[1722]: time="2025-01-14T13:07:25.832646577Z" level=info msg="StopPodSandbox 
for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\"" Jan 14 13:07:25.833113 containerd[1722]: time="2025-01-14T13:07:25.832747778Z" level=info msg="TearDown network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" successfully" Jan 14 13:07:25.833113 containerd[1722]: time="2025-01-14T13:07:25.832762079Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" returns successfully" Jan 14 13:07:25.833113 containerd[1722]: time="2025-01-14T13:07:25.833057682Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\"" Jan 14 13:07:25.833283 containerd[1722]: time="2025-01-14T13:07:25.833188283Z" level=info msg="TearDown network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" successfully" Jan 14 13:07:25.833283 containerd[1722]: time="2025-01-14T13:07:25.833258484Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" returns successfully" Jan 14 13:07:25.833794 containerd[1722]: time="2025-01-14T13:07:25.833766890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:6,}" Jan 14 13:07:25.944198 containerd[1722]: time="2025-01-14T13:07:25.944064205Z" level=error msg="Failed to destroy network for sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.944548 containerd[1722]: time="2025-01-14T13:07:25.944413109Z" level=error msg="encountered an error cleaning up failed sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.944548 containerd[1722]: time="2025-01-14T13:07:25.944525011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.946952 kubelet[3426]: E0114 13:07:25.946818 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.946952 kubelet[3426]: E0114 13:07:25.946891 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:25.946952 kubelet[3426]: E0114 13:07:25.946920 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:25.947309 kubelet[3426]: E0114 13:07:25.946985 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:07:25.984188 containerd[1722]: time="2025-01-14T13:07:25.984135847Z" level=error msg="Failed to destroy network for sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.984515 containerd[1722]: time="2025-01-14T13:07:25.984476351Z" level=error msg="encountered an error cleaning up failed sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.984625 containerd[1722]: time="2025-01-14T13:07:25.984553752Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.984903 kubelet[3426]: E0114 13:07:25.984863 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:25.984999 kubelet[3426]: E0114 13:07:25.984931 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:25.984999 kubelet[3426]: E0114 13:07:25.984961 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:25.985137 kubelet[3426]: E0114 13:07:25.985104 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-txqcv" podUID="400f3e16-4883-45bf-811c-322b770038b8" Jan 14 13:07:26.080089 containerd[1722]: time="2025-01-14T13:07:26.080040614Z" level=error msg="Failed to destroy network for sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:26.080396 containerd[1722]: time="2025-01-14T13:07:26.080361318Z" level=error msg="encountered an error cleaning up failed sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:26.080484 containerd[1722]: time="2025-01-14T13:07:26.080436619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:26.080791 kubelet[3426]: E0114 13:07:26.080713 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:26.080791 kubelet[3426]: E0114 13:07:26.080793 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:26.080981 kubelet[3426]: E0114 13:07:26.080818 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:26.080981 kubelet[3426]: E0114 13:07:26.080893 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" podUID="f24a6417-91e6-4261-aa71-8c79526d4ae0" Jan 14 13:07:26.496714 containerd[1722]: time="2025-01-14T13:07:26.493806092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:26.500610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a-shm.mount: Deactivated successfully. Jan 14 13:07:26.500837 systemd[1]: run-netns-cni\x2dfc63903d\x2d2b85\x2def8a\x2d30c7\x2dbb06f6324e5e.mount: Deactivated successfully. Jan 14 13:07:26.500925 systemd[1]: run-netns-cni\x2de1002e52\x2d0cfd\x2d8854\x2df141\x2da4d46e166f70.mount: Deactivated successfully. Jan 14 13:07:26.500996 systemd[1]: run-netns-cni\x2dd4bc5ff2\x2d913a\x2d8c88\x2d67e8\x2dfa80f2a8ed11.mount: Deactivated successfully. 
Jan 14 13:07:26.837920 kubelet[3426]: I0114 13:07:26.837800 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d" Jan 14 13:07:26.840880 containerd[1722]: time="2025-01-14T13:07:26.838601373Z" level=info msg="StopPodSandbox for \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\"" Jan 14 13:07:26.840880 containerd[1722]: time="2025-01-14T13:07:26.838861976Z" level=info msg="Ensure that sandbox 2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d in task-service has been cleanup successfully" Jan 14 13:07:26.844148 containerd[1722]: time="2025-01-14T13:07:26.844113536Z" level=info msg="TearDown network for sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\" successfully" Jan 14 13:07:26.844485 containerd[1722]: time="2025-01-14T13:07:26.844144737Z" level=info msg="StopPodSandbox for \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\" returns successfully" Jan 14 13:07:26.845192 containerd[1722]: time="2025-01-14T13:07:26.845160148Z" level=info msg="StopPodSandbox for \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\"" Jan 14 13:07:26.845323 containerd[1722]: time="2025-01-14T13:07:26.845265250Z" level=info msg="TearDown network for sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\" successfully" Jan 14 13:07:26.845371 containerd[1722]: time="2025-01-14T13:07:26.845323950Z" level=info msg="StopPodSandbox for \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\" returns successfully" Jan 14 13:07:26.846081 systemd[1]: run-netns-cni\x2dfd87fe6a\x2defd6\x2dd29f\x2db235\x2d5a10241021d6.mount: Deactivated successfully. 
Jan 14 13:07:26.847150 kubelet[3426]: I0114 13:07:26.847122 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154" Jan 14 13:07:26.851707 containerd[1722]: time="2025-01-14T13:07:26.847667477Z" level=info msg="StopPodSandbox for \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\"" Jan 14 13:07:26.851707 containerd[1722]: time="2025-01-14T13:07:26.847911280Z" level=info msg="Ensure that sandbox a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154 in task-service has been cleanup successfully" Jan 14 13:07:26.851707 containerd[1722]: time="2025-01-14T13:07:26.848864191Z" level=info msg="StopPodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\"" Jan 14 13:07:26.851707 containerd[1722]: time="2025-01-14T13:07:26.848954192Z" level=info msg="TearDown network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" successfully" Jan 14 13:07:26.851707 containerd[1722]: time="2025-01-14T13:07:26.848969192Z" level=info msg="StopPodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" returns successfully" Jan 14 13:07:26.852419 containerd[1722]: time="2025-01-14T13:07:26.852251430Z" level=info msg="TearDown network for sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\" successfully" Jan 14 13:07:26.852419 containerd[1722]: time="2025-01-14T13:07:26.852332531Z" level=info msg="StopPodSandbox for \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\" returns successfully" Jan 14 13:07:26.852942 containerd[1722]: time="2025-01-14T13:07:26.852679635Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\"" Jan 14 13:07:26.852942 containerd[1722]: time="2025-01-14T13:07:26.852902438Z" level=info msg="TearDown network for sandbox 
\"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" successfully" Jan 14 13:07:26.852942 containerd[1722]: time="2025-01-14T13:07:26.852918038Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" returns successfully" Jan 14 13:07:26.853372 systemd[1]: run-netns-cni\x2d10c79115\x2dafb7\x2d1b73\x2de300\x2dffbe779fb29f.mount: Deactivated successfully. Jan 14 13:07:26.854962 containerd[1722]: time="2025-01-14T13:07:26.854905461Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\"" Jan 14 13:07:26.855042 containerd[1722]: time="2025-01-14T13:07:26.855000362Z" level=info msg="TearDown network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" successfully" Jan 14 13:07:26.855042 containerd[1722]: time="2025-01-14T13:07:26.855017262Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" returns successfully" Jan 14 13:07:26.855445 containerd[1722]: time="2025-01-14T13:07:26.855296966Z" level=info msg="StopPodSandbox for \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\"" Jan 14 13:07:26.855445 containerd[1722]: time="2025-01-14T13:07:26.855390667Z" level=info msg="TearDown network for sandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\" successfully" Jan 14 13:07:26.855445 containerd[1722]: time="2025-01-14T13:07:26.855404467Z" level=info msg="StopPodSandbox for \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\" returns successfully" Jan 14 13:07:26.856994 containerd[1722]: time="2025-01-14T13:07:26.855881172Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" Jan 14 13:07:26.856994 containerd[1722]: time="2025-01-14T13:07:26.856342778Z" level=info msg="TearDown network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" 
successfully" Jan 14 13:07:26.856994 containerd[1722]: time="2025-01-14T13:07:26.856358678Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" returns successfully" Jan 14 13:07:26.856994 containerd[1722]: time="2025-01-14T13:07:26.856061574Z" level=info msg="StopPodSandbox for \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\"" Jan 14 13:07:26.856994 containerd[1722]: time="2025-01-14T13:07:26.856473479Z" level=info msg="TearDown network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" successfully" Jan 14 13:07:26.856994 containerd[1722]: time="2025-01-14T13:07:26.856486179Z" level=info msg="StopPodSandbox for \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" returns successfully" Jan 14 13:07:26.857797 containerd[1722]: time="2025-01-14T13:07:26.857753594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:6,}" Jan 14 13:07:26.858249 containerd[1722]: time="2025-01-14T13:07:26.858223299Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\"" Jan 14 13:07:26.858352 containerd[1722]: time="2025-01-14T13:07:26.858332601Z" level=info msg="TearDown network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" successfully" Jan 14 13:07:26.858397 containerd[1722]: time="2025-01-14T13:07:26.858352701Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" returns successfully" Jan 14 13:07:26.859082 containerd[1722]: time="2025-01-14T13:07:26.859055609Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\"" Jan 14 13:07:26.859341 containerd[1722]: time="2025-01-14T13:07:26.859310912Z" level=info msg="TearDown network for sandbox 
\"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" successfully" Jan 14 13:07:26.859341 containerd[1722]: time="2025-01-14T13:07:26.859331712Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" returns successfully" Jan 14 13:07:26.859921 containerd[1722]: time="2025-01-14T13:07:26.859895019Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" Jan 14 13:07:26.860198 kubelet[3426]: I0114 13:07:26.860159 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a" Jan 14 13:07:26.860491 containerd[1722]: time="2025-01-14T13:07:26.860458125Z" level=info msg="TearDown network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" successfully" Jan 14 13:07:26.860491 containerd[1722]: time="2025-01-14T13:07:26.860478225Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" returns successfully" Jan 14 13:07:26.861125 containerd[1722]: time="2025-01-14T13:07:26.861039632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:6,}" Jan 14 13:07:26.861125 containerd[1722]: time="2025-01-14T13:07:26.861087332Z" level=info msg="StopPodSandbox for \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\"" Jan 14 13:07:26.861585 containerd[1722]: time="2025-01-14T13:07:26.861560338Z" level=info msg="Ensure that sandbox c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a in task-service has been cleanup successfully" Jan 14 13:07:26.861809 containerd[1722]: time="2025-01-14T13:07:26.861766940Z" level=info msg="TearDown network for sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\" successfully" Jan 14 13:07:26.861809 
containerd[1722]: time="2025-01-14T13:07:26.861794441Z" level=info msg="StopPodSandbox for \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\" returns successfully" Jan 14 13:07:26.864714 containerd[1722]: time="2025-01-14T13:07:26.862193245Z" level=info msg="StopPodSandbox for \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\"" Jan 14 13:07:26.864714 containerd[1722]: time="2025-01-14T13:07:26.862290746Z" level=info msg="TearDown network for sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\" successfully" Jan 14 13:07:26.864714 containerd[1722]: time="2025-01-14T13:07:26.862305546Z" level=info msg="StopPodSandbox for \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\" returns successfully" Jan 14 13:07:26.866290 containerd[1722]: time="2025-01-14T13:07:26.865161179Z" level=info msg="StopPodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\"" Jan 14 13:07:26.866290 containerd[1722]: time="2025-01-14T13:07:26.865261281Z" level=info msg="TearDown network for sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" successfully" Jan 14 13:07:26.866290 containerd[1722]: time="2025-01-14T13:07:26.865275281Z" level=info msg="StopPodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" returns successfully" Jan 14 13:07:26.866803 containerd[1722]: time="2025-01-14T13:07:26.866780998Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\"" Jan 14 13:07:26.867220 containerd[1722]: time="2025-01-14T13:07:26.867006101Z" level=info msg="TearDown network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" successfully" Jan 14 13:07:26.867503 systemd[1]: run-netns-cni\x2dae8d8b54\x2dcec4\x2d92d8\x2d224b\x2d78c5009e206c.mount: Deactivated successfully. 
Jan 14 13:07:26.868571 containerd[1722]: time="2025-01-14T13:07:26.867556607Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" returns successfully" Jan 14 13:07:26.869942 containerd[1722]: time="2025-01-14T13:07:26.869056924Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\"" Jan 14 13:07:26.869942 containerd[1722]: time="2025-01-14T13:07:26.869853234Z" level=info msg="TearDown network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" successfully" Jan 14 13:07:26.869942 containerd[1722]: time="2025-01-14T13:07:26.869879834Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" returns successfully" Jan 14 13:07:26.870917 containerd[1722]: time="2025-01-14T13:07:26.870330739Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\"" Jan 14 13:07:26.870917 containerd[1722]: time="2025-01-14T13:07:26.870882445Z" level=info msg="TearDown network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" successfully" Jan 14 13:07:26.870917 containerd[1722]: time="2025-01-14T13:07:26.870899046Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" returns successfully" Jan 14 13:07:26.871628 containerd[1722]: time="2025-01-14T13:07:26.871540053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:6,}" Jan 14 13:07:26.888405 containerd[1722]: time="2025-01-14T13:07:26.888316247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 14 13:07:27.277961 containerd[1722]: time="2025-01-14T13:07:27.277871645Z" level=info msg="ImageCreate event 
name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:27.732383 containerd[1722]: time="2025-01-14T13:07:27.732332692Z" level=error msg="Failed to destroy network for sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:27.732890 containerd[1722]: time="2025-01-14T13:07:27.732781297Z" level=error msg="encountered an error cleaning up failed sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:27.732890 containerd[1722]: time="2025-01-14T13:07:27.732853798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:27.735218 kubelet[3426]: E0114 13:07:27.733134 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 
13:07:27.735218 kubelet[3426]: E0114 13:07:27.733194 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:27.735218 kubelet[3426]: E0114 13:07:27.733222 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:27.735428 kubelet[3426]: E0114 13:07:27.733288 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" podUID="1b81bba4-2ff3-462c-8b85-47035070eff8" Jan 14 13:07:27.740197 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e-shm.mount: Deactivated successfully. Jan 14 13:07:27.832729 containerd[1722]: time="2025-01-14T13:07:27.832662350Z" level=error msg="Failed to destroy network for sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:27.833939 containerd[1722]: time="2025-01-14T13:07:27.833739663Z" level=error msg="encountered an error cleaning up failed sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:27.833939 containerd[1722]: time="2025-01-14T13:07:27.833821564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:27.835126 kubelet[3426]: E0114 13:07:27.834965 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 
13:07:27.835126 kubelet[3426]: E0114 13:07:27.835047 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:27.835126 kubelet[3426]: E0114 13:07:27.835089 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:27.835302 kubelet[3426]: E0114 13:07:27.835175 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-g846c" podUID="5de7d327-5d34-4f9c-b581-c52c0a00d0b7" Jan 14 13:07:27.836535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece-shm.mount: Deactivated successfully. 
Jan 14 13:07:27.865668 kubelet[3426]: I0114 13:07:27.865633 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece" Jan 14 13:07:27.867024 containerd[1722]: time="2025-01-14T13:07:27.866837245Z" level=info msg="StopPodSandbox for \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\"" Jan 14 13:07:27.867138 containerd[1722]: time="2025-01-14T13:07:27.867092048Z" level=info msg="Ensure that sandbox d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece in task-service has been cleanup successfully" Jan 14 13:07:27.870736 containerd[1722]: time="2025-01-14T13:07:27.870034782Z" level=info msg="TearDown network for sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\" successfully" Jan 14 13:07:27.870736 containerd[1722]: time="2025-01-14T13:07:27.870064482Z" level=info msg="StopPodSandbox for \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\" returns successfully" Jan 14 13:07:27.870827 systemd[1]: run-netns-cni\x2da1dc8f94\x2d9bb2\x2d7eb0\x2d635b\x2d21012c878273.mount: Deactivated successfully. 
Jan 14 13:07:27.872454 containerd[1722]: time="2025-01-14T13:07:27.871109294Z" level=info msg="StopPodSandbox for \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\"" Jan 14 13:07:27.872454 containerd[1722]: time="2025-01-14T13:07:27.871205795Z" level=info msg="TearDown network for sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\" successfully" Jan 14 13:07:27.872454 containerd[1722]: time="2025-01-14T13:07:27.871219495Z" level=info msg="StopPodSandbox for \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\" returns successfully" Jan 14 13:07:27.873277 containerd[1722]: time="2025-01-14T13:07:27.872984116Z" level=info msg="StopPodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\"" Jan 14 13:07:27.873277 containerd[1722]: time="2025-01-14T13:07:27.873079917Z" level=info msg="TearDown network for sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" successfully" Jan 14 13:07:27.873277 containerd[1722]: time="2025-01-14T13:07:27.873094117Z" level=info msg="StopPodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" returns successfully" Jan 14 13:07:27.873462 containerd[1722]: time="2025-01-14T13:07:27.873428521Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\"" Jan 14 13:07:27.873586 containerd[1722]: time="2025-01-14T13:07:27.873517322Z" level=info msg="TearDown network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" successfully" Jan 14 13:07:27.873586 containerd[1722]: time="2025-01-14T13:07:27.873532522Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" returns successfully" Jan 14 13:07:27.873898 containerd[1722]: time="2025-01-14T13:07:27.873879826Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\"" Jan 14 13:07:27.874107 
containerd[1722]: time="2025-01-14T13:07:27.874045428Z" level=info msg="TearDown network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" successfully" Jan 14 13:07:27.874107 containerd[1722]: time="2025-01-14T13:07:27.874064328Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" returns successfully" Jan 14 13:07:27.874754 containerd[1722]: time="2025-01-14T13:07:27.874593534Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" Jan 14 13:07:27.874754 containerd[1722]: time="2025-01-14T13:07:27.874681035Z" level=info msg="TearDown network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" successfully" Jan 14 13:07:27.874754 containerd[1722]: time="2025-01-14T13:07:27.874728636Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" returns successfully" Jan 14 13:07:27.875714 containerd[1722]: time="2025-01-14T13:07:27.875294142Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:27.875714 containerd[1722]: time="2025-01-14T13:07:27.875383843Z" level=info msg="TearDown network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" successfully" Jan 14 13:07:27.875714 containerd[1722]: time="2025-01-14T13:07:27.875398744Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" returns successfully" Jan 14 13:07:27.876150 containerd[1722]: time="2025-01-14T13:07:27.876120952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:7,}" Jan 14 13:07:27.876997 kubelet[3426]: I0114 13:07:27.876973 3426 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e" Jan 14 13:07:27.878395 containerd[1722]: time="2025-01-14T13:07:27.877476868Z" level=info msg="StopPodSandbox for \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\"" Jan 14 13:07:27.878395 containerd[1722]: time="2025-01-14T13:07:27.877891972Z" level=info msg="Ensure that sandbox 02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e in task-service has been cleanup successfully" Jan 14 13:07:27.878395 containerd[1722]: time="2025-01-14T13:07:27.878140375Z" level=info msg="TearDown network for sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\" successfully" Jan 14 13:07:27.878395 containerd[1722]: time="2025-01-14T13:07:27.878156675Z" level=info msg="StopPodSandbox for \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\" returns successfully" Jan 14 13:07:27.878655 containerd[1722]: time="2025-01-14T13:07:27.878627381Z" level=info msg="StopPodSandbox for \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\"" Jan 14 13:07:27.878851 containerd[1722]: time="2025-01-14T13:07:27.878742682Z" level=info msg="TearDown network for sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\" successfully" Jan 14 13:07:27.878851 containerd[1722]: time="2025-01-14T13:07:27.878762182Z" level=info msg="StopPodSandbox for \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\" returns successfully" Jan 14 13:07:27.879133 containerd[1722]: time="2025-01-14T13:07:27.879034086Z" level=info msg="StopPodSandbox for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\"" Jan 14 13:07:27.879133 containerd[1722]: time="2025-01-14T13:07:27.879120287Z" level=info msg="TearDown network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" successfully" Jan 14 13:07:27.879215 containerd[1722]: time="2025-01-14T13:07:27.879133487Z" level=info msg="StopPodSandbox 
for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" returns successfully" Jan 14 13:07:27.879683 containerd[1722]: time="2025-01-14T13:07:27.879562492Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\"" Jan 14 13:07:27.879683 containerd[1722]: time="2025-01-14T13:07:27.879684493Z" level=info msg="TearDown network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" successfully" Jan 14 13:07:27.879919 containerd[1722]: time="2025-01-14T13:07:27.879714693Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" returns successfully" Jan 14 13:07:27.880402 containerd[1722]: time="2025-01-14T13:07:27.879974996Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\"" Jan 14 13:07:27.880402 containerd[1722]: time="2025-01-14T13:07:27.880055497Z" level=info msg="TearDown network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" successfully" Jan 14 13:07:27.880402 containerd[1722]: time="2025-01-14T13:07:27.880068998Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" returns successfully" Jan 14 13:07:27.880577 containerd[1722]: time="2025-01-14T13:07:27.880452602Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\"" Jan 14 13:07:27.880577 containerd[1722]: time="2025-01-14T13:07:27.880554003Z" level=info msg="TearDown network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" successfully" Jan 14 13:07:27.880577 containerd[1722]: time="2025-01-14T13:07:27.880568103Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" returns successfully" Jan 14 13:07:27.881044 containerd[1722]: time="2025-01-14T13:07:27.881018508Z" level=info msg="StopPodSandbox for 
\"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\"" Jan 14 13:07:27.881130 containerd[1722]: time="2025-01-14T13:07:27.881108610Z" level=info msg="TearDown network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" successfully" Jan 14 13:07:27.881130 containerd[1722]: time="2025-01-14T13:07:27.881124110Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" returns successfully" Jan 14 13:07:27.881583 containerd[1722]: time="2025-01-14T13:07:27.881555215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:7,}" Jan 14 13:07:28.192259 containerd[1722]: time="2025-01-14T13:07:28.192194901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:28.193208 containerd[1722]: time="2025-01-14T13:07:28.192959610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 15.633424182s" Jan 14 13:07:28.193208 containerd[1722]: time="2025-01-14T13:07:28.193023211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 14 13:07:28.201980 containerd[1722]: time="2025-01-14T13:07:28.201938214Z" level=info msg="CreateContainer within sandbox \"0efe0ec1d3205b8a66cd7c5f2b3c0f43ffa26139d371bacea5fb35b2d4ca88fc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 13:07:28.290190 
containerd[1722]: time="2025-01-14T13:07:28.290137632Z" level=error msg="Failed to destroy network for sandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:28.290494 containerd[1722]: time="2025-01-14T13:07:28.290461436Z" level=error msg="encountered an error cleaning up failed sandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:28.290591 containerd[1722]: time="2025-01-14T13:07:28.290529637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:28.290842 kubelet[3426]: E0114 13:07:28.290812 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:28.290946 kubelet[3426]: E0114 13:07:28.290888 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:28.290946 kubelet[3426]: E0114 13:07:28.290919 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" Jan 14 13:07:28.291091 kubelet[3426]: E0114 13:07:28.291059 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-mfw8m_calico-apiserver(1798181d-c5b0-4589-8e4b-80c339c21d34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" podUID="1798181d-c5b0-4589-8e4b-80c339c21d34" Jan 14 13:07:28.501565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e-shm.mount: Deactivated successfully. Jan 14 13:07:28.501855 systemd[1]: run-netns-cni\x2d00f3cb95\x2d24b9\x2dc3c7\x2dce3f\x2dee712437a1a8.mount: Deactivated successfully. 
Jan 14 13:07:28.883804 kubelet[3426]: I0114 13:07:28.883615 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e" Jan 14 13:07:28.885914 containerd[1722]: time="2025-01-14T13:07:28.884873499Z" level=info msg="StopPodSandbox for \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\"" Jan 14 13:07:28.885914 containerd[1722]: time="2025-01-14T13:07:28.885179903Z" level=info msg="Ensure that sandbox 10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e in task-service has been cleanup successfully" Jan 14 13:07:28.886930 containerd[1722]: time="2025-01-14T13:07:28.886830222Z" level=info msg="TearDown network for sandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\" successfully" Jan 14 13:07:28.886930 containerd[1722]: time="2025-01-14T13:07:28.886859122Z" level=info msg="StopPodSandbox for \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\" returns successfully" Jan 14 13:07:28.887686 containerd[1722]: time="2025-01-14T13:07:28.887346828Z" level=info msg="StopPodSandbox for \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\"" Jan 14 13:07:28.887686 containerd[1722]: time="2025-01-14T13:07:28.887443529Z" level=info msg="TearDown network for sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\" successfully" Jan 14 13:07:28.887686 containerd[1722]: time="2025-01-14T13:07:28.887457329Z" level=info msg="StopPodSandbox for \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\" returns successfully" Jan 14 13:07:28.888393 containerd[1722]: time="2025-01-14T13:07:28.888014435Z" level=info msg="StopPodSandbox for \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\"" Jan 14 13:07:28.888393 containerd[1722]: time="2025-01-14T13:07:28.888137137Z" level=info msg="TearDown network for sandbox 
\"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\" successfully" Jan 14 13:07:28.888393 containerd[1722]: time="2025-01-14T13:07:28.888152637Z" level=info msg="StopPodSandbox for \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\" returns successfully" Jan 14 13:07:28.888943 containerd[1722]: time="2025-01-14T13:07:28.888773444Z" level=info msg="StopPodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\"" Jan 14 13:07:28.888943 containerd[1722]: time="2025-01-14T13:07:28.888866945Z" level=info msg="TearDown network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" successfully" Jan 14 13:07:28.888943 containerd[1722]: time="2025-01-14T13:07:28.888882345Z" level=info msg="StopPodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" returns successfully" Jan 14 13:07:28.889648 containerd[1722]: time="2025-01-14T13:07:28.889357651Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\"" Jan 14 13:07:28.889648 containerd[1722]: time="2025-01-14T13:07:28.889445352Z" level=info msg="TearDown network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" successfully" Jan 14 13:07:28.889648 containerd[1722]: time="2025-01-14T13:07:28.889460852Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" returns successfully" Jan 14 13:07:28.889865 containerd[1722]: time="2025-01-14T13:07:28.889774356Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\"" Jan 14 13:07:28.890037 containerd[1722]: time="2025-01-14T13:07:28.889959758Z" level=info msg="TearDown network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" successfully" Jan 14 13:07:28.890037 containerd[1722]: time="2025-01-14T13:07:28.889980158Z" level=info msg="StopPodSandbox for 
\"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" returns successfully" Jan 14 13:07:28.890274 containerd[1722]: time="2025-01-14T13:07:28.890234861Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\"" Jan 14 13:07:28.890346 containerd[1722]: time="2025-01-14T13:07:28.890326462Z" level=info msg="TearDown network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" successfully" Jan 14 13:07:28.890402 containerd[1722]: time="2025-01-14T13:07:28.890345862Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" returns successfully" Jan 14 13:07:28.890872 containerd[1722]: time="2025-01-14T13:07:28.890849368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:7,}" Jan 14 13:07:29.232482 containerd[1722]: time="2025-01-14T13:07:29.232430212Z" level=error msg="Failed to destroy network for sandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.232800 containerd[1722]: time="2025-01-14T13:07:29.232766416Z" level=error msg="encountered an error cleaning up failed sandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.232879 containerd[1722]: time="2025-01-14T13:07:29.232843417Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.233129 kubelet[3426]: E0114 13:07:29.233104 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.233223 kubelet[3426]: E0114 13:07:29.233164 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:29.233223 kubelet[3426]: E0114 13:07:29.233192 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6l9p" Jan 14 13:07:29.233309 kubelet[3426]: E0114 13:07:29.233266 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z6l9p_calico-system(a334eebb-fcba-4d16-8280-bef7ba8849b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6l9p" podUID="a334eebb-fcba-4d16-8280-bef7ba8849b0" Jan 14 13:07:29.329723 containerd[1722]: time="2025-01-14T13:07:29.329582834Z" level=error msg="Failed to destroy network for sandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.329966 containerd[1722]: time="2025-01-14T13:07:29.329923538Z" level=error msg="encountered an error cleaning up failed sandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.330054 containerd[1722]: time="2025-01-14T13:07:29.330009639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.330322 kubelet[3426]: E0114 13:07:29.330296 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.330421 kubelet[3426]: E0114 13:07:29.330363 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:29.330421 kubelet[3426]: E0114 13:07:29.330396 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" Jan 14 13:07:29.330505 kubelet[3426]: E0114 13:07:29.330480 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcf9d67d5-qgrsw_calico-apiserver(f24a6417-91e6-4261-aa71-8c79526d4ae0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" podUID="f24a6417-91e6-4261-aa71-8c79526d4ae0" Jan 14 13:07:29.376902 containerd[1722]: time="2025-01-14T13:07:29.376846379Z" level=error msg="Failed to destroy network for sandbox \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.377193 containerd[1722]: time="2025-01-14T13:07:29.377162483Z" level=error msg="encountered an error cleaning up failed sandbox \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.377302 containerd[1722]: time="2025-01-14T13:07:29.377231884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.377561 kubelet[3426]: E0114 13:07:29.377537 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:29.377659 kubelet[3426]: E0114 13:07:29.377602 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:29.377659 kubelet[3426]: E0114 13:07:29.377633 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-txqcv" Jan 14 13:07:29.377781 kubelet[3426]: E0114 13:07:29.377728 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-txqcv_kube-system(400f3e16-4883-45bf-811c-322b770038b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-txqcv" 
podUID="400f3e16-4883-45bf-811c-322b770038b8" Jan 14 13:07:29.497761 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2-shm.mount: Deactivated successfully. Jan 14 13:07:29.497880 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5-shm.mount: Deactivated successfully. Jan 14 13:07:29.497963 systemd[1]: run-netns-cni\x2d8036edbb\x2d014b\x2d3e71\x2dbbbf\x2d0ebc8c9dac17.mount: Deactivated successfully. Jan 14 13:07:29.888657 kubelet[3426]: I0114 13:07:29.888516 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d" Jan 14 13:07:29.891004 containerd[1722]: time="2025-01-14T13:07:29.890425909Z" level=info msg="StopPodSandbox for \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\"" Jan 14 13:07:29.891004 containerd[1722]: time="2025-01-14T13:07:29.890818114Z" level=info msg="Ensure that sandbox 8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d in task-service has been cleanup successfully" Jan 14 13:07:29.893092 containerd[1722]: time="2025-01-14T13:07:29.892832237Z" level=info msg="TearDown network for sandbox \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\" successfully" Jan 14 13:07:29.893092 containerd[1722]: time="2025-01-14T13:07:29.892857237Z" level=info msg="StopPodSandbox for \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\" returns successfully" Jan 14 13:07:29.894125 containerd[1722]: time="2025-01-14T13:07:29.893345643Z" level=info msg="CreateContainer within sandbox \"0efe0ec1d3205b8a66cd7c5f2b3c0f43ffa26139d371bacea5fb35b2d4ca88fc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"508a03f55f7bdc4bc37e1df617c703508b9c29ca6bdfb3e2a3bccb10f1252669\"" Jan 14 13:07:29.894125 containerd[1722]: 
time="2025-01-14T13:07:29.893517445Z" level=info msg="StopPodSandbox for \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\"" Jan 14 13:07:29.894125 containerd[1722]: time="2025-01-14T13:07:29.893611846Z" level=info msg="TearDown network for sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\" successfully" Jan 14 13:07:29.894125 containerd[1722]: time="2025-01-14T13:07:29.893626846Z" level=info msg="StopPodSandbox for \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\" returns successfully" Jan 14 13:07:29.894551 containerd[1722]: time="2025-01-14T13:07:29.894527956Z" level=info msg="StartContainer for \"508a03f55f7bdc4bc37e1df617c703508b9c29ca6bdfb3e2a3bccb10f1252669\"" Jan 14 13:07:29.894889 containerd[1722]: time="2025-01-14T13:07:29.894770959Z" level=info msg="StopPodSandbox for \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\"" Jan 14 13:07:29.895053 containerd[1722]: time="2025-01-14T13:07:29.894863160Z" level=info msg="TearDown network for sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\" successfully" Jan 14 13:07:29.895053 containerd[1722]: time="2025-01-14T13:07:29.894909161Z" level=info msg="StopPodSandbox for \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\" returns successfully" Jan 14 13:07:29.899727 containerd[1722]: time="2025-01-14T13:07:29.899304712Z" level=info msg="StopPodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\"" Jan 14 13:07:29.899727 containerd[1722]: time="2025-01-14T13:07:29.899413213Z" level=info msg="TearDown network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" successfully" Jan 14 13:07:29.899727 containerd[1722]: time="2025-01-14T13:07:29.899428613Z" level=info msg="StopPodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" returns successfully" Jan 14 13:07:29.899917 containerd[1722]: 
time="2025-01-14T13:07:29.899842518Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\"" Jan 14 13:07:29.900080 containerd[1722]: time="2025-01-14T13:07:29.899970419Z" level=info msg="TearDown network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" successfully" Jan 14 13:07:29.900080 containerd[1722]: time="2025-01-14T13:07:29.899988619Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" returns successfully" Jan 14 13:07:29.900642 containerd[1722]: time="2025-01-14T13:07:29.900427425Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\"" Jan 14 13:07:29.900798 containerd[1722]: time="2025-01-14T13:07:29.900776129Z" level=info msg="TearDown network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" successfully" Jan 14 13:07:29.900927 containerd[1722]: time="2025-01-14T13:07:29.900799729Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" returns successfully" Jan 14 13:07:29.901674 kubelet[3426]: I0114 13:07:29.901556 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2" Jan 14 13:07:29.903265 containerd[1722]: time="2025-01-14T13:07:29.902604450Z" level=info msg="StopPodSandbox for \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\"" Jan 14 13:07:29.903265 containerd[1722]: time="2025-01-14T13:07:29.902983454Z" level=info msg="Ensure that sandbox 4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2 in task-service has been cleanup successfully" Jan 14 13:07:29.903593 containerd[1722]: time="2025-01-14T13:07:29.903465660Z" level=info msg="TearDown network for sandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\" successfully" Jan 14 
13:07:29.903593 containerd[1722]: time="2025-01-14T13:07:29.903492160Z" level=info msg="StopPodSandbox for \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\" returns successfully" Jan 14 13:07:29.906354 containerd[1722]: time="2025-01-14T13:07:29.904551772Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" Jan 14 13:07:29.906354 containerd[1722]: time="2025-01-14T13:07:29.904831475Z" level=info msg="TearDown network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" successfully" Jan 14 13:07:29.906354 containerd[1722]: time="2025-01-14T13:07:29.904851976Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" returns successfully" Jan 14 13:07:29.906788 containerd[1722]: time="2025-01-14T13:07:29.906753398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:7,}" Jan 14 13:07:29.907031 containerd[1722]: time="2025-01-14T13:07:29.907007401Z" level=info msg="StopPodSandbox for \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\"" Jan 14 13:07:29.907129 containerd[1722]: time="2025-01-14T13:07:29.907098802Z" level=info msg="TearDown network for sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\" successfully" Jan 14 13:07:29.907129 containerd[1722]: time="2025-01-14T13:07:29.907113802Z" level=info msg="StopPodSandbox for \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\" returns successfully" Jan 14 13:07:29.909175 containerd[1722]: time="2025-01-14T13:07:29.909138625Z" level=info msg="StopPodSandbox for \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\"" Jan 14 13:07:29.910844 containerd[1722]: time="2025-01-14T13:07:29.910794644Z" level=info msg="TearDown network for sandbox 
\"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\" successfully" Jan 14 13:07:29.910844 containerd[1722]: time="2025-01-14T13:07:29.910822545Z" level=info msg="StopPodSandbox for \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\" returns successfully" Jan 14 13:07:29.912384 containerd[1722]: time="2025-01-14T13:07:29.912286761Z" level=info msg="StopPodSandbox for \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\"" Jan 14 13:07:29.913181 containerd[1722]: time="2025-01-14T13:07:29.912940769Z" level=info msg="TearDown network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" successfully" Jan 14 13:07:29.913181 containerd[1722]: time="2025-01-14T13:07:29.912961069Z" level=info msg="StopPodSandbox for \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" returns successfully" Jan 14 13:07:29.915178 containerd[1722]: time="2025-01-14T13:07:29.914481587Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\"" Jan 14 13:07:29.915338 containerd[1722]: time="2025-01-14T13:07:29.915317496Z" level=info msg="TearDown network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" successfully" Jan 14 13:07:29.917549 containerd[1722]: time="2025-01-14T13:07:29.916477010Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" returns successfully" Jan 14 13:07:29.918058 containerd[1722]: time="2025-01-14T13:07:29.918030228Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\"" Jan 14 13:07:29.918143 containerd[1722]: time="2025-01-14T13:07:29.918126729Z" level=info msg="TearDown network for sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" successfully" Jan 14 13:07:29.918185 containerd[1722]: time="2025-01-14T13:07:29.918140929Z" level=info msg="StopPodSandbox for 
\"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" returns successfully" Jan 14 13:07:29.918903 containerd[1722]: time="2025-01-14T13:07:29.918786637Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" Jan 14 13:07:29.919143 containerd[1722]: time="2025-01-14T13:07:29.919006639Z" level=info msg="TearDown network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" successfully" Jan 14 13:07:29.919143 containerd[1722]: time="2025-01-14T13:07:29.919027339Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" returns successfully" Jan 14 13:07:29.920607 kubelet[3426]: I0114 13:07:29.920581 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5" Jan 14 13:07:29.922031 containerd[1722]: time="2025-01-14T13:07:29.921982673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:7,}" Jan 14 13:07:29.922435 containerd[1722]: time="2025-01-14T13:07:29.922410178Z" level=info msg="StopPodSandbox for \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\"" Jan 14 13:07:29.922708 containerd[1722]: time="2025-01-14T13:07:29.922668981Z" level=info msg="Ensure that sandbox 9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5 in task-service has been cleanup successfully" Jan 14 13:07:29.924894 containerd[1722]: time="2025-01-14T13:07:29.924843706Z" level=info msg="TearDown network for sandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\" successfully" Jan 14 13:07:29.924894 containerd[1722]: time="2025-01-14T13:07:29.924874307Z" level=info msg="StopPodSandbox for \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\" returns successfully" Jan 14 
13:07:29.925578 containerd[1722]: time="2025-01-14T13:07:29.925553015Z" level=info msg="StopPodSandbox for \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\"" Jan 14 13:07:29.925846 containerd[1722]: time="2025-01-14T13:07:29.925827418Z" level=info msg="TearDown network for sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\" successfully" Jan 14 13:07:29.926063 containerd[1722]: time="2025-01-14T13:07:29.926043320Z" level=info msg="StopPodSandbox for \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\" returns successfully" Jan 14 13:07:29.927960 containerd[1722]: time="2025-01-14T13:07:29.927917142Z" level=info msg="StopPodSandbox for \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\"" Jan 14 13:07:29.928405 containerd[1722]: time="2025-01-14T13:07:29.928367647Z" level=info msg="TearDown network for sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\" successfully" Jan 14 13:07:29.928626 containerd[1722]: time="2025-01-14T13:07:29.928588850Z" level=info msg="StopPodSandbox for \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\" returns successfully" Jan 14 13:07:29.929116 containerd[1722]: time="2025-01-14T13:07:29.929064355Z" level=info msg="StopPodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\"" Jan 14 13:07:29.929228 containerd[1722]: time="2025-01-14T13:07:29.929167456Z" level=info msg="TearDown network for sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" successfully" Jan 14 13:07:29.929228 containerd[1722]: time="2025-01-14T13:07:29.929182957Z" level=info msg="StopPodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" returns successfully" Jan 14 13:07:29.929600 containerd[1722]: time="2025-01-14T13:07:29.929573661Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\"" Jan 14 13:07:29.930503 
containerd[1722]: time="2025-01-14T13:07:29.930475171Z" level=info msg="TearDown network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" successfully" Jan 14 13:07:29.930503 containerd[1722]: time="2025-01-14T13:07:29.930502672Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" returns successfully" Jan 14 13:07:29.930841 containerd[1722]: time="2025-01-14T13:07:29.930819775Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\"" Jan 14 13:07:29.931028 containerd[1722]: time="2025-01-14T13:07:29.931010078Z" level=info msg="TearDown network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" successfully" Jan 14 13:07:29.931109 containerd[1722]: time="2025-01-14T13:07:29.931095979Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" returns successfully" Jan 14 13:07:29.931668 containerd[1722]: time="2025-01-14T13:07:29.931648085Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\"" Jan 14 13:07:29.931858 containerd[1722]: time="2025-01-14T13:07:29.931838287Z" level=info msg="TearDown network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" successfully" Jan 14 13:07:29.931944 containerd[1722]: time="2025-01-14T13:07:29.931930588Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" returns successfully" Jan 14 13:07:29.932904 containerd[1722]: time="2025-01-14T13:07:29.932878599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:7,}" Jan 14 13:07:29.958901 systemd[1]: Started cri-containerd-508a03f55f7bdc4bc37e1df617c703508b9c29ca6bdfb3e2a3bccb10f1252669.scope - libcontainer container 
508a03f55f7bdc4bc37e1df617c703508b9c29ca6bdfb3e2a3bccb10f1252669. Jan 14 13:07:30.002816 containerd[1722]: time="2025-01-14T13:07:30.002753606Z" level=error msg="Failed to destroy network for sandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:30.003768 containerd[1722]: time="2025-01-14T13:07:30.003731817Z" level=error msg="encountered an error cleaning up failed sandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:30.004033 containerd[1722]: time="2025-01-14T13:07:30.003933720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:30.004746 kubelet[3426]: E0114 13:07:30.004717 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:30.005428 kubelet[3426]: E0114 13:07:30.005372 3426 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:30.006030 kubelet[3426]: E0114 13:07:30.005717 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g846c" Jan 14 13:07:30.006363 kubelet[3426]: E0114 13:07:30.006179 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-g846c_kube-system(5de7d327-5d34-4f9c-b581-c52c0a00d0b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-g846c" podUID="5de7d327-5d34-4f9c-b581-c52c0a00d0b7" Jan 14 13:07:30.075446 containerd[1722]: time="2025-01-14T13:07:30.075301344Z" level=error msg="Failed to destroy network for sandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:30.076052 containerd[1722]: time="2025-01-14T13:07:30.075907751Z" level=error msg="encountered an error cleaning up failed sandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:30.076052 containerd[1722]: time="2025-01-14T13:07:30.075997552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:30.077012 kubelet[3426]: E0114 13:07:30.076577 3426 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:07:30.077012 kubelet[3426]: E0114 13:07:30.076639 3426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:30.077012 kubelet[3426]: E0114 13:07:30.076673 3426 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" Jan 14 13:07:30.077199 kubelet[3426]: E0114 13:07:30.076760 3426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fddf9dc45-phpck_calico-system(1b81bba4-2ff3-462c-8b85-47035070eff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" podUID="1b81bba4-2ff3-462c-8b85-47035070eff8" Jan 14 13:07:30.080582 containerd[1722]: time="2025-01-14T13:07:30.080437303Z" level=info msg="StartContainer for \"508a03f55f7bdc4bc37e1df617c703508b9c29ca6bdfb3e2a3bccb10f1252669\" returns successfully" Jan 14 13:07:30.500857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67-shm.mount: Deactivated successfully. Jan 14 13:07:30.501385 systemd[1]: run-netns-cni\x2d40b20dc3\x2d7a44\x2d6222\x2d0266\x2d6e634a34ca30.mount: Deactivated successfully. 
Jan 14 13:07:30.501590 systemd[1]: run-netns-cni\x2d8a680873\x2d7599\x2d1458\x2d632b\x2db8f7b8c59923.mount: Deactivated successfully. Jan 14 13:07:30.501793 systemd[1]: run-netns-cni\x2d8113f3aa\x2d68f8\x2d81a8\x2d321c\x2d0d1a31337a3d.mount: Deactivated successfully. Jan 14 13:07:30.597974 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 13:07:30.632235 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 14 13:07:30.927290 kubelet[3426]: I0114 13:07:30.927255 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67" Jan 14 13:07:30.928323 containerd[1722]: time="2025-01-14T13:07:30.928284092Z" level=info msg="StopPodSandbox for \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\"" Jan 14 13:07:30.932754 containerd[1722]: time="2025-01-14T13:07:30.928543295Z" level=info msg="Ensure that sandbox 9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67 in task-service has been cleanup successfully" Jan 14 13:07:30.932754 containerd[1722]: time="2025-01-14T13:07:30.932088136Z" level=info msg="TearDown network for sandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\" successfully" Jan 14 13:07:30.932754 containerd[1722]: time="2025-01-14T13:07:30.932109436Z" level=info msg="StopPodSandbox for \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\" returns successfully" Jan 14 13:07:30.934316 systemd[1]: run-netns-cni\x2dac909be0\x2d3bf1\x2d055c\x2d8cdd\x2d33760eb92d78.mount: Deactivated successfully. 
Jan 14 13:07:30.935147 containerd[1722]: time="2025-01-14T13:07:30.935121971Z" level=info msg="StopPodSandbox for \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\"" Jan 14 13:07:30.941124 containerd[1722]: time="2025-01-14T13:07:30.935988481Z" level=info msg="TearDown network for sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\" successfully" Jan 14 13:07:30.941124 containerd[1722]: time="2025-01-14T13:07:30.936013281Z" level=info msg="StopPodSandbox for \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\" returns successfully" Jan 14 13:07:30.942448 containerd[1722]: time="2025-01-14T13:07:30.942126152Z" level=info msg="StopPodSandbox for \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\"" Jan 14 13:07:30.944669 containerd[1722]: time="2025-01-14T13:07:30.942592157Z" level=info msg="TearDown network for sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\" successfully" Jan 14 13:07:30.944669 containerd[1722]: time="2025-01-14T13:07:30.942612458Z" level=info msg="StopPodSandbox for \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\" returns successfully" Jan 14 13:07:30.944915 containerd[1722]: time="2025-01-14T13:07:30.944889284Z" level=info msg="StopPodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\"" Jan 14 13:07:30.946541 containerd[1722]: time="2025-01-14T13:07:30.946517903Z" level=info msg="TearDown network for sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" successfully" Jan 14 13:07:30.949135 containerd[1722]: time="2025-01-14T13:07:30.948835929Z" level=info msg="StopPodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" returns successfully" Jan 14 13:07:30.951959 containerd[1722]: time="2025-01-14T13:07:30.951933065Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\"" Jan 14 13:07:30.952307 
containerd[1722]: time="2025-01-14T13:07:30.952173768Z" level=info msg="TearDown network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" successfully" Jan 14 13:07:30.952307 containerd[1722]: time="2025-01-14T13:07:30.952218668Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" returns successfully" Jan 14 13:07:30.953341 containerd[1722]: time="2025-01-14T13:07:30.953172980Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\"" Jan 14 13:07:30.953562 containerd[1722]: time="2025-01-14T13:07:30.953434783Z" level=info msg="TearDown network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" successfully" Jan 14 13:07:30.953562 containerd[1722]: time="2025-01-14T13:07:30.953454283Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" returns successfully" Jan 14 13:07:30.955184 containerd[1722]: time="2025-01-14T13:07:30.954952300Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" Jan 14 13:07:30.955638 containerd[1722]: time="2025-01-14T13:07:30.955616808Z" level=info msg="TearDown network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" successfully" Jan 14 13:07:30.956013 containerd[1722]: time="2025-01-14T13:07:30.955826610Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" returns successfully" Jan 14 13:07:30.957552 containerd[1722]: time="2025-01-14T13:07:30.957527030Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:30.958020 containerd[1722]: time="2025-01-14T13:07:30.957893634Z" level=info msg="TearDown network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" successfully" Jan 14 13:07:30.958279 
containerd[1722]: time="2025-01-14T13:07:30.958229538Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" returns successfully" Jan 14 13:07:30.959820 containerd[1722]: time="2025-01-14T13:07:30.959481352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:8,}" Jan 14 13:07:30.964510 kubelet[3426]: I0114 13:07:30.963423 3426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277" Jan 14 13:07:30.964629 containerd[1722]: time="2025-01-14T13:07:30.964316408Z" level=info msg="StopPodSandbox for \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\"" Jan 14 13:07:30.965034 containerd[1722]: time="2025-01-14T13:07:30.965009816Z" level=info msg="Ensure that sandbox 8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277 in task-service has been cleanup successfully" Jan 14 13:07:30.965714 containerd[1722]: time="2025-01-14T13:07:30.965679524Z" level=info msg="TearDown network for sandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\" successfully" Jan 14 13:07:30.965854 containerd[1722]: time="2025-01-14T13:07:30.965833026Z" level=info msg="StopPodSandbox for \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\" returns successfully" Jan 14 13:07:30.967632 containerd[1722]: time="2025-01-14T13:07:30.967610446Z" level=info msg="StopPodSandbox for \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\"" Jan 14 13:07:30.967891 containerd[1722]: time="2025-01-14T13:07:30.967872349Z" level=info msg="TearDown network for sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\" successfully" Jan 14 13:07:30.967987 containerd[1722]: time="2025-01-14T13:07:30.967971550Z" level=info msg="StopPodSandbox for 
\"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\" returns successfully" Jan 14 13:07:30.968333 containerd[1722]: time="2025-01-14T13:07:30.968311954Z" level=info msg="StopPodSandbox for \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\"" Jan 14 13:07:30.968587 containerd[1722]: time="2025-01-14T13:07:30.968568257Z" level=info msg="TearDown network for sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\" successfully" Jan 14 13:07:30.968676 containerd[1722]: time="2025-01-14T13:07:30.968662258Z" level=info msg="StopPodSandbox for \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\" returns successfully" Jan 14 13:07:30.970217 containerd[1722]: time="2025-01-14T13:07:30.970186576Z" level=info msg="StopPodSandbox for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\"" Jan 14 13:07:30.970408 containerd[1722]: time="2025-01-14T13:07:30.970387678Z" level=info msg="TearDown network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" successfully" Jan 14 13:07:30.971022 containerd[1722]: time="2025-01-14T13:07:30.970479279Z" level=info msg="StopPodSandbox for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" returns successfully" Jan 14 13:07:30.971022 containerd[1722]: time="2025-01-14T13:07:30.970777383Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\"" Jan 14 13:07:30.971022 containerd[1722]: time="2025-01-14T13:07:30.970862284Z" level=info msg="TearDown network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" successfully" Jan 14 13:07:30.971022 containerd[1722]: time="2025-01-14T13:07:30.970876084Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" returns successfully" Jan 14 13:07:30.971857 containerd[1722]: time="2025-01-14T13:07:30.971294489Z" level=info msg="StopPodSandbox for 
\"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\"" Jan 14 13:07:30.971857 containerd[1722]: time="2025-01-14T13:07:30.971385790Z" level=info msg="TearDown network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" successfully" Jan 14 13:07:30.971857 containerd[1722]: time="2025-01-14T13:07:30.971399790Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" returns successfully" Jan 14 13:07:30.971857 containerd[1722]: time="2025-01-14T13:07:30.971697493Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\"" Jan 14 13:07:30.971857 containerd[1722]: time="2025-01-14T13:07:30.971787294Z" level=info msg="TearDown network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" successfully" Jan 14 13:07:30.971857 containerd[1722]: time="2025-01-14T13:07:30.971801195Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" returns successfully" Jan 14 13:07:30.972922 containerd[1722]: time="2025-01-14T13:07:30.972343801Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\"" Jan 14 13:07:30.972922 containerd[1722]: time="2025-01-14T13:07:30.972432902Z" level=info msg="TearDown network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" successfully" Jan 14 13:07:30.972922 containerd[1722]: time="2025-01-14T13:07:30.972446102Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" returns successfully" Jan 14 13:07:30.973524 containerd[1722]: time="2025-01-14T13:07:30.973499314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:8,}" Jan 14 13:07:31.296674 systemd-networkd[1327]: cali8346cc0f00c: Link 
UP Jan 14 13:07:31.297632 systemd-networkd[1327]: cali8346cc0f00c: Gained carrier Jan 14 13:07:31.348449 kubelet[3426]: I0114 13:07:31.346268 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-hkwbz" podStartSLOduration=3.972395425 podStartE2EDuration="35.346208017s" podCreationTimestamp="2025-01-14 13:06:56 +0000 UTC" firstStartedPulling="2025-01-14 13:06:56.819966528 +0000 UTC m=+18.538757173" lastFinishedPulling="2025-01-14 13:07:28.19377912 +0000 UTC m=+49.912569765" observedRunningTime="2025-01-14 13:07:31.045307943 +0000 UTC m=+52.764098588" watchObservedRunningTime="2025-01-14 13:07:31.346208017 +0000 UTC m=+53.064998662" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.108 [INFO][5658] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.120 [INFO][5658] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0 calico-apiserver-6dcf9d67d5- calico-apiserver 1798181d-c5b0-4589-8e4b-80c339c21d34 695 0 2025-01-14 13:06:56 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dcf9d67d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-f264a924af calico-apiserver-6dcf9d67d5-mfw8m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8346cc0f00c [] []}} ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-mfw8m" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.120 [INFO][5658] cni-plugin/k8s.go 77: 
Extracted identifiers for CmdAddK8s ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-mfw8m" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.161 [INFO][5669] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" HandleID="k8s-pod-network.4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Workload="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.187 [INFO][5669] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" HandleID="k8s-pod-network.4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Workload="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-f264a924af", "pod":"calico-apiserver-6dcf9d67d5-mfw8m", "timestamp":"2025-01-14 13:07:31.16111078 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f264a924af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.187 [INFO][5669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.187 [INFO][5669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.187 [INFO][5669] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f264a924af' Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.191 [INFO][5669] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.199 [INFO][5669] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.206 [INFO][5669] ipam/ipam.go 489: Trying affinity for 192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.208 [INFO][5669] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.213 [INFO][5669] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.213 [INFO][5669] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.214 [INFO][5669] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00 Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.247 [INFO][5669] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.254 [INFO][5669] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.29.129/26] block=192.168.29.128/26 handle="k8s-pod-network.4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.254 [INFO][5669] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.129/26] handle="k8s-pod-network.4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.254 [INFO][5669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:07:31.351086 containerd[1722]: 2025-01-14 13:07:31.255 [INFO][5669] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.129/26] IPv6=[] ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" HandleID="k8s-pod-network.4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Workload="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0" Jan 14 13:07:31.353660 containerd[1722]: 2025-01-14 13:07:31.259 [INFO][5658] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-mfw8m" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0", GenerateName:"calico-apiserver-6dcf9d67d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1798181d-c5b0-4589-8e4b-80c339c21d34", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcf9d67d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"", Pod:"calico-apiserver-6dcf9d67d5-mfw8m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8346cc0f00c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:31.353660 containerd[1722]: 2025-01-14 13:07:31.259 [INFO][5658] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.129/32] ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-mfw8m" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0" Jan 14 13:07:31.353660 containerd[1722]: 2025-01-14 13:07:31.259 [INFO][5658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8346cc0f00c ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-mfw8m" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0" Jan 14 13:07:31.353660 containerd[1722]: 2025-01-14 13:07:31.297 [INFO][5658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-mfw8m" 
WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0" Jan 14 13:07:31.353660 containerd[1722]: 2025-01-14 13:07:31.298 [INFO][5658] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-mfw8m" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0", GenerateName:"calico-apiserver-6dcf9d67d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1798181d-c5b0-4589-8e4b-80c339c21d34", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcf9d67d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00", Pod:"calico-apiserver-6dcf9d67d5-mfw8m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8346cc0f00c", MAC:"d6:48:c2:b0:f0:71", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:31.353660 containerd[1722]: 2025-01-14 13:07:31.345 [INFO][5658] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-mfw8m" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--mfw8m-eth0" Jan 14 13:07:31.376549 systemd-networkd[1327]: cali3db47eb392f: Link UP Jan 14 13:07:31.376984 systemd-networkd[1327]: cali3db47eb392f: Gained carrier Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.227 [INFO][5675] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.250 [INFO][5675] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0 coredns-76f75df574- kube-system 400f3e16-4883-45bf-811c-322b770038b8 688 0 2025-01-14 13:06:49 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-f264a924af coredns-76f75df574-txqcv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3db47eb392f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Namespace="kube-system" Pod="coredns-76f75df574-txqcv" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.251 [INFO][5675] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Namespace="kube-system" Pod="coredns-76f75df574-txqcv" 
WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.278 [INFO][5699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" HandleID="k8s-pod-network.2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Workload="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.304 [INFO][5699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" HandleID="k8s-pod-network.2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Workload="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319420), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-f264a924af", "pod":"coredns-76f75df574-txqcv", "timestamp":"2025-01-14 13:07:31.278918841 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f264a924af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.304 [INFO][5699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.304 [INFO][5699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.304 [INFO][5699] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f264a924af' Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.307 [INFO][5699] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.350 [INFO][5699] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.357 [INFO][5699] ipam/ipam.go 489: Trying affinity for 192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.358 [INFO][5699] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.360 [INFO][5699] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.360 [INFO][5699] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.361 [INFO][5699] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5 Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.366 [INFO][5699] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.372 [INFO][5699] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.29.130/26] block=192.168.29.128/26 handle="k8s-pod-network.2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.372 [INFO][5699] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.130/26] handle="k8s-pod-network.2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.372 [INFO][5699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:07:31.395105 containerd[1722]: 2025-01-14 13:07:31.372 [INFO][5699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.130/26] IPv6=[] ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" HandleID="k8s-pod-network.2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Workload="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0" Jan 14 13:07:31.396076 containerd[1722]: 2025-01-14 13:07:31.373 [INFO][5675] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Namespace="kube-system" Pod="coredns-76f75df574-txqcv" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"400f3e16-4883-45bf-811c-322b770038b8", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"", Pod:"coredns-76f75df574-txqcv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3db47eb392f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:31.396076 containerd[1722]: 2025-01-14 13:07:31.373 [INFO][5675] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.130/32] ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Namespace="kube-system" Pod="coredns-76f75df574-txqcv" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0" Jan 14 13:07:31.396076 containerd[1722]: 2025-01-14 13:07:31.373 [INFO][5675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3db47eb392f ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Namespace="kube-system" Pod="coredns-76f75df574-txqcv" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0" Jan 14 13:07:31.396076 containerd[1722]: 2025-01-14 13:07:31.377 [INFO][5675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Namespace="kube-system" Pod="coredns-76f75df574-txqcv" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0" Jan 14 13:07:31.396076 containerd[1722]: 2025-01-14 13:07:31.378 [INFO][5675] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Namespace="kube-system" Pod="coredns-76f75df574-txqcv" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"400f3e16-4883-45bf-811c-322b770038b8", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5", Pod:"coredns-76f75df574-txqcv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3db47eb392f", MAC:"5a:a5:3d:5e:84:4b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:31.396076 containerd[1722]: 2025-01-14 13:07:31.392 [INFO][5675] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5" Namespace="kube-system" Pod="coredns-76f75df574-txqcv" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--txqcv-eth0" Jan 14 13:07:31.499160 systemd[1]: run-netns-cni\x2d4b277000\x2d700a\x2d3831\x2d5c04\x2d750ffdb729ed.mount: Deactivated successfully. Jan 14 13:07:31.628955 systemd-networkd[1327]: calia178d44c265: Link UP Jan 14 13:07:31.630016 systemd-networkd[1327]: calia178d44c265: Gained carrier Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.556 [INFO][5725] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.566 [INFO][5725] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0 csi-node-driver- calico-system a334eebb-fcba-4d16-8280-bef7ba8849b0 594 0 2025-01-14 13:06:56 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.1.0-a-f264a924af csi-node-driver-z6l9p eth0 
csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia178d44c265 [] []}} ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Namespace="calico-system" Pod="csi-node-driver-z6l9p" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.566 [INFO][5725] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Namespace="calico-system" Pod="csi-node-driver-z6l9p" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.590 [INFO][5736] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" HandleID="k8s-pod-network.beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Workload="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.599 [INFO][5736] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" HandleID="k8s-pod-network.beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Workload="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-f264a924af", "pod":"csi-node-driver-z6l9p", "timestamp":"2025-01-14 13:07:31.59067424 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f264a924af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.599 [INFO][5736] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.599 [INFO][5736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.599 [INFO][5736] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f264a924af' Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.601 [INFO][5736] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.604 [INFO][5736] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.607 [INFO][5736] ipam/ipam.go 489: Trying affinity for 192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.609 [INFO][5736] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.611 [INFO][5736] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.611 [INFO][5736] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.613 [INFO][5736] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.616 [INFO][5736] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.128/26 
handle="k8s-pod-network.beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.624 [INFO][5736] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.29.131/26] block=192.168.29.128/26 handle="k8s-pod-network.beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.624 [INFO][5736] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.131/26] handle="k8s-pod-network.beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.624 [INFO][5736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:07:31.654499 containerd[1722]: 2025-01-14 13:07:31.624 [INFO][5736] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.131/26] IPv6=[] ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" HandleID="k8s-pod-network.beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Workload="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0" Jan 14 13:07:31.655630 containerd[1722]: 2025-01-14 13:07:31.625 [INFO][5725] cni-plugin/k8s.go 386: Populated endpoint ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Namespace="calico-system" Pod="csi-node-driver-z6l9p" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a334eebb-fcba-4d16-8280-bef7ba8849b0", ResourceVersion:"594", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 56, 0, 
time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"", Pod:"csi-node-driver-z6l9p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia178d44c265", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:31.655630 containerd[1722]: 2025-01-14 13:07:31.626 [INFO][5725] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.131/32] ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Namespace="calico-system" Pod="csi-node-driver-z6l9p" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0" Jan 14 13:07:31.655630 containerd[1722]: 2025-01-14 13:07:31.626 [INFO][5725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia178d44c265 ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Namespace="calico-system" Pod="csi-node-driver-z6l9p" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0" Jan 14 13:07:31.655630 containerd[1722]: 2025-01-14 13:07:31.628 [INFO][5725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Namespace="calico-system" Pod="csi-node-driver-z6l9p" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0" Jan 14 13:07:31.655630 containerd[1722]: 2025-01-14 13:07:31.628 [INFO][5725] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Namespace="calico-system" Pod="csi-node-driver-z6l9p" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a334eebb-fcba-4d16-8280-bef7ba8849b0", ResourceVersion:"594", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b", Pod:"csi-node-driver-z6l9p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calia178d44c265", MAC:"7e:52:53:fe:79:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:31.655630 containerd[1722]: 2025-01-14 13:07:31.647 [INFO][5725] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b" Namespace="calico-system" Pod="csi-node-driver-z6l9p" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-csi--node--driver--z6l9p-eth0" Jan 14 13:07:31.703140 containerd[1722]: time="2025-01-14T13:07:31.702367830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:31.703140 containerd[1722]: time="2025-01-14T13:07:31.703053038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:31.703140 containerd[1722]: time="2025-01-14T13:07:31.703069038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:31.703547 containerd[1722]: time="2025-01-14T13:07:31.703160139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:31.728909 systemd[1]: Started cri-containerd-4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00.scope - libcontainer container 4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00. 
Jan 14 13:07:31.769197 containerd[1722]: time="2025-01-14T13:07:31.769121800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-mfw8m,Uid:1798181d-c5b0-4589-8e4b-80c339c21d34,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00\"" Jan 14 13:07:31.771003 containerd[1722]: time="2025-01-14T13:07:31.770956822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 14 13:07:31.800654 containerd[1722]: time="2025-01-14T13:07:31.800547963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:31.800654 containerd[1722]: time="2025-01-14T13:07:31.800594864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:31.800654 containerd[1722]: time="2025-01-14T13:07:31.800609564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:31.800902 containerd[1722]: time="2025-01-14T13:07:31.800766666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:31.819887 systemd[1]: Started cri-containerd-2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5.scope - libcontainer container 2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5. 
Jan 14 13:07:31.859615 containerd[1722]: time="2025-01-14T13:07:31.859563145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-txqcv,Uid:400f3e16-4883-45bf-811c-322b770038b8,Namespace:kube-system,Attempt:7,} returns sandbox id \"2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5\"" Jan 14 13:07:31.862533 containerd[1722]: time="2025-01-14T13:07:31.862471078Z" level=info msg="CreateContainer within sandbox \"2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:07:32.058101 containerd[1722]: time="2025-01-14T13:07:32.057995036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:32.058101 containerd[1722]: time="2025-01-14T13:07:32.058047136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:32.058101 containerd[1722]: time="2025-01-14T13:07:32.058061137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:32.058654 containerd[1722]: time="2025-01-14T13:07:32.058148738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:32.075872 systemd[1]: Started cri-containerd-beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b.scope - libcontainer container beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b. 
Jan 14 13:07:32.098539 containerd[1722]: time="2025-01-14T13:07:32.098496603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6l9p,Uid:a334eebb-fcba-4d16-8280-bef7ba8849b0,Namespace:calico-system,Attempt:7,} returns sandbox id \"beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b\"" Jan 14 13:07:32.286096 systemd-networkd[1327]: calia58513850e7: Link UP Jan 14 13:07:32.290746 systemd-networkd[1327]: calia58513850e7: Gained carrier Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.196 [INFO][5894] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.205 [INFO][5894] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0 calico-apiserver-6dcf9d67d5- calico-apiserver f24a6417-91e6-4261-aa71-8c79526d4ae0 693 0 2025-01-14 13:06:56 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dcf9d67d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-f264a924af calico-apiserver-6dcf9d67d5-qgrsw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia58513850e7 [] []}} ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-qgrsw" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.206 [INFO][5894] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-qgrsw" 
WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.228 [INFO][5906] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" HandleID="k8s-pod-network.14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Workload="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.237 [INFO][5906] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" HandleID="k8s-pod-network.14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Workload="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-f264a924af", "pod":"calico-apiserver-6dcf9d67d5-qgrsw", "timestamp":"2025-01-14 13:07:32.228785808 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f264a924af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.237 [INFO][5906] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.237 [INFO][5906] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.237 [INFO][5906] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f264a924af' Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.239 [INFO][5906] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.245 [INFO][5906] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.249 [INFO][5906] ipam/ipam.go 489: Trying affinity for 192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.251 [INFO][5906] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.254 [INFO][5906] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.255 [INFO][5906] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.256 [INFO][5906] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486 Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.264 [INFO][5906] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.276 [INFO][5906] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.29.132/26] block=192.168.29.128/26 handle="k8s-pod-network.14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.277 [INFO][5906] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.132/26] handle="k8s-pod-network.14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.277 [INFO][5906] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:07:32.315872 containerd[1722]: 2025-01-14 13:07:32.277 [INFO][5906] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.132/26] IPv6=[] ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" HandleID="k8s-pod-network.14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Workload="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0" Jan 14 13:07:32.317025 containerd[1722]: 2025-01-14 13:07:32.281 [INFO][5894] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-qgrsw" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0", GenerateName:"calico-apiserver-6dcf9d67d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f24a6417-91e6-4261-aa71-8c79526d4ae0", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcf9d67d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"", Pod:"calico-apiserver-6dcf9d67d5-qgrsw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia58513850e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:32.317025 containerd[1722]: 2025-01-14 13:07:32.281 [INFO][5894] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.132/32] ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-qgrsw" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0" Jan 14 13:07:32.317025 containerd[1722]: 2025-01-14 13:07:32.281 [INFO][5894] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia58513850e7 ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-qgrsw" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0" Jan 14 13:07:32.317025 containerd[1722]: 2025-01-14 13:07:32.288 [INFO][5894] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-qgrsw" 
WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0" Jan 14 13:07:32.317025 containerd[1722]: 2025-01-14 13:07:32.290 [INFO][5894] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-qgrsw" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0", GenerateName:"calico-apiserver-6dcf9d67d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f24a6417-91e6-4261-aa71-8c79526d4ae0", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcf9d67d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486", Pod:"calico-apiserver-6dcf9d67d5-qgrsw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia58513850e7", MAC:"5e:84:53:2e:00:f9", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:32.317025 containerd[1722]: 2025-01-14 13:07:32.312 [INFO][5894] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486" Namespace="calico-apiserver" Pod="calico-apiserver-6dcf9d67d5-qgrsw" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--apiserver--6dcf9d67d5--qgrsw-eth0" Jan 14 13:07:32.415167 systemd-networkd[1327]: cali8346cc0f00c: Gained IPv6LL Jan 14 13:07:32.603722 kernel: bpftool[6043]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 14 13:07:32.799988 systemd-networkd[1327]: cali3db47eb392f: Gained IPv6LL Jan 14 13:07:33.374905 systemd-networkd[1327]: calia178d44c265: Gained IPv6LL Jan 14 13:07:33.438909 systemd-networkd[1327]: calia58513850e7: Gained IPv6LL Jan 14 13:07:33.760035 systemd-networkd[1327]: vxlan.calico: Link UP Jan 14 13:07:33.760048 systemd-networkd[1327]: vxlan.calico: Gained carrier Jan 14 13:07:34.637948 systemd-networkd[1327]: cali34e61147b99: Link UP Jan 14 13:07:34.638827 systemd-networkd[1327]: cali34e61147b99: Gained carrier Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.567 [INFO][6112] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0 coredns-76f75df574- kube-system 5de7d327-5d34-4f9c-b581-c52c0a00d0b7 694 0 2025-01-14 13:06:49 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-f264a924af coredns-76f75df574-g846c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali34e61147b99 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" 
Namespace="kube-system" Pod="coredns-76f75df574-g846c" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.567 [INFO][6112] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" Namespace="kube-system" Pod="coredns-76f75df574-g846c" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.595 [INFO][6126] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" HandleID="k8s-pod-network.9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" Workload="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.602 [INFO][6126] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" HandleID="k8s-pod-network.9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" Workload="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-f264a924af", "pod":"coredns-76f75df574-g846c", "timestamp":"2025-01-14 13:07:34.59577234 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f264a924af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.603 [INFO][6126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.603 [INFO][6126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.603 [INFO][6126] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f264a924af' Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.604 [INFO][6126] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.608 [INFO][6126] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.612 [INFO][6126] ipam/ipam.go 489: Trying affinity for 192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.614 [INFO][6126] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.616 [INFO][6126] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.616 [INFO][6126] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.617 [INFO][6126] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41 Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.626 [INFO][6126] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" 
host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.633 [INFO][6126] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.29.133/26] block=192.168.29.128/26 handle="k8s-pod-network.9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.633 [INFO][6126] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.133/26] handle="k8s-pod-network.9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.633 [INFO][6126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:07:34.659587 containerd[1722]: 2025-01-14 13:07:34.633 [INFO][6126] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.133/26] IPv6=[] ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" HandleID="k8s-pod-network.9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" Workload="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0" Jan 14 13:07:34.663242 containerd[1722]: 2025-01-14 13:07:34.635 [INFO][6112] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" Namespace="kube-system" Pod="coredns-76f75df574-g846c" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5de7d327-5d34-4f9c-b581-c52c0a00d0b7", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"", Pod:"coredns-76f75df574-g846c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34e61147b99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:34.663242 containerd[1722]: 2025-01-14 13:07:34.635 [INFO][6112] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.133/32] ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" Namespace="kube-system" Pod="coredns-76f75df574-g846c" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0" Jan 14 13:07:34.663242 containerd[1722]: 2025-01-14 13:07:34.635 [INFO][6112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34e61147b99 ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" Namespace="kube-system" Pod="coredns-76f75df574-g846c" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0" Jan 
14 13:07:34.663242 containerd[1722]: 2025-01-14 13:07:34.637 [INFO][6112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" Namespace="kube-system" Pod="coredns-76f75df574-g846c" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0" Jan 14 13:07:34.663242 containerd[1722]: 2025-01-14 13:07:34.637 [INFO][6112] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" Namespace="kube-system" Pod="coredns-76f75df574-g846c" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5de7d327-5d34-4f9c-b581-c52c0a00d0b7", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41", Pod:"coredns-76f75df574-g846c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali34e61147b99", MAC:"a2:f1:18:16:fd:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:34.663242 containerd[1722]: 2025-01-14 13:07:34.654 [INFO][6112] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41" Namespace="kube-system" Pod="coredns-76f75df574-g846c" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-coredns--76f75df574--g846c-eth0" Jan 14 13:07:34.967022 containerd[1722]: time="2025-01-14T13:07:34.964487159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:34.967022 containerd[1722]: time="2025-01-14T13:07:34.964550660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:34.967022 containerd[1722]: time="2025-01-14T13:07:34.964581460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:34.967022 containerd[1722]: time="2025-01-14T13:07:34.964679961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:35.004833 systemd[1]: Started cri-containerd-14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486.scope - libcontainer container 14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486. 
Jan 14 13:07:35.044611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439264843.mount: Deactivated successfully. Jan 14 13:07:35.053425 containerd[1722]: time="2025-01-14T13:07:35.052998148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:35.053425 containerd[1722]: time="2025-01-14T13:07:35.053064149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:35.053425 containerd[1722]: time="2025-01-14T13:07:35.053084649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:35.053425 containerd[1722]: time="2025-01-14T13:07:35.053166050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:35.080109 containerd[1722]: time="2025-01-14T13:07:35.079550444Z" level=info msg="CreateContainer within sandbox \"2b2ebbb1cfc28f16748d432e39b3bc6b86b213f681661c95662b2c761da348b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e616e0dea522fd4f894493cd431234099139f5c4b2f8e0781d5812c83c343be\"" Jan 14 13:07:35.081519 containerd[1722]: time="2025-01-14T13:07:35.081097362Z" level=info msg="StartContainer for \"9e616e0dea522fd4f894493cd431234099139f5c4b2f8e0781d5812c83c343be\"" Jan 14 13:07:35.102870 containerd[1722]: time="2025-01-14T13:07:35.102771304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcf9d67d5-qgrsw,Uid:f24a6417-91e6-4261-aa71-8c79526d4ae0,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486\"" Jan 14 13:07:35.110372 systemd[1]: Started cri-containerd-9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41.scope - libcontainer container 
9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41. Jan 14 13:07:35.151895 systemd[1]: Started cri-containerd-9e616e0dea522fd4f894493cd431234099139f5c4b2f8e0781d5812c83c343be.scope - libcontainer container 9e616e0dea522fd4f894493cd431234099139f5c4b2f8e0781d5812c83c343be. Jan 14 13:07:35.210153 containerd[1722]: time="2025-01-14T13:07:35.209915701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g846c,Uid:5de7d327-5d34-4f9c-b581-c52c0a00d0b7,Namespace:kube-system,Attempt:8,} returns sandbox id \"9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41\"" Jan 14 13:07:35.223201 containerd[1722]: time="2025-01-14T13:07:35.223070548Z" level=info msg="CreateContainer within sandbox \"9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:07:35.226411 containerd[1722]: time="2025-01-14T13:07:35.226226383Z" level=info msg="StartContainer for \"9e616e0dea522fd4f894493cd431234099139f5c4b2f8e0781d5812c83c343be\" returns successfully" Jan 14 13:07:35.230875 systemd-networkd[1327]: vxlan.calico: Gained IPv6LL Jan 14 13:07:35.265947 systemd-networkd[1327]: cali3d8bbbba246: Link UP Jan 14 13:07:35.266822 systemd-networkd[1327]: cali3d8bbbba246: Gained carrier Jan 14 13:07:35.291761 containerd[1722]: time="2025-01-14T13:07:35.291682014Z" level=info msg="CreateContainer within sandbox \"9a3648a3dfc4731921be652efcf5fe71a56d7ed846d9ef974aaeceb594586b41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5017102fb0e49f944e5beed3d5f5830a5073a81b37490ab9998b7798fe66d83a\"" Jan 14 13:07:35.292873 containerd[1722]: time="2025-01-14T13:07:35.292645225Z" level=info msg="StartContainer for \"5017102fb0e49f944e5beed3d5f5830a5073a81b37490ab9998b7798fe66d83a\"" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.121 [INFO][6184] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0 calico-kube-controllers-fddf9dc45- calico-system 1b81bba4-2ff3-462c-8b85-47035070eff8 691 0 2025-01-14 13:06:56 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fddf9dc45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.1.0-a-f264a924af calico-kube-controllers-fddf9dc45-phpck eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3d8bbbba246 [] []}} ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Namespace="calico-system" Pod="calico-kube-controllers-fddf9dc45-phpck" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.121 [INFO][6184] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Namespace="calico-system" Pod="calico-kube-controllers-fddf9dc45-phpck" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.187 [INFO][6253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" HandleID="k8s-pod-network.c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Workload="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.208 [INFO][6253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" 
HandleID="k8s-pod-network.c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Workload="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040ae60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-f264a924af", "pod":"calico-kube-controllers-fddf9dc45-phpck", "timestamp":"2025-01-14 13:07:35.187133546 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f264a924af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.208 [INFO][6253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.209 [INFO][6253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.209 [INFO][6253] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f264a924af' Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.212 [INFO][6253] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.220 [INFO][6253] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.230 [INFO][6253] ipam/ipam.go 489: Trying affinity for 192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.233 [INFO][6253] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.236 [INFO][6253] ipam/ipam.go 232: 
Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.237 [INFO][6253] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.238 [INFO][6253] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209 Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.244 [INFO][6253] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.254 [INFO][6253] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.29.134/26] block=192.168.29.128/26 handle="k8s-pod-network.c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.257 [INFO][6253] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.134/26] handle="k8s-pod-network.c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" host="ci-4186.1.0-a-f264a924af" Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.257 [INFO][6253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 14 13:07:35.300730 containerd[1722]: 2025-01-14 13:07:35.257 [INFO][6253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.134/26] IPv6=[] ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" HandleID="k8s-pod-network.c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Workload="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0" Jan 14 13:07:35.301627 containerd[1722]: 2025-01-14 13:07:35.259 [INFO][6184] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Namespace="calico-system" Pod="calico-kube-controllers-fddf9dc45-phpck" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0", GenerateName:"calico-kube-controllers-fddf9dc45-", Namespace:"calico-system", SelfLink:"", UID:"1b81bba4-2ff3-462c-8b85-47035070eff8", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fddf9dc45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"", Pod:"calico-kube-controllers-fddf9dc45-phpck", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d8bbbba246", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:35.301627 containerd[1722]: 2025-01-14 13:07:35.259 [INFO][6184] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.134/32] ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Namespace="calico-system" Pod="calico-kube-controllers-fddf9dc45-phpck" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0" Jan 14 13:07:35.301627 containerd[1722]: 2025-01-14 13:07:35.260 [INFO][6184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d8bbbba246 ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Namespace="calico-system" Pod="calico-kube-controllers-fddf9dc45-phpck" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0" Jan 14 13:07:35.301627 containerd[1722]: 2025-01-14 13:07:35.266 [INFO][6184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Namespace="calico-system" Pod="calico-kube-controllers-fddf9dc45-phpck" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0" Jan 14 13:07:35.301627 containerd[1722]: 2025-01-14 13:07:35.267 [INFO][6184] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Namespace="calico-system" Pod="calico-kube-controllers-fddf9dc45-phpck" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0", GenerateName:"calico-kube-controllers-fddf9dc45-", Namespace:"calico-system", SelfLink:"", UID:"1b81bba4-2ff3-462c-8b85-47035070eff8", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 6, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fddf9dc45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f264a924af", ContainerID:"c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209", Pod:"calico-kube-controllers-fddf9dc45-phpck", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d8bbbba246", MAC:"a2:94:46:33:28:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:07:35.301627 containerd[1722]: 2025-01-14 13:07:35.295 [INFO][6184] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209" Namespace="calico-system" Pod="calico-kube-controllers-fddf9dc45-phpck" WorkloadEndpoint="ci--4186.1.0--a--f264a924af-k8s-calico--kube--controllers--fddf9dc45--phpck-eth0" Jan 14 
13:07:35.337065 systemd[1]: Started cri-containerd-5017102fb0e49f944e5beed3d5f5830a5073a81b37490ab9998b7798fe66d83a.scope - libcontainer container 5017102fb0e49f944e5beed3d5f5830a5073a81b37490ab9998b7798fe66d83a. Jan 14 13:07:35.353597 containerd[1722]: time="2025-01-14T13:07:35.353326503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:35.354419 containerd[1722]: time="2025-01-14T13:07:35.354240913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:35.354419 containerd[1722]: time="2025-01-14T13:07:35.354268913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:35.354419 containerd[1722]: time="2025-01-14T13:07:35.354360614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:35.380328 systemd[1]: Started cri-containerd-c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209.scope - libcontainer container c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209. 
Jan 14 13:07:35.389518 containerd[1722]: time="2025-01-14T13:07:35.389471707Z" level=info msg="StartContainer for \"5017102fb0e49f944e5beed3d5f5830a5073a81b37490ab9998b7798fe66d83a\" returns successfully" Jan 14 13:07:35.443759 containerd[1722]: time="2025-01-14T13:07:35.443719613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddf9dc45-phpck,Uid:1b81bba4-2ff3-462c-8b85-47035070eff8,Namespace:calico-system,Attempt:8,} returns sandbox id \"c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209\"" Jan 14 13:07:36.011541 kubelet[3426]: I0114 13:07:36.011495 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-txqcv" podStartSLOduration=47.011448555 podStartE2EDuration="47.011448555s" podCreationTimestamp="2025-01-14 13:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:07:36.010903449 +0000 UTC m=+57.729694094" watchObservedRunningTime="2025-01-14 13:07:36.011448555 +0000 UTC m=+57.730239200" Jan 14 13:07:36.638891 systemd-networkd[1327]: cali34e61147b99: Gained IPv6LL Jan 14 13:07:37.022848 systemd-networkd[1327]: cali3d8bbbba246: Gained IPv6LL Jan 14 13:07:37.891384 kubelet[3426]: I0114 13:07:37.891284 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-g846c" podStartSLOduration=48.890399945 podStartE2EDuration="48.890399945s" podCreationTimestamp="2025-01-14 13:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:07:36.049333778 +0000 UTC m=+57.768124523" watchObservedRunningTime="2025-01-14 13:07:37.890399945 +0000 UTC m=+59.609190590" Jan 14 13:07:38.394015 containerd[1722]: time="2025-01-14T13:07:38.393975271Z" level=info msg="StopPodSandbox for 
\"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" Jan 14 13:07:38.396084 containerd[1722]: time="2025-01-14T13:07:38.394854680Z" level=info msg="TearDown network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" successfully" Jan 14 13:07:38.396084 containerd[1722]: time="2025-01-14T13:07:38.394879481Z" level=info msg="StopPodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" returns successfully" Jan 14 13:07:38.396084 containerd[1722]: time="2025-01-14T13:07:38.395349886Z" level=info msg="RemovePodSandbox for \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" Jan 14 13:07:38.396084 containerd[1722]: time="2025-01-14T13:07:38.395378086Z" level=info msg="Forcibly stopping sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\"" Jan 14 13:07:38.396084 containerd[1722]: time="2025-01-14T13:07:38.395462387Z" level=info msg="TearDown network for sandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" successfully" Jan 14 13:07:38.536706 containerd[1722]: time="2025-01-14T13:07:38.536643764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:38.536879 containerd[1722]: time="2025-01-14T13:07:38.536740965Z" level=info msg="RemovePodSandbox \"893f4fddea79893fc7ead239addcf327dec9b7f6b3a79f8709bee2344459bcf6\" returns successfully" Jan 14 13:07:38.537820 containerd[1722]: time="2025-01-14T13:07:38.537783977Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\"" Jan 14 13:07:38.537931 containerd[1722]: time="2025-01-14T13:07:38.537907479Z" level=info msg="TearDown network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" successfully" Jan 14 13:07:38.537931 containerd[1722]: time="2025-01-14T13:07:38.537922479Z" level=info msg="StopPodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" returns successfully" Jan 14 13:07:38.538842 containerd[1722]: time="2025-01-14T13:07:38.538812089Z" level=info msg="RemovePodSandbox for \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\"" Jan 14 13:07:38.538932 containerd[1722]: time="2025-01-14T13:07:38.538847289Z" level=info msg="Forcibly stopping sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\"" Jan 14 13:07:38.538979 containerd[1722]: time="2025-01-14T13:07:38.538943190Z" level=info msg="TearDown network for sandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" successfully" Jan 14 13:07:38.634004 containerd[1722]: time="2025-01-14T13:07:38.633956551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:38.634400 containerd[1722]: time="2025-01-14T13:07:38.634350356Z" level=info msg="RemovePodSandbox \"88c626d4109678424efb391f9736396c02deaedb98f9bb64ccccb12ee3418592\" returns successfully" Jan 14 13:07:38.635267 containerd[1722]: time="2025-01-14T13:07:38.635230266Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\"" Jan 14 13:07:38.636159 containerd[1722]: time="2025-01-14T13:07:38.635756172Z" level=info msg="TearDown network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" successfully" Jan 14 13:07:38.636159 containerd[1722]: time="2025-01-14T13:07:38.636064075Z" level=info msg="StopPodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" returns successfully" Jan 14 13:07:38.636784 containerd[1722]: time="2025-01-14T13:07:38.636727282Z" level=info msg="RemovePodSandbox for \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\"" Jan 14 13:07:38.636784 containerd[1722]: time="2025-01-14T13:07:38.636759783Z" level=info msg="Forcibly stopping sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\"" Jan 14 13:07:38.637456 containerd[1722]: time="2025-01-14T13:07:38.637191188Z" level=info msg="TearDown network for sandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" successfully" Jan 14 13:07:38.744925 containerd[1722]: time="2025-01-14T13:07:38.744387285Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:38.744925 containerd[1722]: time="2025-01-14T13:07:38.744465086Z" level=info msg="RemovePodSandbox \"88b084b44d24b45079d1cbf84ea5f501f0914456c42ad4680f30ca2711366513\" returns successfully" Jan 14 13:07:38.745625 containerd[1722]: time="2025-01-14T13:07:38.745590599Z" level=info msg="StopPodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\"" Jan 14 13:07:38.745758 containerd[1722]: time="2025-01-14T13:07:38.745738500Z" level=info msg="TearDown network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" successfully" Jan 14 13:07:38.745827 containerd[1722]: time="2025-01-14T13:07:38.745771801Z" level=info msg="StopPodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" returns successfully" Jan 14 13:07:38.746446 containerd[1722]: time="2025-01-14T13:07:38.746422408Z" level=info msg="RemovePodSandbox for \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\"" Jan 14 13:07:38.746563 containerd[1722]: time="2025-01-14T13:07:38.746546109Z" level=info msg="Forcibly stopping sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\"" Jan 14 13:07:38.747755 containerd[1722]: time="2025-01-14T13:07:38.746743611Z" level=info msg="TearDown network for sandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" successfully" Jan 14 13:07:38.839367 containerd[1722]: time="2025-01-14T13:07:38.839318646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:38.841705 containerd[1722]: time="2025-01-14T13:07:38.841653172Z" level=info msg="RemovePodSandbox \"72cd4e38d3ce7e8ec5378fd9896917ad4d1d28e1eec241e6b5afd43c586ab447\" returns successfully" Jan 14 13:07:38.842536 containerd[1722]: time="2025-01-14T13:07:38.842497081Z" level=info msg="StopPodSandbox for \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\"" Jan 14 13:07:38.842631 containerd[1722]: time="2025-01-14T13:07:38.842619382Z" level=info msg="TearDown network for sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\" successfully" Jan 14 13:07:38.842676 containerd[1722]: time="2025-01-14T13:07:38.842635883Z" level=info msg="StopPodSandbox for \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\" returns successfully" Jan 14 13:07:38.843605 containerd[1722]: time="2025-01-14T13:07:38.843578693Z" level=info msg="RemovePodSandbox for \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\"" Jan 14 13:07:38.843855 containerd[1722]: time="2025-01-14T13:07:38.843616194Z" level=info msg="Forcibly stopping sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\"" Jan 14 13:07:38.844166 containerd[1722]: time="2025-01-14T13:07:38.844111699Z" level=info msg="TearDown network for sandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\" successfully" Jan 14 13:07:38.978262 containerd[1722]: time="2025-01-14T13:07:38.978209497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:38.978470 containerd[1722]: time="2025-01-14T13:07:38.978303398Z" level=info msg="RemovePodSandbox \"94e0e1ce018f8108c74ef47630b5e04cde0c85457092c9713186995c94b6bd90\" returns successfully" Jan 14 13:07:38.979114 containerd[1722]: time="2025-01-14T13:07:38.978992906Z" level=info msg="StopPodSandbox for \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\"" Jan 14 13:07:38.979219 containerd[1722]: time="2025-01-14T13:07:38.979206508Z" level=info msg="TearDown network for sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\" successfully" Jan 14 13:07:38.979264 containerd[1722]: time="2025-01-14T13:07:38.979221709Z" level=info msg="StopPodSandbox for \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\" returns successfully" Jan 14 13:07:38.979946 containerd[1722]: time="2025-01-14T13:07:38.979919216Z" level=info msg="RemovePodSandbox for \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\"" Jan 14 13:07:38.980047 containerd[1722]: time="2025-01-14T13:07:38.979951017Z" level=info msg="Forcibly stopping sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\"" Jan 14 13:07:38.980091 containerd[1722]: time="2025-01-14T13:07:38.980022117Z" level=info msg="TearDown network for sandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\" successfully" Jan 14 13:07:39.328592 containerd[1722]: time="2025-01-14T13:07:39.328535011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:39.328976 containerd[1722]: time="2025-01-14T13:07:39.328925715Z" level=info msg="RemovePodSandbox \"2912291460a1905d64e05db03ee7ae9536b85a7955bef805ad3575c24efe2d4d\" returns successfully" Jan 14 13:07:39.329738 containerd[1722]: time="2025-01-14T13:07:39.329665923Z" level=info msg="StopPodSandbox for \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\"" Jan 14 13:07:39.329903 containerd[1722]: time="2025-01-14T13:07:39.329871626Z" level=info msg="TearDown network for sandbox \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\" successfully" Jan 14 13:07:39.329998 containerd[1722]: time="2025-01-14T13:07:39.329897726Z" level=info msg="StopPodSandbox for \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\" returns successfully" Jan 14 13:07:39.330473 containerd[1722]: time="2025-01-14T13:07:39.330360431Z" level=info msg="RemovePodSandbox for \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\"" Jan 14 13:07:39.330473 containerd[1722]: time="2025-01-14T13:07:39.330396832Z" level=info msg="Forcibly stopping sandbox \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\"" Jan 14 13:07:39.330660 containerd[1722]: time="2025-01-14T13:07:39.330503733Z" level=info msg="TearDown network for sandbox \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\" successfully" Jan 14 13:07:39.585031 containerd[1722]: time="2025-01-14T13:07:39.584886275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:39.585031 containerd[1722]: time="2025-01-14T13:07:39.584969175Z" level=info msg="RemovePodSandbox \"8cce967f659268eb868952c66093b43518967abed3a8674aa0e65752e390c44d\" returns successfully" Jan 14 13:07:39.585940 containerd[1722]: time="2025-01-14T13:07:39.585914786Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:39.586059 containerd[1722]: time="2025-01-14T13:07:39.586033887Z" level=info msg="TearDown network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" successfully" Jan 14 13:07:39.586059 containerd[1722]: time="2025-01-14T13:07:39.586056088Z" level=info msg="StopPodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" returns successfully" Jan 14 13:07:39.586369 containerd[1722]: time="2025-01-14T13:07:39.586341891Z" level=info msg="RemovePodSandbox for \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:39.586369 containerd[1722]: time="2025-01-14T13:07:39.586368391Z" level=info msg="Forcibly stopping sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\"" Jan 14 13:07:39.586494 containerd[1722]: time="2025-01-14T13:07:39.586441392Z" level=info msg="TearDown network for sandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" successfully" Jan 14 13:07:39.785009 containerd[1722]: time="2025-01-14T13:07:39.784933109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:39.785274 containerd[1722]: time="2025-01-14T13:07:39.785019310Z" level=info msg="RemovePodSandbox \"7e187c6e7e4cc68e5e1c5815dcb658fd22828653a1d0debfecdc00c70b25a125\" returns successfully" Jan 14 13:07:39.785745 containerd[1722]: time="2025-01-14T13:07:39.785707018Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" Jan 14 13:07:39.785884 containerd[1722]: time="2025-01-14T13:07:39.785840419Z" level=info msg="TearDown network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" successfully" Jan 14 13:07:39.785884 containerd[1722]: time="2025-01-14T13:07:39.785858720Z" level=info msg="StopPodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" returns successfully" Jan 14 13:07:39.786580 containerd[1722]: time="2025-01-14T13:07:39.786219224Z" level=info msg="RemovePodSandbox for \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" Jan 14 13:07:39.786580 containerd[1722]: time="2025-01-14T13:07:39.786253224Z" level=info msg="Forcibly stopping sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\"" Jan 14 13:07:39.786580 containerd[1722]: time="2025-01-14T13:07:39.786346125Z" level=info msg="TearDown network for sandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" successfully" Jan 14 13:07:40.089552 containerd[1722]: time="2025-01-14T13:07:40.089345810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:40.089552 containerd[1722]: time="2025-01-14T13:07:40.089427611Z" level=info msg="RemovePodSandbox \"440fbb0ce30782422493cdb8b188c05607cb5acb70cf0ad297f9672a1e2a678a\" returns successfully" Jan 14 13:07:40.090661 containerd[1722]: time="2025-01-14T13:07:40.090272420Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\"" Jan 14 13:07:40.090661 containerd[1722]: time="2025-01-14T13:07:40.090387322Z" level=info msg="TearDown network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" successfully" Jan 14 13:07:40.090661 containerd[1722]: time="2025-01-14T13:07:40.090401322Z" level=info msg="StopPodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" returns successfully" Jan 14 13:07:40.091214 containerd[1722]: time="2025-01-14T13:07:40.091008028Z" level=info msg="RemovePodSandbox for \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\"" Jan 14 13:07:40.091214 containerd[1722]: time="2025-01-14T13:07:40.091035829Z" level=info msg="Forcibly stopping sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\"" Jan 14 13:07:40.091214 containerd[1722]: time="2025-01-14T13:07:40.091134030Z" level=info msg="TearDown network for sandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" successfully" Jan 14 13:07:40.330766 containerd[1722]: time="2025-01-14T13:07:40.330637805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:40.330956 containerd[1722]: time="2025-01-14T13:07:40.330788707Z" level=info msg="RemovePodSandbox \"79c81f60212079ee5b42bbb5a19a90a4928cc9cd944acef8cfe4ffbcf453b83a\" returns successfully" Jan 14 13:07:40.331641 containerd[1722]: time="2025-01-14T13:07:40.331566116Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\"" Jan 14 13:07:40.331846 containerd[1722]: time="2025-01-14T13:07:40.331762518Z" level=info msg="TearDown network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" successfully" Jan 14 13:07:40.331846 containerd[1722]: time="2025-01-14T13:07:40.331785018Z" level=info msg="StopPodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" returns successfully" Jan 14 13:07:40.332436 containerd[1722]: time="2025-01-14T13:07:40.332304524Z" level=info msg="RemovePodSandbox for \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\"" Jan 14 13:07:40.332436 containerd[1722]: time="2025-01-14T13:07:40.332356925Z" level=info msg="Forcibly stopping sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\"" Jan 14 13:07:40.332617 containerd[1722]: time="2025-01-14T13:07:40.332478026Z" level=info msg="TearDown network for sandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" successfully" Jan 14 13:07:40.481942 containerd[1722]: time="2025-01-14T13:07:40.481879095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:40.583406 containerd[1722]: time="2025-01-14T13:07:40.583138126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 14 13:07:40.734812 containerd[1722]: time="2025-01-14T13:07:40.734633919Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:40.877628 containerd[1722]: time="2025-01-14T13:07:40.877470714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:07:40.877628 containerd[1722]: time="2025-01-14T13:07:40.877570915Z" level=info msg="RemovePodSandbox \"32e06375fc3a0508b7216274ae1e379f5d85e3c78e0794ab50dfbdd9b538e2d1\" returns successfully" Jan 14 13:07:40.878297 containerd[1722]: time="2025-01-14T13:07:40.878157322Z" level=info msg="StopPodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\"" Jan 14 13:07:40.878448 containerd[1722]: time="2025-01-14T13:07:40.878302124Z" level=info msg="TearDown network for sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" successfully" Jan 14 13:07:40.878448 containerd[1722]: time="2025-01-14T13:07:40.878321224Z" level=info msg="StopPodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" returns successfully" Jan 14 13:07:40.878760 containerd[1722]: time="2025-01-14T13:07:40.878719028Z" level=info msg="RemovePodSandbox for \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\"" Jan 14 13:07:40.878969 containerd[1722]: time="2025-01-14T13:07:40.878759129Z" level=info msg="Forcibly stopping sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\"" Jan 14 13:07:40.878969 containerd[1722]: time="2025-01-14T13:07:40.878864530Z" level=info msg="TearDown network for sandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" successfully" Jan 14 13:07:40.937357 containerd[1722]: time="2025-01-14T13:07:40.937259682Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:40.938909 containerd[1722]: time="2025-01-14T13:07:40.938087291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 9.166885567s" Jan 14 13:07:40.938909 containerd[1722]: time="2025-01-14T13:07:40.938140392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 14 13:07:40.940037 containerd[1722]: time="2025-01-14T13:07:40.939625709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 14 13:07:40.941258 containerd[1722]: time="2025-01-14T13:07:40.941205126Z" level=info msg="CreateContainer within sandbox \"4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 14 13:07:41.032551 containerd[1722]: time="2025-01-14T13:07:41.032366945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:41.032551 containerd[1722]: time="2025-01-14T13:07:41.032430045Z" level=info msg="RemovePodSandbox \"13845cccc907072d9a3dde8191eb90e06c9149f5c4d264c24c0a7a59a899b2ba\" returns successfully" Jan 14 13:07:41.033200 containerd[1722]: time="2025-01-14T13:07:41.032999052Z" level=info msg="StopPodSandbox for \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\"" Jan 14 13:07:41.033200 containerd[1722]: time="2025-01-14T13:07:41.033106253Z" level=info msg="TearDown network for sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\" successfully" Jan 14 13:07:41.033200 containerd[1722]: time="2025-01-14T13:07:41.033121753Z" level=info msg="StopPodSandbox for \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\" returns successfully" Jan 14 13:07:41.033573 containerd[1722]: time="2025-01-14T13:07:41.033495057Z" level=info msg="RemovePodSandbox for \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\"" Jan 14 13:07:41.033573 containerd[1722]: time="2025-01-14T13:07:41.033520357Z" level=info msg="Forcibly stopping sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\"" Jan 14 13:07:41.033736 containerd[1722]: time="2025-01-14T13:07:41.033601258Z" level=info msg="TearDown network for sandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\" successfully" Jan 14 13:07:41.334278 containerd[1722]: time="2025-01-14T13:07:41.333544209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:41.334278 containerd[1722]: time="2025-01-14T13:07:41.333642310Z" level=info msg="RemovePodSandbox \"6a68ae0c277724059b558d49be5ce8197586101aee6924426b87b461cffd3cf0\" returns successfully"
Jan 14 13:07:41.334278 containerd[1722]: time="2025-01-14T13:07:41.334215317Z" level=info msg="StopPodSandbox for \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\""
Jan 14 13:07:41.334537 containerd[1722]: time="2025-01-14T13:07:41.334346718Z" level=info msg="TearDown network for sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\" successfully"
Jan 14 13:07:41.334537 containerd[1722]: time="2025-01-14T13:07:41.334365418Z" level=info msg="StopPodSandbox for \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\" returns successfully"
Jan 14 13:07:41.335103 containerd[1722]: time="2025-01-14T13:07:41.334973025Z" level=info msg="RemovePodSandbox for \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\""
Jan 14 13:07:41.335103 containerd[1722]: time="2025-01-14T13:07:41.335024026Z" level=info msg="Forcibly stopping sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\""
Jan 14 13:07:41.335299 containerd[1722]: time="2025-01-14T13:07:41.335141127Z" level=info msg="TearDown network for sandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\" successfully"
Jan 14 13:07:41.729499 containerd[1722]: time="2025-01-14T13:07:41.729432232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:41.729657 containerd[1722]: time="2025-01-14T13:07:41.729521833Z" level=info msg="RemovePodSandbox \"d3c1a7b6a2bd995bb3555871f8977949489fac992df90d7c4aaf2fa9d8366ece\" returns successfully"
Jan 14 13:07:41.731028 containerd[1722]: time="2025-01-14T13:07:41.730793447Z" level=info msg="StopPodSandbox for \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\""
Jan 14 13:07:41.731028 containerd[1722]: time="2025-01-14T13:07:41.730925948Z" level=info msg="TearDown network for sandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\" successfully"
Jan 14 13:07:41.731028 containerd[1722]: time="2025-01-14T13:07:41.730941149Z" level=info msg="StopPodSandbox for \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\" returns successfully"
Jan 14 13:07:41.731712 containerd[1722]: time="2025-01-14T13:07:41.731651556Z" level=info msg="RemovePodSandbox for \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\""
Jan 14 13:07:41.731816 containerd[1722]: time="2025-01-14T13:07:41.731717557Z" level=info msg="Forcibly stopping sandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\""
Jan 14 13:07:41.731865 containerd[1722]: time="2025-01-14T13:07:41.731820258Z" level=info msg="TearDown network for sandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\" successfully"
Jan 14 13:07:41.881970 containerd[1722]: time="2025-01-14T13:07:41.881854834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:41.882752 containerd[1722]: time="2025-01-14T13:07:41.882035736Z" level=info msg="RemovePodSandbox \"9bd699c9d53d6f28c9ec2d53824b8986b9edaf06a074c6937d059cf0aca7ea67\" returns successfully"
Jan 14 13:07:41.883285 containerd[1722]: time="2025-01-14T13:07:41.883251750Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\""
Jan 14 13:07:41.883402 containerd[1722]: time="2025-01-14T13:07:41.883370651Z" level=info msg="TearDown network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" successfully"
Jan 14 13:07:41.883594 containerd[1722]: time="2025-01-14T13:07:41.883389652Z" level=info msg="StopPodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" returns successfully"
Jan 14 13:07:41.885285 containerd[1722]: time="2025-01-14T13:07:41.884209661Z" level=info msg="RemovePodSandbox for \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\""
Jan 14 13:07:41.885285 containerd[1722]: time="2025-01-14T13:07:41.884240261Z" level=info msg="Forcibly stopping sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\""
Jan 14 13:07:41.885285 containerd[1722]: time="2025-01-14T13:07:41.884325762Z" level=info msg="TearDown network for sandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" successfully"
Jan 14 13:07:42.138153 containerd[1722]: time="2025-01-14T13:07:42.137986292Z" level=info msg="CreateContainer within sandbox \"4eae89ca9bae13ae7f7d258b02637a87b0e670b47d6d974abab54d00d8363d00\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b071b2076a9b931409b8f98f32bec829b685d8dd9bc6fb566d3b220eda9a8d56\""
Jan 14 13:07:42.140565 containerd[1722]: time="2025-01-14T13:07:42.139313907Z" level=info msg="StartContainer for \"b071b2076a9b931409b8f98f32bec829b685d8dd9bc6fb566d3b220eda9a8d56\""
Jan 14 13:07:42.188897 systemd[1]: Started cri-containerd-b071b2076a9b931409b8f98f32bec829b685d8dd9bc6fb566d3b220eda9a8d56.scope - libcontainer container b071b2076a9b931409b8f98f32bec829b685d8dd9bc6fb566d3b220eda9a8d56.
Jan 14 13:07:42.381868 containerd[1722]: time="2025-01-14T13:07:42.381741601Z" level=info msg="StartContainer for \"b071b2076a9b931409b8f98f32bec829b685d8dd9bc6fb566d3b220eda9a8d56\" returns successfully"
Jan 14 13:07:42.699961 containerd[1722]: time="2025-01-14T13:07:42.699720034Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:42.699961 containerd[1722]: time="2025-01-14T13:07:42.699806435Z" level=info msg="RemovePodSandbox \"b6f446474d9f820c07fa725e49d60e396da1d0b88a81b61ae43331cd82c53705\" returns successfully"
Jan 14 13:07:42.702103 containerd[1722]: time="2025-01-14T13:07:42.701548154Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\""
Jan 14 13:07:42.702103 containerd[1722]: time="2025-01-14T13:07:42.701684756Z" level=info msg="TearDown network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" successfully"
Jan 14 13:07:42.702103 containerd[1722]: time="2025-01-14T13:07:42.701723156Z" level=info msg="StopPodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" returns successfully"
Jan 14 13:07:42.703040 containerd[1722]: time="2025-01-14T13:07:42.702339163Z" level=info msg="RemovePodSandbox for \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\""
Jan 14 13:07:42.703040 containerd[1722]: time="2025-01-14T13:07:42.702366564Z" level=info msg="Forcibly stopping sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\""
Jan 14 13:07:42.703040 containerd[1722]: time="2025-01-14T13:07:42.702447364Z" level=info msg="TearDown network for sandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" successfully"
Jan 14 13:07:42.830889 containerd[1722]: time="2025-01-14T13:07:42.830835491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:42.831279 containerd[1722]: time="2025-01-14T13:07:42.831129394Z" level=info msg="RemovePodSandbox \"b04dea5ab958cc42542a865a2bcb96fb2540eb6fd9c0005a49648d8d9f2208fd\" returns successfully"
Jan 14 13:07:42.832226 containerd[1722]: time="2025-01-14T13:07:42.832109205Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\""
Jan 14 13:07:42.832750 containerd[1722]: time="2025-01-14T13:07:42.832370208Z" level=info msg="TearDown network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" successfully"
Jan 14 13:07:42.832750 containerd[1722]: time="2025-01-14T13:07:42.832390908Z" level=info msg="StopPodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" returns successfully"
Jan 14 13:07:42.834560 containerd[1722]: time="2025-01-14T13:07:42.833088416Z" level=info msg="RemovePodSandbox for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\""
Jan 14 13:07:42.834560 containerd[1722]: time="2025-01-14T13:07:42.833117516Z" level=info msg="Forcibly stopping sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\""
Jan 14 13:07:42.834560 containerd[1722]: time="2025-01-14T13:07:42.833203517Z" level=info msg="TearDown network for sandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" successfully"
Jan 14 13:07:43.037464 containerd[1722]: time="2025-01-14T13:07:43.037320185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:43.037464 containerd[1722]: time="2025-01-14T13:07:43.037395286Z" level=info msg="RemovePodSandbox \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" returns successfully"
Jan 14 13:07:43.038453 containerd[1722]: time="2025-01-14T13:07:43.038401297Z" level=error msg="PodSandboxStatus for \"908116a438153b08c1ad34b2a37763b4fa0e4de4ed90745767bd8e5773ebbed3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox: not found"
Jan 14 13:07:43.039063 containerd[1722]: time="2025-01-14T13:07:43.039034805Z" level=info msg="StopPodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\""
Jan 14 13:07:43.039164 containerd[1722]: time="2025-01-14T13:07:43.039140106Z" level=info msg="TearDown network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" successfully"
Jan 14 13:07:43.039214 containerd[1722]: time="2025-01-14T13:07:43.039166806Z" level=info msg="StopPodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" returns successfully"
Jan 14 13:07:43.039498 containerd[1722]: time="2025-01-14T13:07:43.039471109Z" level=info msg="RemovePodSandbox for \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\""
Jan 14 13:07:43.039577 containerd[1722]: time="2025-01-14T13:07:43.039497710Z" level=info msg="Forcibly stopping sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\""
Jan 14 13:07:43.039619 containerd[1722]: time="2025-01-14T13:07:43.039577111Z" level=info msg="TearDown network for sandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" successfully"
Jan 14 13:07:43.242244 containerd[1722]: time="2025-01-14T13:07:43.241899859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:43.242244 containerd[1722]: time="2025-01-14T13:07:43.241997960Z" level=info msg="RemovePodSandbox \"ac0828b0689ee6162ce030adb64fbc750cf05b7f83444ad9e62e03931b89f6db\" returns successfully"
Jan 14 13:07:43.244792 containerd[1722]: time="2025-01-14T13:07:43.243087772Z" level=info msg="StopPodSandbox for \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\""
Jan 14 13:07:43.244792 containerd[1722]: time="2025-01-14T13:07:43.243229473Z" level=info msg="TearDown network for sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\" successfully"
Jan 14 13:07:43.244792 containerd[1722]: time="2025-01-14T13:07:43.243295874Z" level=info msg="StopPodSandbox for \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\" returns successfully"
Jan 14 13:07:43.244792 containerd[1722]: time="2025-01-14T13:07:43.243713979Z" level=info msg="RemovePodSandbox for \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\""
Jan 14 13:07:43.244792 containerd[1722]: time="2025-01-14T13:07:43.243745679Z" level=info msg="Forcibly stopping sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\""
Jan 14 13:07:43.244792 containerd[1722]: time="2025-01-14T13:07:43.243845480Z" level=info msg="TearDown network for sandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\" successfully"
Jan 14 13:07:43.385047 containerd[1722]: time="2025-01-14T13:07:43.384915448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:43.385047 containerd[1722]: time="2025-01-14T13:07:43.385038749Z" level=info msg="RemovePodSandbox \"60dead41b29d4b4e1d3aee837f286f9f0bc00eb66009f78454aa8a9907a9c4ad\" returns successfully"
Jan 14 13:07:43.386752 containerd[1722]: time="2025-01-14T13:07:43.386679967Z" level=info msg="StopPodSandbox for \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\""
Jan 14 13:07:43.387077 containerd[1722]: time="2025-01-14T13:07:43.387042071Z" level=info msg="TearDown network for sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\" successfully"
Jan 14 13:07:43.387187 containerd[1722]: time="2025-01-14T13:07:43.387171073Z" level=info msg="StopPodSandbox for \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\" returns successfully"
Jan 14 13:07:43.387750 containerd[1722]: time="2025-01-14T13:07:43.387660078Z" level=info msg="RemovePodSandbox for \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\""
Jan 14 13:07:43.388194 containerd[1722]: time="2025-01-14T13:07:43.388090683Z" level=info msg="Forcibly stopping sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\""
Jan 14 13:07:43.430866 containerd[1722]: time="2025-01-14T13:07:43.388358786Z" level=info msg="TearDown network for sandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\" successfully"
Jan 14 13:07:43.841917 containerd[1722]: time="2025-01-14T13:07:43.841810425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:43.841917 containerd[1722]: time="2025-01-14T13:07:43.841900726Z" level=info msg="RemovePodSandbox \"401053836454992487ddc2d9404209d09e1c7ea8026b8db4b9b0151bacb682a5\" returns successfully"
Jan 14 13:07:43.842917 containerd[1722]: time="2025-01-14T13:07:43.842881737Z" level=info msg="StopPodSandbox for \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\""
Jan 14 13:07:43.843079 containerd[1722]: time="2025-01-14T13:07:43.843057639Z" level=info msg="TearDown network for sandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\" successfully"
Jan 14 13:07:43.843140 containerd[1722]: time="2025-01-14T13:07:43.843080039Z" level=info msg="StopPodSandbox for \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\" returns successfully"
Jan 14 13:07:43.843938 containerd[1722]: time="2025-01-14T13:07:43.843908448Z" level=info msg="RemovePodSandbox for \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\""
Jan 14 13:07:43.844042 containerd[1722]: time="2025-01-14T13:07:43.843943348Z" level=info msg="Forcibly stopping sandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\""
Jan 14 13:07:43.844088 containerd[1722]: time="2025-01-14T13:07:43.844026849Z" level=info msg="TearDown network for sandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\" successfully"
Jan 14 13:07:44.039850 containerd[1722]: time="2025-01-14T13:07:44.039407020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:44.039850 containerd[1722]: time="2025-01-14T13:07:44.039522122Z" level=info msg="RemovePodSandbox \"10fc8c2c8c72f457e86b6e47bdf25f8046dfa0cdccf3e971a35c42e19bfeeb0e\" returns successfully"
Jan 14 13:07:44.041071 containerd[1722]: time="2025-01-14T13:07:44.040587034Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\""
Jan 14 13:07:44.041071 containerd[1722]: time="2025-01-14T13:07:44.040754135Z" level=info msg="TearDown network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" successfully"
Jan 14 13:07:44.041071 containerd[1722]: time="2025-01-14T13:07:44.040772736Z" level=info msg="StopPodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" returns successfully"
Jan 14 13:07:44.041205 containerd[1722]: time="2025-01-14T13:07:44.041128940Z" level=info msg="RemovePodSandbox for \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\""
Jan 14 13:07:44.041205 containerd[1722]: time="2025-01-14T13:07:44.041154740Z" level=info msg="Forcibly stopping sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\""
Jan 14 13:07:44.041278 containerd[1722]: time="2025-01-14T13:07:44.041236541Z" level=info msg="TearDown network for sandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" successfully"
Jan 14 13:07:44.046724 kubelet[3426]: I0114 13:07:44.046480 3426 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 14 13:07:44.183939 containerd[1722]: time="2025-01-14T13:07:44.183843925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:44.183939 containerd[1722]: time="2025-01-14T13:07:44.183941426Z" level=info msg="RemovePodSandbox \"7035a9f1b9aa1a6e87906e6d8b1b7db71dcfe69461af3d911e6ec6fa5356f0f4\" returns successfully"
Jan 14 13:07:44.184685 containerd[1722]: time="2025-01-14T13:07:44.184601834Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\""
Jan 14 13:07:44.184815 containerd[1722]: time="2025-01-14T13:07:44.184766836Z" level=info msg="TearDown network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" successfully"
Jan 14 13:07:44.184815 containerd[1722]: time="2025-01-14T13:07:44.184788536Z" level=info msg="StopPodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" returns successfully"
Jan 14 13:07:44.185864 containerd[1722]: time="2025-01-14T13:07:44.185370042Z" level=info msg="RemovePodSandbox for \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\""
Jan 14 13:07:44.185864 containerd[1722]: time="2025-01-14T13:07:44.185406943Z" level=info msg="Forcibly stopping sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\""
Jan 14 13:07:44.185864 containerd[1722]: time="2025-01-14T13:07:44.185510044Z" level=info msg="TearDown network for sandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" successfully"
Jan 14 13:07:44.394228 containerd[1722]: time="2025-01-14T13:07:44.394140562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:44.394404 containerd[1722]: time="2025-01-14T13:07:44.394293664Z" level=info msg="RemovePodSandbox \"868208a41705c4dd3dd7b94c9546d83f888f35b741e42b559544b6b45826b8e6\" returns successfully"
Jan 14 13:07:44.395422 containerd[1722]: time="2025-01-14T13:07:44.395036272Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\""
Jan 14 13:07:44.395422 containerd[1722]: time="2025-01-14T13:07:44.395160573Z" level=info msg="TearDown network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" successfully"
Jan 14 13:07:44.395422 containerd[1722]: time="2025-01-14T13:07:44.395176174Z" level=info msg="StopPodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" returns successfully"
Jan 14 13:07:44.395962 containerd[1722]: time="2025-01-14T13:07:44.395817081Z" level=info msg="RemovePodSandbox for \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\""
Jan 14 13:07:44.396120 containerd[1722]: time="2025-01-14T13:07:44.396030783Z" level=info msg="Forcibly stopping sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\""
Jan 14 13:07:44.396330 containerd[1722]: time="2025-01-14T13:07:44.396299086Z" level=info msg="TearDown network for sandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" successfully"
Jan 14 13:07:44.702162 containerd[1722]: time="2025-01-14T13:07:44.700721869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:44.702162 containerd[1722]: time="2025-01-14T13:07:44.700806770Z" level=info msg="RemovePodSandbox \"a24f92cf1f0adc0d7b0f55a47ae1753573c671ead118eacb5ad9100b3bb4f46e\" returns successfully"
Jan 14 13:07:44.702162 containerd[1722]: time="2025-01-14T13:07:44.701338276Z" level=info msg="StopPodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\""
Jan 14 13:07:44.702162 containerd[1722]: time="2025-01-14T13:07:44.701530578Z" level=info msg="TearDown network for sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" successfully"
Jan 14 13:07:44.702162 containerd[1722]: time="2025-01-14T13:07:44.701547378Z" level=info msg="StopPodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" returns successfully"
Jan 14 13:07:44.702162 containerd[1722]: time="2025-01-14T13:07:44.701876782Z" level=info msg="RemovePodSandbox for \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\""
Jan 14 13:07:44.702162 containerd[1722]: time="2025-01-14T13:07:44.701900982Z" level=info msg="Forcibly stopping sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\""
Jan 14 13:07:44.702162 containerd[1722]: time="2025-01-14T13:07:44.701987483Z" level=info msg="TearDown network for sandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" successfully"
Jan 14 13:07:44.835446 containerd[1722]: time="2025-01-14T13:07:44.835372465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:44.835641 containerd[1722]: time="2025-01-14T13:07:44.835477466Z" level=info msg="RemovePodSandbox \"52d895ee0653af5cde877db7668f1d488926a8e86e6acbef9d7d0da837f954b3\" returns successfully"
Jan 14 13:07:44.837132 containerd[1722]: time="2025-01-14T13:07:44.836795481Z" level=info msg="StopPodSandbox for \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\""
Jan 14 13:07:44.837132 containerd[1722]: time="2025-01-14T13:07:44.836977983Z" level=info msg="TearDown network for sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\" successfully"
Jan 14 13:07:44.837132 containerd[1722]: time="2025-01-14T13:07:44.836998283Z" level=info msg="StopPodSandbox for \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\" returns successfully"
Jan 14 13:07:44.838724 containerd[1722]: time="2025-01-14T13:07:44.838536300Z" level=info msg="RemovePodSandbox for \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\""
Jan 14 13:07:44.838724 containerd[1722]: time="2025-01-14T13:07:44.838606301Z" level=info msg="Forcibly stopping sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\""
Jan 14 13:07:44.839082 containerd[1722]: time="2025-01-14T13:07:44.838872304Z" level=info msg="TearDown network for sandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\" successfully"
Jan 14 13:07:45.229754 containerd[1722]: time="2025-01-14T13:07:45.229664946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:45.230285 containerd[1722]: time="2025-01-14T13:07:45.229769947Z" level=info msg="RemovePodSandbox \"00c1f3fc9d5dc7acefc22e5a743249c169b693f66fc943c4c90f59209c0c3c15\" returns successfully"
Jan 14 13:07:45.230285 containerd[1722]: time="2025-01-14T13:07:45.230275253Z" level=info msg="StopPodSandbox for \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\""
Jan 14 13:07:45.230427 containerd[1722]: time="2025-01-14T13:07:45.230393254Z" level=info msg="TearDown network for sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\" successfully"
Jan 14 13:07:45.230427 containerd[1722]: time="2025-01-14T13:07:45.230414155Z" level=info msg="StopPodSandbox for \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\" returns successfully"
Jan 14 13:07:45.230856 containerd[1722]: time="2025-01-14T13:07:45.230794059Z" level=info msg="RemovePodSandbox for \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\""
Jan 14 13:07:45.230856 containerd[1722]: time="2025-01-14T13:07:45.230823959Z" level=info msg="Forcibly stopping sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\""
Jan 14 13:07:45.231033 containerd[1722]: time="2025-01-14T13:07:45.230909660Z" level=info msg="TearDown network for sandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\" successfully"
Jan 14 13:07:45.687227 containerd[1722]: time="2025-01-14T13:07:45.686853826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:45.687227 containerd[1722]: time="2025-01-14T13:07:45.686957828Z" level=info msg="RemovePodSandbox \"c909ce86315b760726da338a954f0f8d45b082873c76e307dfd787cca33f111a\" returns successfully"
Jan 14 13:07:45.688193 containerd[1722]: time="2025-01-14T13:07:45.687835837Z" level=info msg="StopPodSandbox for \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\""
Jan 14 13:07:45.688193 containerd[1722]: time="2025-01-14T13:07:45.687958539Z" level=info msg="TearDown network for sandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\" successfully"
Jan 14 13:07:45.688193 containerd[1722]: time="2025-01-14T13:07:45.687973739Z" level=info msg="StopPodSandbox for \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\" returns successfully"
Jan 14 13:07:45.688614 containerd[1722]: time="2025-01-14T13:07:45.688592346Z" level=info msg="RemovePodSandbox for \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\""
Jan 14 13:07:45.689024 containerd[1722]: time="2025-01-14T13:07:45.688800748Z" level=info msg="Forcibly stopping sandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\""
Jan 14 13:07:45.689024 containerd[1722]: time="2025-01-14T13:07:45.688900949Z" level=info msg="TearDown network for sandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\" successfully"
Jan 14 13:07:45.845094 containerd[1722]: time="2025-01-14T13:07:45.845044884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:45.845567 containerd[1722]: time="2025-01-14T13:07:45.845355888Z" level=info msg="RemovePodSandbox \"9943b6e7851c1e6fc0f6523f4781b2799e23ba02f01a6361fb6b532e086b79d5\" returns successfully"
Jan 14 13:07:45.847394 containerd[1722]: time="2025-01-14T13:07:45.847219908Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\""
Jan 14 13:07:45.847394 containerd[1722]: time="2025-01-14T13:07:45.847353110Z" level=info msg="TearDown network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" successfully"
Jan 14 13:07:45.847394 containerd[1722]: time="2025-01-14T13:07:45.847367710Z" level=info msg="StopPodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" returns successfully"
Jan 14 13:07:45.849307 containerd[1722]: time="2025-01-14T13:07:45.848831926Z" level=info msg="RemovePodSandbox for \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\""
Jan 14 13:07:45.849307 containerd[1722]: time="2025-01-14T13:07:45.848957128Z" level=info msg="Forcibly stopping sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\""
Jan 14 13:07:45.849423 containerd[1722]: time="2025-01-14T13:07:45.849093829Z" level=info msg="TearDown network for sandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" successfully"
Jan 14 13:07:45.931041 containerd[1722]: time="2025-01-14T13:07:45.930813937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:45.931041 containerd[1722]: time="2025-01-14T13:07:45.930898238Z" level=info msg="RemovePodSandbox \"6549a216dd2bb5478561a03000a5a674137edc074fd7f42e0c02e8a7b0abe1fa\" returns successfully"
Jan 14 13:07:45.931679 containerd[1722]: time="2025-01-14T13:07:45.931600746Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\""
Jan 14 13:07:45.931862 containerd[1722]: time="2025-01-14T13:07:45.931745348Z" level=info msg="TearDown network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" successfully"
Jan 14 13:07:45.931862 containerd[1722]: time="2025-01-14T13:07:45.931762648Z" level=info msg="StopPodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" returns successfully"
Jan 14 13:07:45.932307 containerd[1722]: time="2025-01-14T13:07:45.932274654Z" level=info msg="RemovePodSandbox for \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\""
Jan 14 13:07:45.932403 containerd[1722]: time="2025-01-14T13:07:45.932309554Z" level=info msg="Forcibly stopping sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\""
Jan 14 13:07:45.932458 containerd[1722]: time="2025-01-14T13:07:45.932397255Z" level=info msg="TearDown network for sandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" successfully"
Jan 14 13:07:46.184929 containerd[1722]: time="2025-01-14T13:07:46.184775859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:46.184929 containerd[1722]: time="2025-01-14T13:07:46.184881160Z" level=info msg="RemovePodSandbox \"2656ff18f848fc6384b32b1837303d191e663eb66b352f55f37bba2cfe94c898\" returns successfully"
Jan 14 13:07:46.185871 containerd[1722]: time="2025-01-14T13:07:46.185506567Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\""
Jan 14 13:07:46.185871 containerd[1722]: time="2025-01-14T13:07:46.185650369Z" level=info msg="TearDown network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" successfully"
Jan 14 13:07:46.185871 containerd[1722]: time="2025-01-14T13:07:46.185669469Z" level=info msg="StopPodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" returns successfully"
Jan 14 13:07:46.186643 containerd[1722]: time="2025-01-14T13:07:46.186507578Z" level=info msg="RemovePodSandbox for \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\""
Jan 14 13:07:46.186643 containerd[1722]: time="2025-01-14T13:07:46.186544879Z" level=info msg="Forcibly stopping sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\""
Jan 14 13:07:46.186939 containerd[1722]: time="2025-01-14T13:07:46.186667180Z" level=info msg="TearDown network for sandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" successfully"
Jan 14 13:07:46.581661 containerd[1722]: time="2025-01-14T13:07:46.581514868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:07:46.631035 containerd[1722]: time="2025-01-14T13:07:46.630960917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 14 13:07:46.787588 containerd[1722]: time="2025-01-14T13:07:46.787016951Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:46.787588 containerd[1722]: time="2025-01-14T13:07:46.787129952Z" level=info msg="RemovePodSandbox \"297e0ea69f4196284bd7ed7a4cd6c4df897a462f539ff04ce20e531866919e00\" returns successfully"
Jan 14 13:07:46.788090 containerd[1722]: time="2025-01-14T13:07:46.788059963Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\""
Jan 14 13:07:46.788228 containerd[1722]: time="2025-01-14T13:07:46.788194964Z" level=info msg="TearDown network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" successfully"
Jan 14 13:07:46.788228 containerd[1722]: time="2025-01-14T13:07:46.788229965Z" level=info msg="StopPodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" returns successfully"
Jan 14 13:07:46.788699 containerd[1722]: time="2025-01-14T13:07:46.788609969Z" level=info msg="RemovePodSandbox for \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\""
Jan 14 13:07:46.788699 containerd[1722]: time="2025-01-14T13:07:46.788641969Z" level=info msg="Forcibly stopping sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\""
Jan 14 13:07:46.788860 containerd[1722]: time="2025-01-14T13:07:46.788760971Z" level=info msg="TearDown network for sandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" successfully"
Jan 14 13:07:46.793471 containerd[1722]: time="2025-01-14T13:07:46.793412022Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:07:46.928923 containerd[1722]: time="2025-01-14T13:07:46.928868927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:46.929167 containerd[1722]: time="2025-01-14T13:07:46.928996929Z" level=info msg="RemovePodSandbox \"c5f8c1d11edebee3180699a40444b924d7320ba770af35aeb1ed34f1aff31a60\" returns successfully"
Jan 14 13:07:46.929604 containerd[1722]: time="2025-01-14T13:07:46.929572935Z" level=info msg="StopPodSandbox for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\""
Jan 14 13:07:46.929805 containerd[1722]: time="2025-01-14T13:07:46.929732937Z" level=info msg="TearDown network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" successfully"
Jan 14 13:07:46.929805 containerd[1722]: time="2025-01-14T13:07:46.929799938Z" level=info msg="StopPodSandbox for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" returns successfully"
Jan 14 13:07:46.930279 containerd[1722]: time="2025-01-14T13:07:46.930240943Z" level=info msg="RemovePodSandbox for \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\""
Jan 14 13:07:46.930279 containerd[1722]: time="2025-01-14T13:07:46.930273943Z" level=info msg="Forcibly stopping sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\""
Jan 14 13:07:46.930414 containerd[1722]: time="2025-01-14T13:07:46.930358944Z" level=info msg="TearDown network for sandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" successfully"
Jan 14 13:07:46.987360 containerd[1722]: time="2025-01-14T13:07:46.987210976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:07:46.989167 containerd[1722]: time="2025-01-14T13:07:46.988932995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 6.049260486s"
Jan 14 13:07:46.989167 containerd[1722]: time="2025-01-14T13:07:46.989012996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 14 13:07:46.991263 containerd[1722]: time="2025-01-14T13:07:46.991207020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 14 13:07:46.993068 containerd[1722]: time="2025-01-14T13:07:46.993033040Z" level=info msg="CreateContainer within sandbox \"beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 14 13:07:47.230405 containerd[1722]: time="2025-01-14T13:07:47.230210976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:07:47.230405 containerd[1722]: time="2025-01-14T13:07:47.230300177Z" level=info msg="RemovePodSandbox \"dad4ef307a2cf5ff8a735c95b2af7b3799040c15b15d42cb5d1b38f97c7f8254\" returns successfully" Jan 14 13:07:47.231316 containerd[1722]: time="2025-01-14T13:07:47.231278288Z" level=info msg="StopPodSandbox for \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\"" Jan 14 13:07:47.231621 containerd[1722]: time="2025-01-14T13:07:47.231417889Z" level=info msg="TearDown network for sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\" successfully" Jan 14 13:07:47.231621 containerd[1722]: time="2025-01-14T13:07:47.231444190Z" level=info msg="StopPodSandbox for \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\" returns successfully" Jan 14 13:07:47.232233 containerd[1722]: time="2025-01-14T13:07:47.232176498Z" level=info msg="RemovePodSandbox for \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\"" Jan 14 13:07:47.232337 containerd[1722]: time="2025-01-14T13:07:47.232237598Z" level=info msg="Forcibly stopping sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\"" Jan 14 13:07:47.232434 containerd[1722]: time="2025-01-14T13:07:47.232372600Z" level=info msg="TearDown network for sandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\" successfully" Jan 14 13:07:47.388609 containerd[1722]: time="2025-01-14T13:07:47.388253232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:47.388609 containerd[1722]: time="2025-01-14T13:07:47.388364333Z" level=info msg="RemovePodSandbox \"319248c31c9422d2db9d6ed743c61c2664efa55516f1b78a789d69a120a52eb5\" returns successfully" Jan 14 13:07:47.390462 containerd[1722]: time="2025-01-14T13:07:47.390177253Z" level=info msg="StopPodSandbox for \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\"" Jan 14 13:07:47.390462 containerd[1722]: time="2025-01-14T13:07:47.390362255Z" level=info msg="TearDown network for sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\" successfully" Jan 14 13:07:47.390462 containerd[1722]: time="2025-01-14T13:07:47.390380856Z" level=info msg="StopPodSandbox for \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\" returns successfully" Jan 14 13:07:47.391503 containerd[1722]: time="2025-01-14T13:07:47.391448568Z" level=info msg="RemovePodSandbox for \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\"" Jan 14 13:07:47.391751 containerd[1722]: time="2025-01-14T13:07:47.391483068Z" level=info msg="Forcibly stopping sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\"" Jan 14 13:07:47.392183 containerd[1722]: time="2025-01-14T13:07:47.391896672Z" level=info msg="TearDown network for sandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\" successfully" Jan 14 13:07:47.446201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189431713.mount: Deactivated successfully. Jan 14 13:07:47.687918 containerd[1722]: time="2025-01-14T13:07:47.687855261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:47.733019 containerd[1722]: time="2025-01-14T13:07:47.687951162Z" level=info msg="RemovePodSandbox \"02b7fab916c172c3f9b83ee7e5f956a1036020c5895e1bce14b05c42ff0fd02e\" returns successfully" Jan 14 13:07:47.733019 containerd[1722]: time="2025-01-14T13:07:47.689056674Z" level=info msg="StopPodSandbox for \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\"" Jan 14 13:07:47.733019 containerd[1722]: time="2025-01-14T13:07:47.689200376Z" level=info msg="TearDown network for sandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\" successfully" Jan 14 13:07:47.733019 containerd[1722]: time="2025-01-14T13:07:47.689261677Z" level=info msg="StopPodSandbox for \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\" returns successfully" Jan 14 13:07:47.733019 containerd[1722]: time="2025-01-14T13:07:47.689672781Z" level=info msg="RemovePodSandbox for \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\"" Jan 14 13:07:47.733019 containerd[1722]: time="2025-01-14T13:07:47.689722182Z" level=info msg="Forcibly stopping sandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\"" Jan 14 13:07:47.733019 containerd[1722]: time="2025-01-14T13:07:47.689803883Z" level=info msg="TearDown network for sandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\" successfully" Jan 14 13:07:47.734865 containerd[1722]: time="2025-01-14T13:07:47.734808083Z" level=info msg="CreateContainer within sandbox \"beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eb2fb3760fece18f1a92bd8d0de76d10c789a079a1f0ba86d5292c338d97bdef\"" Jan 14 13:07:47.736054 containerd[1722]: time="2025-01-14T13:07:47.735469190Z" level=info msg="StartContainer for \"eb2fb3760fece18f1a92bd8d0de76d10c789a079a1f0ba86d5292c338d97bdef\"" Jan 14 13:07:47.777881 systemd[1]: Started 
cri-containerd-eb2fb3760fece18f1a92bd8d0de76d10c789a079a1f0ba86d5292c338d97bdef.scope - libcontainer container eb2fb3760fece18f1a92bd8d0de76d10c789a079a1f0ba86d5292c338d97bdef. Jan 14 13:07:47.934987 containerd[1722]: time="2025-01-14T13:07:47.934928007Z" level=info msg="StartContainer for \"eb2fb3760fece18f1a92bd8d0de76d10c789a079a1f0ba86d5292c338d97bdef\" returns successfully" Jan 14 13:07:48.333735 containerd[1722]: time="2025-01-14T13:07:48.333659237Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:07:48.333944 containerd[1722]: time="2025-01-14T13:07:48.333796639Z" level=info msg="RemovePodSandbox \"8b86669f956bd4d8b19b2071013fdf1cc9e6857f194ff1e0134739104da99277\" returns successfully" Jan 14 13:07:48.334503 containerd[1722]: time="2025-01-14T13:07:48.334450346Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" Jan 14 13:07:48.334657 containerd[1722]: time="2025-01-14T13:07:48.334614248Z" level=info msg="TearDown network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" successfully" Jan 14 13:07:48.334657 containerd[1722]: time="2025-01-14T13:07:48.334633948Z" level=info msg="StopPodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" returns successfully" Jan 14 13:07:48.335186 containerd[1722]: time="2025-01-14T13:07:48.335067753Z" level=info msg="RemovePodSandbox for \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" Jan 14 13:07:48.335186 containerd[1722]: time="2025-01-14T13:07:48.335106253Z" level=info msg="Forcibly stopping sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\"" Jan 14 13:07:48.335390 containerd[1722]: time="2025-01-14T13:07:48.335204354Z" level=info msg="TearDown 
network for sandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" successfully" Jan 14 13:07:48.577869 containerd[1722]: time="2025-01-14T13:07:48.577647048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:07:48.577869 containerd[1722]: time="2025-01-14T13:07:48.577772950Z" level=info msg="RemovePodSandbox \"93baabdeeab0468ed46864a3ed5e57b1a3c6dac7f57d1de728fe636c9cd947c4\" returns successfully" Jan 14 13:07:48.578359 containerd[1722]: time="2025-01-14T13:07:48.578328056Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\"" Jan 14 13:07:48.578514 containerd[1722]: time="2025-01-14T13:07:48.578466857Z" level=info msg="TearDown network for sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" successfully" Jan 14 13:07:48.578514 containerd[1722]: time="2025-01-14T13:07:48.578496458Z" level=info msg="StopPodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" returns successfully" Jan 14 13:07:48.579043 containerd[1722]: time="2025-01-14T13:07:48.578985163Z" level=info msg="RemovePodSandbox for \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\"" Jan 14 13:07:48.579043 containerd[1722]: time="2025-01-14T13:07:48.579024264Z" level=info msg="Forcibly stopping sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\"" Jan 14 13:07:48.579244 containerd[1722]: time="2025-01-14T13:07:48.579196165Z" level=info msg="TearDown network for sandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" successfully" Jan 14 13:07:48.742767 containerd[1722]: time="2025-01-14T13:07:48.742616481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:07:48.742767 containerd[1722]: time="2025-01-14T13:07:48.742742583Z" level=info msg="RemovePodSandbox \"f819b1f8c380ba7c2eb85aa6764154c58aae650d3805bfca5cd24fe39b32abc3\" returns successfully" Jan 14 13:07:48.743350 containerd[1722]: time="2025-01-14T13:07:48.743323489Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\"" Jan 14 13:07:48.743522 containerd[1722]: time="2025-01-14T13:07:48.743480391Z" level=info msg="TearDown network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" successfully" Jan 14 13:07:48.743522 containerd[1722]: time="2025-01-14T13:07:48.743507291Z" level=info msg="StopPodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" returns successfully" Jan 14 13:07:48.744052 containerd[1722]: time="2025-01-14T13:07:48.743910196Z" level=info msg="RemovePodSandbox for \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\"" Jan 14 13:07:48.744052 containerd[1722]: time="2025-01-14T13:07:48.743945896Z" level=info msg="Forcibly stopping sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\"" Jan 14 13:07:48.744221 containerd[1722]: time="2025-01-14T13:07:48.744050697Z" level=info msg="TearDown network for sandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" successfully" Jan 14 13:07:48.991673 containerd[1722]: time="2025-01-14T13:07:48.991519747Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:48.991673 containerd[1722]: time="2025-01-14T13:07:48.991625148Z" level=info msg="RemovePodSandbox \"2321cb3c8eafe44ace38a165d50f454f50d2d0db2d509c7a2f50b45db29a3bba\" returns successfully" Jan 14 13:07:48.992374 containerd[1722]: time="2025-01-14T13:07:48.992220955Z" level=info msg="StopPodSandbox for \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\"" Jan 14 13:07:48.992511 containerd[1722]: time="2025-01-14T13:07:48.992384057Z" level=info msg="TearDown network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" successfully" Jan 14 13:07:48.992511 containerd[1722]: time="2025-01-14T13:07:48.992404557Z" level=info msg="StopPodSandbox for \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" returns successfully" Jan 14 13:07:48.992957 containerd[1722]: time="2025-01-14T13:07:48.992841662Z" level=info msg="RemovePodSandbox for \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\"" Jan 14 13:07:48.992957 containerd[1722]: time="2025-01-14T13:07:48.992876262Z" level=info msg="Forcibly stopping sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\"" Jan 14 13:07:48.993073 containerd[1722]: time="2025-01-14T13:07:48.992972763Z" level=info msg="TearDown network for sandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" successfully" Jan 14 13:07:49.139078 containerd[1722]: time="2025-01-14T13:07:49.137907274Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:49.242918 containerd[1722]: time="2025-01-14T13:07:49.242839640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 14 13:07:49.291566 containerd[1722]: time="2025-01-14T13:07:49.290601370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:07:49.291566 containerd[1722]: time="2025-01-14T13:07:49.290731272Z" level=info msg="RemovePodSandbox \"e6a765b977abbf178973ba1e39ace9745668b90004a2700c33492ba5f01b5833\" returns successfully" Jan 14 13:07:49.292519 containerd[1722]: time="2025-01-14T13:07:49.292115787Z" level=info msg="StopPodSandbox for \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\"" Jan 14 13:07:49.292519 containerd[1722]: time="2025-01-14T13:07:49.292249589Z" level=info msg="TearDown network for sandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\" successfully" Jan 14 13:07:49.292519 containerd[1722]: time="2025-01-14T13:07:49.292268889Z" level=info msg="StopPodSandbox for \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\" returns successfully" Jan 14 13:07:49.293361 containerd[1722]: time="2025-01-14T13:07:49.293200199Z" level=info msg="RemovePodSandbox for \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\"" Jan 14 13:07:49.293361 containerd[1722]: time="2025-01-14T13:07:49.293233100Z" level=info msg="Forcibly stopping sandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\"" Jan 14 13:07:49.293817 containerd[1722]: time="2025-01-14T13:07:49.293538603Z" level=info msg="TearDown network for sandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\" successfully" Jan 14 13:07:49.294597 containerd[1722]: time="2025-01-14T13:07:49.294414813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 
2.303159992s" Jan 14 13:07:49.294597 containerd[1722]: time="2025-01-14T13:07:49.294448113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 14 13:07:49.295617 containerd[1722]: time="2025-01-14T13:07:49.295326423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 14 13:07:49.296872 containerd[1722]: time="2025-01-14T13:07:49.296846240Z" level=info msg="CreateContainer within sandbox \"14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 14 13:07:49.685726 containerd[1722]: time="2025-01-14T13:07:49.685635960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:49.685914 containerd[1722]: time="2025-01-14T13:07:49.685758661Z" level=info msg="RemovePodSandbox \"35b4d0fc2c20fc73a93cbdadf8d8aaf9f2f700559d64bd3caaa3f65a745883b3\" returns successfully" Jan 14 13:07:49.686545 containerd[1722]: time="2025-01-14T13:07:49.686453969Z" level=info msg="StopPodSandbox for \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\"" Jan 14 13:07:49.686758 containerd[1722]: time="2025-01-14T13:07:49.686593771Z" level=info msg="TearDown network for sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\" successfully" Jan 14 13:07:49.686758 containerd[1722]: time="2025-01-14T13:07:49.686614671Z" level=info msg="StopPodSandbox for \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\" returns successfully" Jan 14 13:07:49.687221 containerd[1722]: time="2025-01-14T13:07:49.687112276Z" level=info msg="RemovePodSandbox for \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\"" Jan 14 13:07:49.687221 containerd[1722]: time="2025-01-14T13:07:49.687172177Z" level=info msg="Forcibly stopping sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\"" Jan 14 13:07:49.687361 containerd[1722]: time="2025-01-14T13:07:49.687292678Z" level=info msg="TearDown network for sandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\" successfully" Jan 14 13:07:50.078618 containerd[1722]: time="2025-01-14T13:07:50.078424825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:50.078618 containerd[1722]: time="2025-01-14T13:07:50.078580326Z" level=info msg="RemovePodSandbox \"a09fd5b576c092384cd2883a9a39b74fa525bf040cb147e32e8a66c3b5b7e154\" returns successfully" Jan 14 13:07:50.080637 containerd[1722]: time="2025-01-14T13:07:50.079602438Z" level=info msg="StopPodSandbox for \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\"" Jan 14 13:07:50.080637 containerd[1722]: time="2025-01-14T13:07:50.079948141Z" level=info msg="TearDown network for sandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\" successfully" Jan 14 13:07:50.080637 containerd[1722]: time="2025-01-14T13:07:50.079970742Z" level=info msg="StopPodSandbox for \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\" returns successfully" Jan 14 13:07:50.082015 containerd[1722]: time="2025-01-14T13:07:50.081990164Z" level=info msg="RemovePodSandbox for \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\"" Jan 14 13:07:50.082663 containerd[1722]: time="2025-01-14T13:07:50.082615771Z" level=info msg="Forcibly stopping sandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\"" Jan 14 13:07:50.082893 containerd[1722]: time="2025-01-14T13:07:50.082839874Z" level=info msg="TearDown network for sandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\" successfully" Jan 14 13:07:50.976764 containerd[1722]: time="2025-01-14T13:07:50.976572671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:07:50.976764 containerd[1722]: time="2025-01-14T13:07:50.976670672Z" level=info msg="RemovePodSandbox \"4934644ba011730270bffdb801b9d8685b111206ad408a56fdb116a8bf6ed9d2\" returns successfully" Jan 14 13:07:53.697860 containerd[1722]: time="2025-01-14T13:07:53.697348106Z" level=info msg="CreateContainer within sandbox \"14b8956c04754aa5355932499d3cced3a4ab086874046434db6c25f802604486\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2b64905b6fc3c9733d7a2a5450e5e53c7900d12ba8eb8770aa8ff68c47506c49\"" Jan 14 13:07:53.701129 containerd[1722]: time="2025-01-14T13:07:53.699877034Z" level=info msg="StartContainer for \"2b64905b6fc3c9733d7a2a5450e5e53c7900d12ba8eb8770aa8ff68c47506c49\"" Jan 14 13:07:53.925858 systemd[1]: Started cri-containerd-2b64905b6fc3c9733d7a2a5450e5e53c7900d12ba8eb8770aa8ff68c47506c49.scope - libcontainer container 2b64905b6fc3c9733d7a2a5450e5e53c7900d12ba8eb8770aa8ff68c47506c49. Jan 14 13:07:54.640482 containerd[1722]: time="2025-01-14T13:07:54.640316554Z" level=info msg="StartContainer for \"2b64905b6fc3c9733d7a2a5450e5e53c7900d12ba8eb8770aa8ff68c47506c49\" returns successfully" Jan 14 13:07:55.659039 kubelet[3426]: I0114 13:07:55.657501 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-qgrsw" podStartSLOduration=45.470274873 podStartE2EDuration="59.657451432s" podCreationTimestamp="2025-01-14 13:06:56 +0000 UTC" firstStartedPulling="2025-01-14 13:07:35.107559857 +0000 UTC m=+56.826350502" lastFinishedPulling="2025-01-14 13:07:49.294736316 +0000 UTC m=+71.013527061" observedRunningTime="2025-01-14 13:07:55.657150529 +0000 UTC m=+77.375941274" watchObservedRunningTime="2025-01-14 13:07:55.657451432 +0000 UTC m=+77.376242077" Jan 14 13:07:55.659039 kubelet[3426]: I0114 13:07:55.657633 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dcf9d67d5-mfw8m" 
podStartSLOduration=50.489457153 podStartE2EDuration="59.657604434s" podCreationTimestamp="2025-01-14 13:06:56 +0000 UTC" firstStartedPulling="2025-01-14 13:07:31.770451016 +0000 UTC m=+53.489241661" lastFinishedPulling="2025-01-14 13:07:40.938598297 +0000 UTC m=+62.657388942" observedRunningTime="2025-01-14 13:07:43.060407742 +0000 UTC m=+64.779198387" watchObservedRunningTime="2025-01-14 13:07:55.657604434 +0000 UTC m=+77.376395079" Jan 14 13:07:56.647917 kubelet[3426]: I0114 13:07:56.647874 3426 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 13:07:58.742717 containerd[1722]: time="2025-01-14T13:07:58.742648240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:58.790072 containerd[1722]: time="2025-01-14T13:07:58.789971461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 14 13:07:58.841027 containerd[1722]: time="2025-01-14T13:07:58.840929423Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:58.887734 containerd[1722]: time="2025-01-14T13:07:58.887617638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:58.888872 containerd[1722]: time="2025-01-14T13:07:58.888666350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 9.593301826s" Jan 14 13:07:58.888872 containerd[1722]: time="2025-01-14T13:07:58.888731850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 14 13:07:58.890033 containerd[1722]: time="2025-01-14T13:07:58.889739461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 14 13:07:58.907084 containerd[1722]: time="2025-01-14T13:07:58.906702848Z" level=info msg="CreateContainer within sandbox \"c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 14 13:08:05.428316 containerd[1722]: time="2025-01-14T13:08:05.428267546Z" level=info msg="CreateContainer within sandbox \"c1c99942e9fe5ea2c6d4780667753ed00e2864ebc616575bc44cf99956d7a209\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b9eb3c0dd6946211311aff103b10fa2127dbd7dfedbbc341ce13254bf54f8e0d\"" Jan 14 13:08:05.430329 containerd[1722]: time="2025-01-14T13:08:05.429095056Z" level=info msg="StartContainer for \"b9eb3c0dd6946211311aff103b10fa2127dbd7dfedbbc341ce13254bf54f8e0d\"" Jan 14 13:08:05.464891 systemd[1]: Started cri-containerd-b9eb3c0dd6946211311aff103b10fa2127dbd7dfedbbc341ce13254bf54f8e0d.scope - libcontainer container b9eb3c0dd6946211311aff103b10fa2127dbd7dfedbbc341ce13254bf54f8e0d. Jan 14 13:08:05.580113 containerd[1722]: time="2025-01-14T13:08:05.580059720Z" level=info msg="StartContainer for \"b9eb3c0dd6946211311aff103b10fa2127dbd7dfedbbc341ce13254bf54f8e0d\" returns successfully" Jan 14 13:08:05.710284 systemd[1]: run-containerd-runc-k8s.io-b9eb3c0dd6946211311aff103b10fa2127dbd7dfedbbc341ce13254bf54f8e0d-runc.iZ48Pu.mount: Deactivated successfully. 
Jan 14 13:08:06.723923 kubelet[3426]: I0114 13:08:06.723869 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-fddf9dc45-phpck" podStartSLOduration=47.27937962 podStartE2EDuration="1m10.723810551s" podCreationTimestamp="2025-01-14 13:06:56 +0000 UTC" firstStartedPulling="2025-01-14 13:07:35.445057828 +0000 UTC m=+57.163848473" lastFinishedPulling="2025-01-14 13:07:58.889488659 +0000 UTC m=+80.608279404" observedRunningTime="2025-01-14 13:08:05.700923252 +0000 UTC m=+87.419713897" watchObservedRunningTime="2025-01-14 13:08:06.723810551 +0000 UTC m=+88.442601196" Jan 14 13:08:07.831290 systemd[1]: run-containerd-runc-k8s.io-508a03f55f7bdc4bc37e1df617c703508b9c29ca6bdfb3e2a3bccb10f1252669-runc.JoSGUZ.mount: Deactivated successfully. Jan 14 13:08:07.935920 containerd[1722]: time="2025-01-14T13:08:07.935866457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:08:07.986039 containerd[1722]: time="2025-01-14T13:08:07.985924511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 14 13:08:08.031800 containerd[1722]: time="2025-01-14T13:08:08.031661916Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:08:08.093320 containerd[1722]: time="2025-01-14T13:08:08.093116396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:08:08.094720 containerd[1722]: time="2025-01-14T13:08:08.094042906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 9.204262044s" Jan 14 13:08:08.094720 containerd[1722]: time="2025-01-14T13:08:08.094086707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 14 13:08:08.096009 containerd[1722]: time="2025-01-14T13:08:08.095977628Z" level=info msg="CreateContainer within sandbox \"beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 14 13:08:08.324022 kubelet[3426]: I0114 13:08:08.323100 3426 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 13:08:08.390037 containerd[1722]: time="2025-01-14T13:08:08.389571275Z" level=info msg="CreateContainer within sandbox \"beba4b72e0e15dc8dc8211290df17d2634c94ccb173a92a6c6353e9355eec01b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"959ec57a03eda1427cbed01b3f554bf0e23b2d9937e594d0c7894e3db3e5b004\"" Jan 14 13:08:08.392255 containerd[1722]: time="2025-01-14T13:08:08.391363995Z" level=info msg="StartContainer for \"959ec57a03eda1427cbed01b3f554bf0e23b2d9937e594d0c7894e3db3e5b004\"" Jan 14 13:08:08.452867 systemd[1]: Started cri-containerd-959ec57a03eda1427cbed01b3f554bf0e23b2d9937e594d0c7894e3db3e5b004.scope - libcontainer container 959ec57a03eda1427cbed01b3f554bf0e23b2d9937e594d0c7894e3db3e5b004. 
Jan 14 13:08:08.486379 containerd[1722]: time="2025-01-14T13:08:08.486126843Z" level=info msg="StartContainer for \"959ec57a03eda1427cbed01b3f554bf0e23b2d9937e594d0c7894e3db3e5b004\" returns successfully"
Jan 14 13:08:08.534067 kubelet[3426]: I0114 13:08:08.534030 3426 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 14 13:08:08.534249 kubelet[3426]: I0114 13:08:08.534109 3426 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 14 13:08:08.689919 kubelet[3426]: I0114 13:08:08.689869 3426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-z6l9p" podStartSLOduration=36.695398106 podStartE2EDuration="1m12.689816496s" podCreationTimestamp="2025-01-14 13:06:56 +0000 UTC" firstStartedPulling="2025-01-14 13:07:32.09996832 +0000 UTC m=+53.818758965" lastFinishedPulling="2025-01-14 13:08:08.09438671 +0000 UTC m=+89.813177355" observedRunningTime="2025-01-14 13:08:08.689086488 +0000 UTC m=+90.407877133" watchObservedRunningTime="2025-01-14 13:08:08.689816496 +0000 UTC m=+90.408607241"
Jan 14 13:08:11.172763 kubelet[3426]: I0114 13:08:11.172396 3426 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 14 13:08:39.769082 systemd[1]: Started sshd@7-10.200.8.12:22-10.200.16.10:53364.service - OpenSSH per-connection server daemon (10.200.16.10:53364).
Jan 14 13:08:40.418135 sshd[6766]: Accepted publickey for core from 10.200.16.10 port 53364 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:08:40.417487 sshd-session[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:08:40.428960 systemd-logind[1704]: New session 10 of user core.
Jan 14 13:08:40.434952 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 14 13:08:41.371403 sshd[6769]: Connection closed by 10.200.16.10 port 53364
Jan 14 13:08:41.373522 sshd-session[6766]: pam_unix(sshd:session): session closed for user core
Jan 14 13:08:41.379823 systemd[1]: sshd@7-10.200.8.12:22-10.200.16.10:53364.service: Deactivated successfully.
Jan 14 13:08:41.384298 systemd[1]: session-10.scope: Deactivated successfully.
Jan 14 13:08:41.386163 systemd-logind[1704]: Session 10 logged out. Waiting for processes to exit.
Jan 14 13:08:41.387939 systemd-logind[1704]: Removed session 10.
Jan 14 13:08:46.488990 systemd[1]: Started sshd@8-10.200.8.12:22-10.200.16.10:52318.service - OpenSSH per-connection server daemon (10.200.16.10:52318).
Jan 14 13:08:47.132572 sshd[6801]: Accepted publickey for core from 10.200.16.10 port 52318 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:08:47.133978 sshd-session[6801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:08:47.137936 systemd-logind[1704]: New session 11 of user core.
Jan 14 13:08:47.146849 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 14 13:08:47.648663 sshd[6803]: Connection closed by 10.200.16.10 port 52318
Jan 14 13:08:47.649529 sshd-session[6801]: pam_unix(sshd:session): session closed for user core
Jan 14 13:08:47.653102 systemd[1]: sshd@8-10.200.8.12:22-10.200.16.10:52318.service: Deactivated successfully.
Jan 14 13:08:47.656039 systemd[1]: session-11.scope: Deactivated successfully.
Jan 14 13:08:47.658202 systemd-logind[1704]: Session 11 logged out. Waiting for processes to exit.
Jan 14 13:08:47.659262 systemd-logind[1704]: Removed session 11.
Jan 14 13:08:52.766051 systemd[1]: Started sshd@9-10.200.8.12:22-10.200.16.10:52326.service - OpenSSH per-connection server daemon (10.200.16.10:52326).
Jan 14 13:08:53.427830 sshd[6817]: Accepted publickey for core from 10.200.16.10 port 52326 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:08:53.429251 sshd-session[6817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:08:53.434057 systemd-logind[1704]: New session 12 of user core.
Jan 14 13:08:53.438862 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 14 13:08:54.048706 sshd[6819]: Connection closed by 10.200.16.10 port 52326
Jan 14 13:08:54.049679 sshd-session[6817]: pam_unix(sshd:session): session closed for user core
Jan 14 13:08:54.053476 systemd[1]: sshd@9-10.200.8.12:22-10.200.16.10:52326.service: Deactivated successfully.
Jan 14 13:08:54.055553 systemd[1]: session-12.scope: Deactivated successfully.
Jan 14 13:08:54.056401 systemd-logind[1704]: Session 12 logged out. Waiting for processes to exit.
Jan 14 13:08:54.057891 systemd-logind[1704]: Removed session 12.
Jan 14 13:08:54.164988 systemd[1]: Started sshd@10-10.200.8.12:22-10.200.16.10:52340.service - OpenSSH per-connection server daemon (10.200.16.10:52340).
Jan 14 13:08:54.821298 sshd[6831]: Accepted publickey for core from 10.200.16.10 port 52340 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:08:54.822811 sshd-session[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:08:54.827531 systemd-logind[1704]: New session 13 of user core.
Jan 14 13:08:54.831848 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 14 13:08:55.373079 sshd[6839]: Connection closed by 10.200.16.10 port 52340
Jan 14 13:08:55.373782 sshd-session[6831]: pam_unix(sshd:session): session closed for user core
Jan 14 13:08:55.376533 systemd[1]: sshd@10-10.200.8.12:22-10.200.16.10:52340.service: Deactivated successfully.
Jan 14 13:08:55.378938 systemd[1]: session-13.scope: Deactivated successfully.
Jan 14 13:08:55.380786 systemd-logind[1704]: Session 13 logged out. Waiting for processes to exit.
Jan 14 13:08:55.382136 systemd-logind[1704]: Removed session 13.
Jan 14 13:08:55.504422 systemd[1]: Started sshd@11-10.200.8.12:22-10.200.16.10:52354.service - OpenSSH per-connection server daemon (10.200.16.10:52354).
Jan 14 13:08:56.316895 sshd[6850]: Accepted publickey for core from 10.200.16.10 port 52354 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:08:56.318413 sshd-session[6850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:08:56.324196 systemd-logind[1704]: New session 14 of user core.
Jan 14 13:08:56.329856 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 14 13:08:57.024002 sshd[6852]: Connection closed by 10.200.16.10 port 52354
Jan 14 13:08:57.024901 sshd-session[6850]: pam_unix(sshd:session): session closed for user core
Jan 14 13:08:57.028932 systemd[1]: sshd@11-10.200.8.12:22-10.200.16.10:52354.service: Deactivated successfully.
Jan 14 13:08:57.031087 systemd[1]: session-14.scope: Deactivated successfully.
Jan 14 13:08:57.031887 systemd-logind[1704]: Session 14 logged out. Waiting for processes to exit.
Jan 14 13:08:57.033045 systemd-logind[1704]: Removed session 14.
Jan 14 13:09:02.105224 systemd[1]: Started sshd@12-10.200.8.12:22-10.200.16.10:39682.service - OpenSSH per-connection server daemon (10.200.16.10:39682).
Jan 14 13:09:02.767249 sshd[6866]: Accepted publickey for core from 10.200.16.10 port 39682 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:02.768854 sshd-session[6866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:02.773203 systemd-logind[1704]: New session 15 of user core.
Jan 14 13:09:02.779906 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 14 13:09:03.296536 sshd[6868]: Connection closed by 10.200.16.10 port 39682
Jan 14 13:09:03.297425 sshd-session[6866]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:03.302254 systemd[1]: sshd@12-10.200.8.12:22-10.200.16.10:39682.service: Deactivated successfully.
Jan 14 13:09:03.305230 systemd[1]: session-15.scope: Deactivated successfully.
Jan 14 13:09:03.306136 systemd-logind[1704]: Session 15 logged out. Waiting for processes to exit.
Jan 14 13:09:03.307279 systemd-logind[1704]: Removed session 15.
Jan 14 13:09:07.835856 systemd[1]: run-containerd-runc-k8s.io-508a03f55f7bdc4bc37e1df617c703508b9c29ca6bdfb3e2a3bccb10f1252669-runc.cL9xO6.mount: Deactivated successfully.
Jan 14 13:09:08.414012 systemd[1]: Started sshd@13-10.200.8.12:22-10.200.16.10:40802.service - OpenSSH per-connection server daemon (10.200.16.10:40802).
Jan 14 13:09:09.061412 sshd[6934]: Accepted publickey for core from 10.200.16.10 port 40802 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:09.063107 sshd-session[6934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:09.068224 systemd-logind[1704]: New session 16 of user core.
Jan 14 13:09:09.073897 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 14 13:09:09.575082 sshd[6936]: Connection closed by 10.200.16.10 port 40802
Jan 14 13:09:09.575912 sshd-session[6934]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:09.578971 systemd[1]: sshd@13-10.200.8.12:22-10.200.16.10:40802.service: Deactivated successfully.
Jan 14 13:09:09.581373 systemd[1]: session-16.scope: Deactivated successfully.
Jan 14 13:09:09.583074 systemd-logind[1704]: Session 16 logged out. Waiting for processes to exit.
Jan 14 13:09:09.584222 systemd-logind[1704]: Removed session 16.
Jan 14 13:09:14.694053 systemd[1]: Started sshd@14-10.200.8.12:22-10.200.16.10:40810.service - OpenSSH per-connection server daemon (10.200.16.10:40810).
Jan 14 13:09:15.462119 sshd[6968]: Accepted publickey for core from 10.200.16.10 port 40810 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:15.463220 sshd-session[6968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:15.470496 systemd-logind[1704]: New session 17 of user core.
Jan 14 13:09:15.474896 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 14 13:09:15.981595 sshd[6970]: Connection closed by 10.200.16.10 port 40810
Jan 14 13:09:15.982500 sshd-session[6968]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:15.986140 systemd[1]: sshd@14-10.200.8.12:22-10.200.16.10:40810.service: Deactivated successfully.
Jan 14 13:09:15.988275 systemd[1]: session-17.scope: Deactivated successfully.
Jan 14 13:09:15.989196 systemd-logind[1704]: Session 17 logged out. Waiting for processes to exit.
Jan 14 13:09:15.990337 systemd-logind[1704]: Removed session 17.
Jan 14 13:09:16.102375 systemd[1]: Started sshd@15-10.200.8.12:22-10.200.16.10:37790.service - OpenSSH per-connection server daemon (10.200.16.10:37790).
Jan 14 13:09:16.763451 sshd[6981]: Accepted publickey for core from 10.200.16.10 port 37790 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:16.764959 sshd-session[6981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:16.769646 systemd-logind[1704]: New session 18 of user core.
Jan 14 13:09:16.773858 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 14 13:09:17.355847 sshd[6983]: Connection closed by 10.200.16.10 port 37790
Jan 14 13:09:17.356801 sshd-session[6981]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:17.360149 systemd[1]: sshd@15-10.200.8.12:22-10.200.16.10:37790.service: Deactivated successfully.
Jan 14 13:09:17.362769 systemd[1]: session-18.scope: Deactivated successfully.
Jan 14 13:09:17.364893 systemd-logind[1704]: Session 18 logged out. Waiting for processes to exit.
Jan 14 13:09:17.365934 systemd-logind[1704]: Removed session 18.
Jan 14 13:09:17.473007 systemd[1]: Started sshd@16-10.200.8.12:22-10.200.16.10:37802.service - OpenSSH per-connection server daemon (10.200.16.10:37802).
Jan 14 13:09:18.113863 sshd[6991]: Accepted publickey for core from 10.200.16.10 port 37802 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:18.115316 sshd-session[6991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:18.120044 systemd-logind[1704]: New session 19 of user core.
Jan 14 13:09:18.124884 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 14 13:09:20.531383 sshd[6993]: Connection closed by 10.200.16.10 port 37802
Jan 14 13:09:20.531927 sshd-session[6991]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:20.535270 systemd[1]: sshd@16-10.200.8.12:22-10.200.16.10:37802.service: Deactivated successfully.
Jan 14 13:09:20.537643 systemd[1]: session-19.scope: Deactivated successfully.
Jan 14 13:09:20.539568 systemd-logind[1704]: Session 19 logged out. Waiting for processes to exit.
Jan 14 13:09:20.540651 systemd-logind[1704]: Removed session 19.
Jan 14 13:09:20.658380 systemd[1]: Started sshd@17-10.200.8.12:22-10.200.16.10:37818.service - OpenSSH per-connection server daemon (10.200.16.10:37818).
Jan 14 13:09:21.316762 sshd[7012]: Accepted publickey for core from 10.200.16.10 port 37818 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:21.318520 sshd-session[7012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:21.323600 systemd-logind[1704]: New session 20 of user core.
Jan 14 13:09:21.328883 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 14 13:09:21.949495 sshd[7014]: Connection closed by 10.200.16.10 port 37818
Jan 14 13:09:21.950363 sshd-session[7012]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:21.953750 systemd[1]: sshd@17-10.200.8.12:22-10.200.16.10:37818.service: Deactivated successfully.
Jan 14 13:09:21.956570 systemd[1]: session-20.scope: Deactivated successfully.
Jan 14 13:09:21.959386 systemd-logind[1704]: Session 20 logged out. Waiting for processes to exit.
Jan 14 13:09:21.960673 systemd-logind[1704]: Removed session 20.
Jan 14 13:09:22.069043 systemd[1]: Started sshd@18-10.200.8.12:22-10.200.16.10:37822.service - OpenSSH per-connection server daemon (10.200.16.10:37822).
Jan 14 13:09:22.719764 sshd[7023]: Accepted publickey for core from 10.200.16.10 port 37822 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:22.721315 sshd-session[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:22.726180 systemd-logind[1704]: New session 21 of user core.
Jan 14 13:09:22.729878 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 14 13:09:23.238437 sshd[7025]: Connection closed by 10.200.16.10 port 37822
Jan 14 13:09:23.239335 sshd-session[7023]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:23.244013 systemd[1]: sshd@18-10.200.8.12:22-10.200.16.10:37822.service: Deactivated successfully.
Jan 14 13:09:23.246860 systemd[1]: session-21.scope: Deactivated successfully.
Jan 14 13:09:23.247915 systemd-logind[1704]: Session 21 logged out. Waiting for processes to exit.
Jan 14 13:09:23.248918 systemd-logind[1704]: Removed session 21.
Jan 14 13:09:28.367041 systemd[1]: Started sshd@19-10.200.8.12:22-10.200.16.10:47222.service - OpenSSH per-connection server daemon (10.200.16.10:47222).
Jan 14 13:09:29.022535 sshd[7039]: Accepted publickey for core from 10.200.16.10 port 47222 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:29.024327 sshd-session[7039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:29.029588 systemd-logind[1704]: New session 22 of user core.
Jan 14 13:09:29.033856 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 14 13:09:29.541190 sshd[7041]: Connection closed by 10.200.16.10 port 47222
Jan 14 13:09:29.542077 sshd-session[7039]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:29.545626 systemd[1]: sshd@19-10.200.8.12:22-10.200.16.10:47222.service: Deactivated successfully.
Jan 14 13:09:29.548222 systemd[1]: session-22.scope: Deactivated successfully.
Jan 14 13:09:29.549952 systemd-logind[1704]: Session 22 logged out. Waiting for processes to exit.
Jan 14 13:09:29.551013 systemd-logind[1704]: Removed session 22.
Jan 14 13:09:34.664053 systemd[1]: Started sshd@20-10.200.8.12:22-10.200.16.10:47228.service - OpenSSH per-connection server daemon (10.200.16.10:47228).
Jan 14 13:09:35.313085 sshd[7052]: Accepted publickey for core from 10.200.16.10 port 47228 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:35.314571 sshd-session[7052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:35.318636 systemd-logind[1704]: New session 23 of user core.
Jan 14 13:09:35.322868 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 14 13:09:35.830657 sshd[7054]: Connection closed by 10.200.16.10 port 47228
Jan 14 13:09:35.831499 sshd-session[7052]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:35.836201 systemd[1]: sshd@20-10.200.8.12:22-10.200.16.10:47228.service: Deactivated successfully.
Jan 14 13:09:35.838430 systemd[1]: session-23.scope: Deactivated successfully.
Jan 14 13:09:35.839247 systemd-logind[1704]: Session 23 logged out. Waiting for processes to exit.
Jan 14 13:09:35.840334 systemd-logind[1704]: Removed session 23.
Jan 14 13:09:40.947020 systemd[1]: Started sshd@21-10.200.8.12:22-10.200.16.10:59674.service - OpenSSH per-connection server daemon (10.200.16.10:59674).
Jan 14 13:09:41.588833 sshd[7089]: Accepted publickey for core from 10.200.16.10 port 59674 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:41.590328 sshd-session[7089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:41.594448 systemd-logind[1704]: New session 24 of user core.
Jan 14 13:09:41.600862 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 14 13:09:42.097137 sshd[7110]: Connection closed by 10.200.16.10 port 59674
Jan 14 13:09:42.097905 sshd-session[7089]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:42.101636 systemd[1]: sshd@21-10.200.8.12:22-10.200.16.10:59674.service: Deactivated successfully.
Jan 14 13:09:42.104176 systemd[1]: session-24.scope: Deactivated successfully.
Jan 14 13:09:42.105187 systemd-logind[1704]: Session 24 logged out. Waiting for processes to exit.
Jan 14 13:09:42.106215 systemd-logind[1704]: Removed session 24.
Jan 14 13:09:47.216010 systemd[1]: Started sshd@22-10.200.8.12:22-10.200.16.10:49620.service - OpenSSH per-connection server daemon (10.200.16.10:49620).
Jan 14 13:09:47.858305 sshd[7120]: Accepted publickey for core from 10.200.16.10 port 49620 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:47.859940 sshd-session[7120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:47.864284 systemd-logind[1704]: New session 25 of user core.
Jan 14 13:09:47.871866 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 14 13:09:48.369160 sshd[7122]: Connection closed by 10.200.16.10 port 49620
Jan 14 13:09:48.370024 sshd-session[7120]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:48.373586 systemd[1]: sshd@22-10.200.8.12:22-10.200.16.10:49620.service: Deactivated successfully.
Jan 14 13:09:48.375881 systemd[1]: session-25.scope: Deactivated successfully.
Jan 14 13:09:48.377408 systemd-logind[1704]: Session 25 logged out. Waiting for processes to exit.
Jan 14 13:09:48.378735 systemd-logind[1704]: Removed session 25.
Jan 14 13:09:53.487030 systemd[1]: Started sshd@23-10.200.8.12:22-10.200.16.10:49624.service - OpenSSH per-connection server daemon (10.200.16.10:49624).
Jan 14 13:09:54.130783 sshd[7135]: Accepted publickey for core from 10.200.16.10 port 49624 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:54.133090 sshd-session[7135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:54.138748 systemd-logind[1704]: New session 26 of user core.
Jan 14 13:09:54.147847 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 14 13:09:54.644988 sshd[7137]: Connection closed by 10.200.16.10 port 49624
Jan 14 13:09:54.645835 sshd-session[7135]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:54.648830 systemd[1]: sshd@23-10.200.8.12:22-10.200.16.10:49624.service: Deactivated successfully.
Jan 14 13:09:54.651331 systemd[1]: session-26.scope: Deactivated successfully.
Jan 14 13:09:54.653024 systemd-logind[1704]: Session 26 logged out. Waiting for processes to exit.
Jan 14 13:09:54.654201 systemd-logind[1704]: Removed session 26.