Mar 19 11:46:41.158916 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Mar 19 10:13:43 -00 2025 Mar 19 11:46:41.158954 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:46:41.158968 kernel: BIOS-provided physical RAM map: Mar 19 11:46:41.158980 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 19 11:46:41.158990 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Mar 19 11:46:41.159000 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Mar 19 11:46:41.159013 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Mar 19 11:46:41.159024 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Mar 19 11:46:41.159039 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Mar 19 11:46:41.159050 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Mar 19 11:46:41.159061 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Mar 19 11:46:41.159072 kernel: printk: bootconsole [earlyser0] enabled Mar 19 11:46:41.159083 kernel: NX (Execute Disable) protection: active Mar 19 11:46:41.159094 kernel: APIC: Static calls initialized Mar 19 11:46:41.159111 kernel: efi: EFI v2.7 by Microsoft Mar 19 11:46:41.159124 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 Mar 19 11:46:41.159136 kernel: random: crng init done Mar 19 11:46:41.159148 kernel: secureboot: Secure boot disabled Mar 19 11:46:41.159160 kernel: SMBIOS 3.1.0 present. 
Mar 19 11:46:41.159172 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Mar 19 11:46:41.159184 kernel: Hypervisor detected: Microsoft Hyper-V Mar 19 11:46:41.159197 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Mar 19 11:46:41.159209 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Mar 19 11:46:41.159221 kernel: Hyper-V: Nested features: 0x1e0101 Mar 19 11:46:41.159236 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Mar 19 11:46:41.159247 kernel: Hyper-V: Using hypercall for remote TLB flush Mar 19 11:46:41.159260 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 19 11:46:41.159272 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 19 11:46:41.159285 kernel: tsc: Marking TSC unstable due to running on Hyper-V Mar 19 11:46:41.159298 kernel: tsc: Detected 2593.907 MHz processor Mar 19 11:46:41.159310 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 19 11:46:41.159322 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 19 11:46:41.159334 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Mar 19 11:46:41.159350 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 19 11:46:41.159362 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 19 11:46:41.159374 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Mar 19 11:46:41.159407 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Mar 19 11:46:41.159420 kernel: Using GB pages for direct mapping Mar 19 11:46:41.159432 kernel: ACPI: Early table checksum verification disabled Mar 19 11:46:41.159445 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Mar 19 11:46:41.159463 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:46:41.159479 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:46:41.159492 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Mar 19 11:46:41.159505 kernel: ACPI: FACS 0x000000003FFFE000 000040 Mar 19 11:46:41.159519 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:46:41.159532 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:46:41.159545 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:46:41.159561 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:46:41.159574 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:46:41.159588 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:46:41.159601 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:46:41.159614 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Mar 19 11:46:41.159627 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Mar 19 11:46:41.159640 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Mar 19 11:46:41.159653 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Mar 19 11:46:41.159666 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Mar 19 11:46:41.159682 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Mar 19 11:46:41.159695 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Mar 19 11:46:41.159708 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Mar 19 11:46:41.159721 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Mar 19 11:46:41.159735 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Mar 19 11:46:41.159748 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 19 11:46:41.159761 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 19 11:46:41.159774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Mar 19 11:46:41.159787 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Mar 19 11:46:41.159803 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Mar 19 11:46:41.159816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Mar 19 11:46:41.159829 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Mar 19 11:46:41.159843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Mar 19 11:46:41.159856 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Mar 19 11:46:41.159869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Mar 19 11:46:41.159882 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Mar 19 11:46:41.159895 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Mar 19 11:46:41.159911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Mar 19 11:46:41.159924 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Mar 19 11:46:41.159937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Mar 19 11:46:41.159950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Mar 19 11:46:41.159963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Mar 19 11:46:41.159976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Mar 19 11:46:41.159990 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Mar 19 11:46:41.160003 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Mar 19 11:46:41.160016 kernel: Zone ranges: Mar 19 11:46:41.160031 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 19 11:46:41.160045 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 19 11:46:41.160058 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Mar 19 11:46:41.160071 kernel: Movable zone start for each node Mar 19 11:46:41.160084 kernel: Early memory node ranges Mar 19 11:46:41.160097 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 19 11:46:41.160110 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Mar 19 11:46:41.160123 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Mar 19 11:46:41.160135 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Mar 19 11:46:41.160151 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Mar 19 11:46:41.160165 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 19 11:46:41.160178 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 19 11:46:41.160191 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Mar 19 11:46:41.160204 kernel: ACPI: 
PM-Timer IO Port: 0x408 Mar 19 11:46:41.160217 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Mar 19 11:46:41.160230 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Mar 19 11:46:41.160243 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 19 11:46:41.160256 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 19 11:46:41.160272 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Mar 19 11:46:41.160285 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 19 11:46:41.160298 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Mar 19 11:46:41.160311 kernel: Booting paravirtualized kernel on Hyper-V Mar 19 11:46:41.160325 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 19 11:46:41.160338 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 19 11:46:41.160351 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Mar 19 11:46:41.160365 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Mar 19 11:46:41.160395 kernel: pcpu-alloc: [0] 0 1 Mar 19 11:46:41.160412 kernel: Hyper-V: PV spinlocks enabled Mar 19 11:46:41.160426 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 19 11:46:41.160441 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:46:41.160454 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 19 11:46:41.160467 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 19 11:46:41.160480 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 19 11:46:41.160493 kernel: Fallback order for Node 0: 0 Mar 19 11:46:41.160507 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Mar 19 11:46:41.160523 kernel: Policy zone: Normal Mar 19 11:46:41.160546 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 19 11:46:41.160560 kernel: software IO TLB: area num 2. Mar 19 11:46:41.160578 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43480K init, 1592K bss, 312164K reserved, 0K cma-reserved) Mar 19 11:46:41.160592 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 19 11:46:41.160606 kernel: ftrace: allocating 37910 entries in 149 pages Mar 19 11:46:41.160620 kernel: ftrace: allocated 149 pages with 4 groups Mar 19 11:46:41.160633 kernel: Dynamic Preempt: voluntary Mar 19 11:46:41.160647 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 19 11:46:41.160666 kernel: rcu: RCU event tracing is enabled. Mar 19 11:46:41.160681 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 19 11:46:41.160698 kernel: Trampoline variant of Tasks RCU enabled. Mar 19 11:46:41.160712 kernel: Rude variant of Tasks RCU enabled. Mar 19 11:46:41.160726 kernel: Tracing variant of Tasks RCU enabled. Mar 19 11:46:41.160740 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 19 11:46:41.160754 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 19 11:46:41.160768 kernel: Using NULL legacy PIC Mar 19 11:46:41.160785 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Mar 19 11:46:41.160799 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 19 11:46:41.160813 kernel: Console: colour dummy device 80x25 Mar 19 11:46:41.160826 kernel: printk: console [tty1] enabled Mar 19 11:46:41.160841 kernel: printk: console [ttyS0] enabled Mar 19 11:46:41.160855 kernel: printk: bootconsole [earlyser0] disabled Mar 19 11:46:41.160869 kernel: ACPI: Core revision 20230628 Mar 19 11:46:41.160882 kernel: Failed to register legacy timer interrupt Mar 19 11:46:41.160896 kernel: APIC: Switch to symmetric I/O mode setup Mar 19 11:46:41.160913 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 19 11:46:41.160927 kernel: Hyper-V: Using IPI hypercalls Mar 19 11:46:41.160940 kernel: APIC: send_IPI() replaced with hv_send_ipi() Mar 19 11:46:41.160954 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Mar 19 11:46:41.160968 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Mar 19 11:46:41.160982 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Mar 19 11:46:41.160996 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Mar 19 11:46:41.161010 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Mar 19 11:46:41.161025 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Mar 19 11:46:41.161042 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 19 11:46:41.161056 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 19 11:46:41.161071 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 19 11:46:41.161084 kernel: Spectre V2 : Mitigation: Retpolines Mar 19 11:46:41.161098 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 19 11:46:41.161111 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 19 11:46:41.161126 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Mar 19 11:46:41.161139 kernel: RETBleed: Vulnerable Mar 19 11:46:41.161153 kernel: Speculative Store Bypass: Vulnerable Mar 19 11:46:41.161167 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Mar 19 11:46:41.161183 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 19 11:46:41.161197 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 19 11:46:41.161210 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 19 11:46:41.161224 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 19 11:46:41.161238 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 19 11:46:41.161252 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 19 11:46:41.161266 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 19 11:46:41.161279 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 19 11:46:41.161291 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Mar 19 11:46:41.161302 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Mar 19 11:46:41.161313 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Mar 19 11:46:41.161329 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Mar 19 11:46:41.161341 kernel: Freeing SMP alternatives memory: 32K Mar 19 11:46:41.161354 kernel: pid_max: default: 32768 minimum: 301 Mar 19 11:46:41.161366 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 19 11:46:41.161388 kernel: landlock: Up and running. Mar 19 11:46:41.161406 kernel: SELinux: Initializing. Mar 19 11:46:41.161416 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 19 11:46:41.161424 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 19 11:46:41.161433 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 19 11:46:41.161441 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 19 11:46:41.161450 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 19 11:46:41.161466 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 19 11:46:41.161479 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 19 11:46:41.161491 kernel: signal: max sigframe size: 3632 Mar 19 11:46:41.161503 kernel: rcu: Hierarchical SRCU implementation. Mar 19 11:46:41.161515 kernel: rcu: Max phase no-delay instances is 400. Mar 19 11:46:41.161529 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 19 11:46:41.161543 kernel: smp: Bringing up secondary CPUs ... Mar 19 11:46:41.161556 kernel: smpboot: x86: Booting SMP configuration: Mar 19 11:46:41.161569 kernel: .... node #0, CPUs: #1 Mar 19 11:46:41.161587 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Mar 19 11:46:41.161603 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Mar 19 11:46:41.161617 kernel: smp: Brought up 1 node, 2 CPUs Mar 19 11:46:41.161632 kernel: smpboot: Max logical packages: 1 Mar 19 11:46:41.161646 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Mar 19 11:46:41.161661 kernel: devtmpfs: initialized Mar 19 11:46:41.161676 kernel: x86/mm: Memory block size: 128MB Mar 19 11:46:41.161692 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Mar 19 11:46:41.161710 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 19 11:46:41.161723 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 19 11:46:41.161737 kernel: pinctrl core: initialized pinctrl subsystem Mar 19 11:46:41.161752 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 19 11:46:41.161767 kernel: audit: initializing netlink subsys (disabled) Mar 19 11:46:41.161782 kernel: audit: type=2000 audit(1742384800.029:1): state=initialized audit_enabled=0 res=1 Mar 19 11:46:41.161797 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 19 11:46:41.161811 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 19 11:46:41.161827 kernel: cpuidle: using governor menu Mar 19 11:46:41.161845 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 19 11:46:41.161860 kernel: dca service started, version 1.12.1 Mar 19 11:46:41.161875 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Mar 19 11:46:41.161889 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 19 11:46:41.161902 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 19 11:46:41.161914 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 19 11:46:41.161926 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 19 11:46:41.161941 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 19 11:46:41.161953 kernel: ACPI: Added _OSI(Module Device) Mar 19 11:46:41.161969 kernel: ACPI: Added _OSI(Processor Device) Mar 19 11:46:41.161982 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 19 11:46:41.161996 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 19 11:46:41.162008 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 19 11:46:41.162020 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 19 11:46:41.162032 kernel: ACPI: Interpreter enabled Mar 19 11:46:41.162045 kernel: ACPI: PM: (supports S0 S5) Mar 19 11:46:41.162059 kernel: ACPI: Using IOAPIC for interrupt routing Mar 19 11:46:41.162073 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 19 11:46:41.162091 kernel: PCI: Ignoring E820 reservations for host bridge windows Mar 19 11:46:41.162105 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Mar 19 11:46:41.162119 kernel: iommu: Default domain type: Translated Mar 19 11:46:41.162133 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 19 11:46:41.162147 kernel: efivars: Registered efivars operations Mar 19 11:46:41.162160 kernel: PCI: Using ACPI for IRQ routing Mar 19 11:46:41.162174 kernel: PCI: System does not support PCI Mar 19 11:46:41.162187 kernel: vgaarb: loaded Mar 19 11:46:41.162201 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Mar 19 11:46:41.162217 kernel: VFS: Disk quotas dquot_6.6.0 Mar 19 11:46:41.162230 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 19 11:46:41.162244 kernel: 
pnp: PnP ACPI init Mar 19 11:46:41.162258 kernel: pnp: PnP ACPI: found 3 devices Mar 19 11:46:41.162272 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 19 11:46:41.162287 kernel: NET: Registered PF_INET protocol family Mar 19 11:46:41.162300 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 19 11:46:41.162315 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Mar 19 11:46:41.162329 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 19 11:46:41.162345 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 19 11:46:41.162359 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Mar 19 11:46:41.162373 kernel: TCP: Hash tables configured (established 65536 bind 65536) Mar 19 11:46:41.162521 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 19 11:46:41.162530 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 19 11:46:41.162540 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 19 11:46:41.162550 kernel: NET: Registered PF_XDP protocol family Mar 19 11:46:41.162558 kernel: PCI: CLS 0 bytes, default 64 Mar 19 11:46:41.162569 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 19 11:46:41.162583 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Mar 19 11:46:41.162593 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 19 11:46:41.162601 kernel: Initialise system trusted keyrings Mar 19 11:46:41.162611 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Mar 19 11:46:41.162619 kernel: Key type asymmetric registered Mar 19 11:46:41.162630 kernel: Asymmetric key parser 'x509' registered Mar 19 11:46:41.162639 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 19 11:46:41.162647 kernel: io scheduler mq-deadline registered Mar 19 11:46:41.162658 kernel: io scheduler kyber registered Mar 19 11:46:41.162668 kernel: io scheduler bfq registered Mar 19 11:46:41.162680 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 19 11:46:41.162688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 19 11:46:41.162698 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 19 11:46:41.162709 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Mar 19 11:46:41.162718 kernel: i8042: PNP: No PS/2 controller found. 
Mar 19 11:46:41.162880 kernel: rtc_cmos 00:02: registered as rtc0 Mar 19 11:46:41.162970 kernel: rtc_cmos 00:02: setting system clock to 2025-03-19T11:46:40 UTC (1742384800) Mar 19 11:46:41.163078 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Mar 19 11:46:41.163091 kernel: intel_pstate: CPU model not supported Mar 19 11:46:41.163100 kernel: efifb: probing for efifb Mar 19 11:46:41.163110 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 19 11:46:41.163119 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 19 11:46:41.163130 kernel: efifb: scrolling: redraw Mar 19 11:46:41.163138 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 19 11:46:41.163146 kernel: Console: switching to colour frame buffer device 128x48 Mar 19 11:46:41.163158 kernel: fb0: EFI VGA frame buffer device Mar 19 11:46:41.163169 kernel: pstore: Using crash dump compression: deflate Mar 19 11:46:41.163179 kernel: pstore: Registered efi_pstore as persistent store backend Mar 19 11:46:41.163189 kernel: NET: Registered PF_INET6 protocol family Mar 19 11:46:41.163197 kernel: Segment Routing with IPv6 Mar 19 11:46:41.163208 kernel: In-situ OAM (IOAM) with IPv6 Mar 19 11:46:41.163216 kernel: NET: Registered PF_PACKET protocol family Mar 19 11:46:41.163227 kernel: Key type dns_resolver registered Mar 19 11:46:41.163235 kernel: IPI shorthand broadcast: enabled Mar 19 11:46:41.163244 kernel: sched_clock: Marking stable (861002800, 54656900)->(1224857100, -309197400) Mar 19 11:46:41.163257 kernel: registered taskstats version 1 Mar 19 11:46:41.163266 kernel: Loading compiled-in X.509 certificates Mar 19 11:46:41.163277 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ea8d6696bd19c98b32173a761210456cdad6b56b' Mar 19 11:46:41.163285 kernel: Key type .fscrypt registered Mar 19 11:46:41.163293 kernel: Key type fscrypt-provisioning registered Mar 19 11:46:41.163304 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 19 11:46:41.163312 kernel: ima: Allocated hash algorithm: sha1 Mar 19 11:46:41.163320 kernel: ima: No architecture policies found Mar 19 11:46:41.163328 kernel: clk: Disabling unused clocks Mar 19 11:46:41.163339 kernel: Freeing unused kernel image (initmem) memory: 43480K Mar 19 11:46:41.163347 kernel: Write protecting the kernel read-only data: 38912k Mar 19 11:46:41.163355 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K Mar 19 11:46:41.163364 kernel: Run /init as init process Mar 19 11:46:41.163372 kernel: with arguments: Mar 19 11:46:41.163392 kernel: /init Mar 19 11:46:41.163400 kernel: with environment: Mar 19 11:46:41.163408 kernel: HOME=/ Mar 19 11:46:41.163415 kernel: TERM=linux Mar 19 11:46:41.163430 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 19 11:46:41.163439 systemd[1]: Successfully made /usr/ read-only. Mar 19 11:46:41.163454 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:46:41.163463 systemd[1]: Detected virtualization microsoft. Mar 19 11:46:41.163477 systemd[1]: Detected architecture x86-64. Mar 19 11:46:41.163486 systemd[1]: Running in initrd. Mar 19 11:46:41.163498 systemd[1]: No hostname configured, using default hostname. Mar 19 11:46:41.163509 systemd[1]: Hostname set to . 
Mar 19 11:46:41.163521 systemd[1]: Initializing machine ID from random generator. Mar 19 11:46:41.163529 systemd[1]: Queued start job for default target initrd.target. Mar 19 11:46:41.163546 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:46:41.163558 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:46:41.163569 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 19 11:46:41.163580 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:46:41.163592 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 19 11:46:41.163606 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 19 11:46:41.163619 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 19 11:46:41.163628 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 19 11:46:41.163640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:46:41.163649 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:46:41.163660 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:46:41.163669 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:46:41.163680 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:46:41.163692 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:46:41.163700 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:46:41.163711 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:46:41.163721 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 19 11:46:41.163731 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 19 11:46:41.163742 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:46:41.163751 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:46:41.163763 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:46:41.163775 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:46:41.163786 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 19 11:46:41.163795 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:46:41.163807 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 19 11:46:41.163815 systemd[1]: Starting systemd-fsck-usr.service... Mar 19 11:46:41.163827 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:46:41.163836 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:46:41.163847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:46:41.163880 systemd-journald[177]: Collecting audit messages is disabled. Mar 19 11:46:41.163906 systemd-journald[177]: Journal started Mar 19 11:46:41.163938 systemd-journald[177]: Runtime Journal (/run/log/journal/c45d239c65df4323b3d13210fcaa1d54) is 8M, max 158.8M, 150.8M free. Mar 19 11:46:41.172396 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 19 11:46:41.176257 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 19 11:46:41.176517 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:46:41.176894 systemd[1]: Finished systemd-fsck-usr.service. Mar 19 11:46:41.191937 systemd-modules-load[179]: Inserted module 'overlay' Mar 19 11:46:41.196577 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:46:41.199239 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:46:41.224025 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:46:41.247034 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 19 11:46:41.246584 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:46:41.255785 systemd-modules-load[179]: Inserted module 'br_netfilter' Mar 19 11:46:41.259033 kernel: Bridge firewalling registered Mar 19 11:46:41.256652 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:46:41.261877 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:41.267701 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:46:41.273920 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:46:41.287547 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:46:41.297548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:46:41.303714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:46:41.315711 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 19 11:46:41.318489 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:46:41.328324 dracut-cmdline[211]: dracut-dracut-053 Mar 19 11:46:41.333506 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:46:41.333571 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:46:41.399531 systemd-resolved[221]: Positive Trust Anchors: Mar 19 11:46:41.399547 systemd-resolved[221]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:46:41.399607 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:46:41.426135 systemd-resolved[221]: Defaulting to hostname 'linux'. Mar 19 11:46:41.427502 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:46:41.430848 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:46:41.445399 kernel: SCSI subsystem initialized Mar 19 11:46:41.455405 kernel: Loading iSCSI transport class v2.0-870. Mar 19 11:46:41.467403 kernel: iscsi: registered transport (tcp) Mar 19 11:46:41.488564 kernel: iscsi: registered transport (qla4xxx) Mar 19 11:46:41.488665 kernel: QLogic iSCSI HBA Driver Mar 19 11:46:41.524937 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 19 11:46:41.537566 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 19 11:46:41.566410 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 19 11:46:41.566488 kernel: device-mapper: uevent: version 1.0.3 Mar 19 11:46:41.568396 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 19 11:46:41.611419 kernel: raid6: avx512x4 gen() 18191 MB/s Mar 19 11:46:41.630403 kernel: raid6: avx512x2 gen() 17897 MB/s Mar 19 11:46:41.649395 kernel: raid6: avx512x1 gen() 18093 MB/s Mar 19 11:46:41.668398 kernel: raid6: avx2x4 gen() 18098 MB/s Mar 19 11:46:41.687396 kernel: raid6: avx2x2 gen() 18064 MB/s Mar 19 11:46:41.707445 kernel: raid6: avx2x1 gen() 13527 MB/s Mar 19 11:46:41.707494 kernel: raid6: using algorithm avx512x4 gen() 18191 MB/s Mar 19 11:46:41.728453 kernel: raid6: .... xor() 6550 MB/s, rmw enabled Mar 19 11:46:41.728491 kernel: raid6: using avx512x2 recovery algorithm Mar 19 11:46:41.751405 kernel: xor: automatically using best checksumming function avx Mar 19 11:46:41.893411 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 19 11:46:41.902839 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:46:41.914601 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:46:41.933204 systemd-udevd[396]: Using default interface naming scheme 'v255'. Mar 19 11:46:41.938402 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:46:41.950661 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 19 11:46:41.977767 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Mar 19 11:46:42.008133 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:46:42.021576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:46:42.066146 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:46:42.080566 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 19 11:46:42.113770 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:46:42.124613 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:46:42.133666 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:46:42.155139 kernel: cryptd: max_cpu_qlen set to 1000 Mar 19 11:46:42.138774 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:46:42.158708 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:46:42.183544 kernel: hv_vmbus: Vmbus version:5.2 Mar 19 11:46:42.184217 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:46:42.207341 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 19 11:46:42.207434 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 19 11:46:42.207306 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:46:42.213400 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 19 11:46:42.221409 kernel: PTP clock support registered Mar 19 11:46:42.221793 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:46:42.228426 kernel: AVX2 version of gcm_enc/dec engaged. Mar 19 11:46:42.231282 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:46:42.873397 kernel: hv_utils: Registering HyperV Utility Driver Mar 19 11:46:42.873431 kernel: hv_vmbus: registering driver hv_utils Mar 19 11:46:42.873448 kernel: AES CTR mode by8 optimization enabled Mar 19 11:46:42.873473 kernel: hv_utils: Heartbeat IC version 3.0 Mar 19 11:46:42.873491 kernel: hv_utils: Shutdown IC version 3.2 Mar 19 11:46:42.873508 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 19 11:46:42.873525 kernel: hv_utils: TimeSync IC version 4.0 Mar 19 11:46:42.245452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:46:42.245655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:42.869391 systemd-resolved[221]: Clock change detected. Flushing caches. Mar 19 11:46:42.870154 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:46:42.892103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:46:42.895825 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:46:42.908361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:46:42.909043 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:42.921966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 19 11:46:42.931048 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 19 11:46:42.931079 kernel: hv_vmbus: registering driver hv_netvsc Mar 19 11:46:42.942437 kernel: hv_vmbus: registering driver hid_hyperv Mar 19 11:46:42.942491 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 19 11:46:42.950678 kernel: hv_vmbus: registering driver hv_storvsc Mar 19 11:46:42.950742 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 19 11:46:42.955832 kernel: scsi host1: storvsc_host_t Mar 19 11:46:42.962787 kernel: scsi host0: storvsc_host_t Mar 19 11:46:42.967083 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 19 11:46:42.968848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:42.980024 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Mar 19 11:46:42.987510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:46:43.011392 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:46:43.017033 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 19 11:46:43.017351 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 19 11:46:43.017367 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 19 11:46:43.034662 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 19 11:46:43.047957 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 19 11:46:43.048158 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 19 11:46:43.048326 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 19 11:46:43.048503 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 19 11:46:43.048685 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:46:43.048706 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 19 11:46:43.076091 kernel: hv_netvsc 7c1e5276-add7-7c1e-5276-add77c1e5276 eth0: VF slot 1 added Mar 19 11:46:43.087789 kernel: hv_vmbus: registering driver hv_pci Mar 19 11:46:43.093447 kernel: hv_pci 36a2952c-0673-4ff2-8560-5ecd02c0b705: PCI VMBus probing: Using version 0x10004 Mar 19 11:46:43.136612 kernel: hv_pci 36a2952c-0673-4ff2-8560-5ecd02c0b705: PCI host bridge to bus 0673:00 Mar 19 11:46:43.136851 kernel: pci_bus 0673:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Mar 19 11:46:43.137040 kernel: pci_bus 0673:00: No busn resource found for root bus, will use [bus 00-ff] Mar 19 11:46:43.137200 kernel: pci 0673:00:02.0: [15b3:1016] type 00 class 0x020000 Mar 19 11:46:43.137408 kernel: pci 0673:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 19 11:46:43.137581 kernel: pci 0673:00:02.0: enabling Extended Tags Mar 19 11:46:43.137899 kernel: pci 0673:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0673:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Mar 19 11:46:43.138129 kernel: pci_bus 0673:00: busn_res: [bus 00-ff] end is updated to 00 Mar 19 11:46:43.138287 kernel: pci 0673:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 19 11:46:43.254785 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (462) Mar 19 11:46:43.260924 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Mar 19 11:46:43.290789 kernel: BTRFS: device fsid 8d57424d-5abc-4888-810f-658d040a58e4 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (459) Mar 19 11:46:43.333485 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 19 11:46:43.383312 kernel: mlx5_core 0673:00:02.0: enabling device (0000 -> 0002) Mar 19 11:46:43.629553 kernel: mlx5_core 0673:00:02.0: firmware version: 14.30.5000 Mar 19 11:46:43.629797 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:46:43.629820 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:46:43.629836 kernel: hv_netvsc 7c1e5276-add7-7c1e-5276-add77c1e5276 eth0: VF registering: eth1 Mar 19 11:46:43.629999 kernel: mlx5_core 0673:00:02.0 eth1: joined to eth0 Mar 19 11:46:43.630185 kernel: mlx5_core 0673:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 19 11:46:43.399937 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 19 11:46:43.403507 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 19 11:46:43.438611 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 19 11:46:43.449994 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:46:43.652789 kernel: mlx5_core 0673:00:02.0 enP1651s1: renamed from eth1 Mar 19 11:46:44.489472 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:46:44.491656 disk-uuid[599]: The operation has completed successfully. Mar 19 11:46:44.598541 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:46:44.598660 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:46:44.647950 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 11:46:44.660133 sh[692]: Success Mar 19 11:46:44.682842 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 19 11:46:44.777735 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:46:44.795033 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:46:44.806344 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 11:46:44.836794 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57424d-5abc-4888-810f-658d040a58e4 Mar 19 11:46:44.836855 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:46:44.842985 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:46:44.846189 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:46:44.849028 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:46:44.925977 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:46:44.932614 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 11:46:44.944003 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:46:44.947933 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 19 11:46:44.981990 kernel: BTRFS info (device sda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:46:44.982070 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:46:44.982090 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:46:44.990786 kernel: BTRFS info (device sda6): auto enabling async discard Mar 19 11:46:45.000791 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 11:46:45.006541 kernel: BTRFS info (device sda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:46:45.014530 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:46:45.027029 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:46:45.055984 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:46:45.070702 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:46:45.101629 systemd-networkd[877]: lo: Link UP Mar 19 11:46:45.101640 systemd-networkd[877]: lo: Gained carrier Mar 19 11:46:45.103979 systemd-networkd[877]: Enumeration completed Mar 19 11:46:45.104306 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:46:45.107991 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:46:45.107996 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:46:45.111548 systemd[1]: Reached target network.target - Network. Mar 19 11:46:45.181781 kernel: mlx5_core 0673:00:02.0 enP1651s1: Link up Mar 19 11:46:45.223727 kernel: hv_netvsc 7c1e5276-add7-7c1e-5276-add77c1e5276 eth0: Data path switched to VF: enP1651s1 Mar 19 11:46:45.223305 systemd-networkd[877]: enP1651s1: Link UP Mar 19 11:46:45.223421 systemd-networkd[877]: eth0: Link UP Mar 19 11:46:45.223624 systemd-networkd[877]: eth0: Gained carrier Mar 19 11:46:45.223638 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:46:45.237229 systemd-networkd[877]: enP1651s1: Gained carrier Mar 19 11:46:45.265844 systemd-networkd[877]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 19 11:46:45.333950 ignition[831]: Ignition 2.20.0 Mar 19 11:46:45.333963 ignition[831]: Stage: fetch-offline Mar 19 11:46:45.336089 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:46:45.334011 ignition[831]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:45.334022 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:46:45.334152 ignition[831]: parsed url from cmdline: "" Mar 19 11:46:45.334156 ignition[831]: no config URL provided Mar 19 11:46:45.334164 ignition[831]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:46:45.334175 ignition[831]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:46:45.334182 ignition[831]: failed to fetch config: resource requires networking Mar 19 11:46:45.334440 ignition[831]: Ignition finished successfully Mar 19 11:46:45.365108 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 19 11:46:45.380807 ignition[887]: Ignition 2.20.0 Mar 19 11:46:45.380818 ignition[887]: Stage: fetch Mar 19 11:46:45.381050 ignition[887]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:45.381063 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:46:45.381168 ignition[887]: parsed url from cmdline: "" Mar 19 11:46:45.381171 ignition[887]: no config URL provided Mar 19 11:46:45.381176 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:46:45.381182 ignition[887]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:46:45.382663 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 19 11:46:45.471587 ignition[887]: GET result: OK Mar 19 11:46:45.471656 ignition[887]: config has been read from IMDS userdata Mar 19 11:46:45.471677 ignition[887]: parsing config with SHA512: dda9b5fd2e13fbe2acda99a68d55ec02fb89a1b291dce0f5bf88f6c47f29fbbc35ce788ea127dc634dbf6ce564eb0e43cbbe7af36250739d852b6b314727eb55 Mar 19 11:46:45.479088 unknown[887]: fetched base config from "system" Mar 19 11:46:45.479105 unknown[887]: fetched base config from "system" Mar 19 11:46:45.481586 ignition[887]: fetch: fetch complete Mar 19 11:46:45.479113 unknown[887]: fetched user config from "azure" Mar 19 11:46:45.481594 ignition[887]: fetch: fetch passed Mar 19 11:46:45.488048 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 19 11:46:45.481661 ignition[887]: Ignition finished successfully Mar 19 11:46:45.502151 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 19 11:46:45.520918 ignition[894]: Ignition 2.20.0 Mar 19 11:46:45.520930 ignition[894]: Stage: kargs Mar 19 11:46:45.524003 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:46:45.521152 ignition[894]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:45.521165 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:46:45.521903 ignition[894]: kargs: kargs passed Mar 19 11:46:45.521951 ignition[894]: Ignition finished successfully Mar 19 11:46:45.537030 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 19 11:46:45.551332 ignition[900]: Ignition 2.20.0 Mar 19 11:46:45.551344 ignition[900]: Stage: disks Mar 19 11:46:45.551573 ignition[900]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:45.551586 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:46:45.554924 ignition[900]: disks: disks passed Mar 19 11:46:45.554974 ignition[900]: Ignition finished successfully Mar 19 11:46:45.562088 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:46:45.569308 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:46:45.579389 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:46:45.585717 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:46:45.588381 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:46:45.593613 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:46:45.605964 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 19 11:46:45.630303 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 19 11:46:45.637091 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Mar 19 11:46:45.834901 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 19 11:46:45.925786 kernel: EXT4-fs (sda9): mounted filesystem 303a73dd-e104-408b-9302-bf91b04ba1ca r/w with ordered data mode. Quota mode: none. Mar 19 11:46:45.926417 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:46:45.930992 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:46:45.949868 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:46:45.965038 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919) Mar 19 11:46:45.961178 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:46:45.967904 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 19 11:46:45.978295 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:46:45.990923 kernel: BTRFS info (device sda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:46:45.990950 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:46:45.990961 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:46:45.990972 kernel: BTRFS info (device sda6): auto enabling async discard Mar 19 11:46:45.978389 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:46:45.998326 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:46:46.000971 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:46:46.013929 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 19 11:46:46.197197 coreos-metadata[921]: Mar 19 11:46:46.197 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 19 11:46:46.203033 coreos-metadata[921]: Mar 19 11:46:46.202 INFO Fetch successful Mar 19 11:46:46.205820 coreos-metadata[921]: Mar 19 11:46:46.205 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 19 11:46:46.213983 coreos-metadata[921]: Mar 19 11:46:46.213 INFO Fetch successful Mar 19 11:46:46.218840 coreos-metadata[921]: Mar 19 11:46:46.218 INFO wrote hostname ci-4230.1.0-a-c3eb9cf52f to /sysroot/etc/hostname Mar 19 11:46:46.228983 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 19 11:46:46.236954 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:46:46.256193 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:46:46.271040 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:46:46.277426 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:46:46.396034 systemd-networkd[877]: enP1651s1: Gained IPv6LL Mar 19 11:46:46.546150 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 19 11:46:46.556875 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:46:46.564935 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:46:46.572209 kernel: BTRFS info (device sda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:46:46.575885 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:46:46.598810 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 19 11:46:46.605195 ignition[1038]: INFO : Ignition 2.20.0 Mar 19 11:46:46.605195 ignition[1038]: INFO : Stage: mount Mar 19 11:46:46.610328 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:46.610328 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:46:46.610328 ignition[1038]: INFO : mount: mount passed Mar 19 11:46:46.610328 ignition[1038]: INFO : Ignition finished successfully Mar 19 11:46:46.623183 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:46:46.635864 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:46:46.837983 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:46:46.851788 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1050) Mar 19 11:46:46.857740 kernel: BTRFS info (device sda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:46:46.857821 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:46:46.860242 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:46:46.865786 kernel: BTRFS info (device sda6): auto enabling async discard Mar 19 11:46:46.867020 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:46:46.895604 ignition[1067]: INFO : Ignition 2.20.0 Mar 19 11:46:46.895604 ignition[1067]: INFO : Stage: files Mar 19 11:46:46.900403 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:46.900403 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:46:46.906752 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:46:46.910218 systemd-networkd[877]: eth0: Gained IPv6LL Mar 19 11:46:46.914568 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:46:46.914568 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:46:46.951795 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:46:46.955589 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:46:46.960187 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:46:46.960187 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:46:46.960187 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:46:46.960187 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:46:46.960187 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:46:46.960187 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 19 11:46:46.960187 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 19 11:46:46.960187 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file 
"/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 19 11:46:46.960187 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 19 11:46:46.956053 unknown[1067]: wrote ssh authorized keys file for user: core Mar 19 11:46:47.531479 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 19 11:46:47.919222 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 19 11:46:47.924879 ignition[1067]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:46:47.929318 ignition[1067]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:46:47.929318 ignition[1067]: INFO : files: files passed Mar 19 11:46:47.935385 ignition[1067]: INFO : Ignition finished successfully Mar 19 11:46:47.937870 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:46:47.948968 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:46:47.954969 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:46:47.961718 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:46:47.964137 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:46:47.975302 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:46:47.975302 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:46:47.983166 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:46:47.987996 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:46:47.991357 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:46:48.002954 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:46:48.031907 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:46:48.032026 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 19 11:46:48.041308 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:46:48.046447 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:46:48.049022 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:46:48.060005 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 19 11:46:48.077119 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:46:48.093937 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:46:48.106228 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:46:48.106444 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:46:48.107315 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:46:48.107726 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Mar 19 11:46:48.107937 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:46:48.108639 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:46:48.109111 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:46:48.109508 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:46:48.109967 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:46:48.110517 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:46:48.110956 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:46:48.111369 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:46:48.111809 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:46:48.112208 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 19 11:46:48.112614 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:46:48.113008 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:46:48.113143 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:46:48.114022 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:46:48.114473 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:46:48.114855 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:46:48.209812 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:46:48.216856 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:46:48.217041 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:46:48.224996 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:46:48.228752 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:46:48.235476 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:46:48.238029 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:46:48.243357 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 19 11:46:48.246294 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 19 11:46:48.257988 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:46:48.265982 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:46:48.270751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:46:48.273474 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:46:48.275780 ignition[1120]: INFO : Ignition 2.20.0 Mar 19 11:46:48.275780 ignition[1120]: INFO : Stage: umount Mar 19 11:46:48.275780 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:48.275780 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:46:48.275780 ignition[1120]: INFO : umount: umount passed Mar 19 11:46:48.281512 ignition[1120]: INFO : Ignition finished successfully Mar 19 11:46:48.294587 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:46:48.294820 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:46:48.305028 systemd[1]: ignition-mount.service: Deactivated successfully. 
Mar 19 11:46:48.305136 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:46:48.312584 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:46:48.312828 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:46:48.318117 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:46:48.318249 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:46:48.324564 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 19 11:46:48.324627 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 19 11:46:48.331921 systemd[1]: Stopped target network.target - Network. Mar 19 11:46:48.339273 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:46:48.339344 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:46:48.345324 systemd[1]: Stopped target paths.target - Path Units. Mar 19 11:46:48.352215 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 19 11:46:48.357182 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:46:48.360489 systemd[1]: Stopped target slices.target - Slice Units. Mar 19 11:46:48.366240 systemd[1]: Stopped target sockets.target - Socket Units. Mar 19 11:46:48.372783 systemd[1]: iscsid.socket: Deactivated successfully. Mar 19 11:46:48.377139 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:46:48.383774 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 19 11:46:48.383831 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:46:48.391193 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 19 11:46:48.393446 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 19 11:46:48.398389 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 19 11:46:48.398463 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 19 11:46:48.403665 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 19 11:46:48.410877 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 19 11:46:48.418074 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:46:48.419099 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:46:48.422121 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 19 11:46:48.433733 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 19 11:46:48.436331 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 19 11:46:48.444192 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 19 11:46:48.447425 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 19 11:46:48.447566 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 19 11:46:48.454606 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 19 11:46:48.456996 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 19 11:46:48.457080 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:46:48.473893 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 19 11:46:48.476458 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 19 11:46:48.478961 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 19 11:46:48.482149 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 11:46:48.482208 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:46:48.495596 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 19 11:46:48.495668 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 19 11:46:48.506323 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 19 11:46:48.506412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:46:48.513998 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:46:48.531353 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 19 11:46:48.531457 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:46:48.543509 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 19 11:46:48.543812 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:46:48.554009 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 19 11:46:48.554087 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 19 11:46:48.561857 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 19 11:46:48.561906 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:46:48.566915 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 19 11:46:48.566977 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:46:48.576441 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 19 11:46:48.576507 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 19 11:46:48.581495 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:46:48.581548 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:46:48.597957 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 19 11:46:48.606935 kernel: hv_netvsc 7c1e5276-add7-7c1e-5276-add77c1e5276 eth0: Data path switched from VF: enP1651s1 Mar 19 11:46:48.606917 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 19 11:46:48.606988 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:46:48.616837 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 19 11:46:48.616909 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:46:48.626308 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 19 11:46:48.626385 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:46:48.634785 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:46:48.637198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:48.645971 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 19 11:46:48.646051 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:46:48.653256 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 19 11:46:48.655823 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Mar 19 11:46:48.662551 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 19 11:46:48.662665 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 19 11:46:48.708888 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 19 11:46:48.709050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 19 11:46:48.714512 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 19 11:46:48.719075 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 19 11:46:48.719151 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 19 11:46:48.738978 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 19 11:46:49.149747 systemd[1]: Switching root. Mar 19 11:46:49.194144 systemd-journald[177]: Journal stopped Mar 19 11:46:51.363361 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Mar 19 11:46:51.363396 kernel: SELinux: policy capability network_peer_controls=1 Mar 19 11:46:51.363408 kernel: SELinux: policy capability open_perms=1 Mar 19 11:46:51.363418 kernel: SELinux: policy capability extended_socket_class=1 Mar 19 11:46:51.363426 kernel: SELinux: policy capability always_check_network=0 Mar 19 11:46:51.363437 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 19 11:46:51.363447 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 19 11:46:51.363460 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 19 11:46:51.363469 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 19 11:46:51.363477 kernel: audit: type=1403 audit(1742384809.511:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 19 11:46:51.363490 systemd[1]: Successfully loaded SELinux policy in 73.451ms. Mar 19 11:46:51.363500 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.886ms. Mar 19 11:46:51.363512 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:46:51.363522 systemd[1]: Detected virtualization microsoft. Mar 19 11:46:51.363539 systemd[1]: Detected architecture x86-64. Mar 19 11:46:51.363548 systemd[1]: Detected first boot. Mar 19 11:46:51.363561 systemd[1]: Hostname set to . Mar 19 11:46:51.363571 systemd[1]: Initializing machine ID from random generator. Mar 19 11:46:51.363583 zram_generator::config[1166]: No configuration found. Mar 19 11:46:51.363601 kernel: Guest personality initialized and is inactive Mar 19 11:46:51.363612 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Mar 19 11:46:51.363636 kernel: Initialized host personality Mar 19 11:46:51.363650 kernel: NET: Registered PF_VSOCK protocol family Mar 19 11:46:51.363663 systemd[1]: Populated /etc with preset unit settings. Mar 19 11:46:51.363679 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 19 11:46:51.363692 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 19 11:46:51.363706 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 19 11:46:51.363725 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 19 11:46:51.363739 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
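
After the switch into the real root, the log above shows the SELinux policy being loaded and a machine ID being initialized from the random generator on this first boot. As a small illustration, the sketch below reads the two artifacts those steps leave behind; the paths used are the conventional locations (/etc/machine-id and /sys/fs/selinux/enforce) and are an assumption here, not something printed in the log.

    # Illustrative check of two artifacts of the boot steps above: the freshly
    # generated machine ID and the SELinux enforcement state. The paths are the
    # conventional locations, assumed here rather than taken from the log.
    from pathlib import Path

    machine_id = Path("/etc/machine-id").read_text().strip()
    print("machine-id:", machine_id)

    enforce = Path("/sys/fs/selinux/enforce")
    if enforce.exists():
        # "0" means permissive, "1" means enforcing.
        print("selinux enforcing:", enforce.read_text().strip() == "1")
    else:
        print("selinux filesystem not mounted")
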
Mar 19 11:46:51.363755 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 19 11:46:51.363893 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 19 11:46:51.363909 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 19 11:46:51.363922 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 19 11:46:51.363934 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 19 11:46:51.363954 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 19 11:46:51.363969 systemd[1]: Created slice user.slice - User and Session Slice. Mar 19 11:46:51.363985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:46:51.364000 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:46:51.364017 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 19 11:46:51.364034 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 19 11:46:51.364057 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 19 11:46:51.364077 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:46:51.364095 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 19 11:46:51.364114 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:46:51.364131 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 19 11:46:51.364148 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 19 11:46:51.364165 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 19 11:46:51.364180 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 19 11:46:51.364195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:46:51.364211 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:46:51.364280 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:46:51.364298 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:46:51.364315 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 19 11:46:51.364332 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 19 11:46:51.364356 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 19 11:46:51.364372 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:46:51.364392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:46:51.364408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:46:51.364424 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 19 11:46:51.364439 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 19 11:46:51.364455 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 19 11:46:51.364469 systemd[1]: Mounting media.mount - External Media Directory... Mar 19 11:46:51.364482 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 19 11:46:51.364501 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 19 11:46:51.364516 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 19 11:46:51.364531 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 19 11:46:51.364549 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 19 11:46:51.364566 systemd[1]: Reached target machines.target - Containers. Mar 19 11:46:51.364583 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 19 11:46:51.364598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:46:51.364613 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:46:51.364630 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 19 11:46:51.364646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:46:51.364662 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:46:51.364677 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:46:51.364692 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 19 11:46:51.364707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:46:51.364722 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 19 11:46:51.364738 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 19 11:46:51.364756 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 19 11:46:51.364784 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 19 11:46:51.364800 systemd[1]: Stopped systemd-fsck-usr.service. Mar 19 11:46:51.364816 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:46:51.364832 kernel: loop: module loaded Mar 19 11:46:51.364847 kernel: ACPI: bus type drm_connector registered Mar 19 11:46:51.364860 kernel: fuse: init (API version 7.39) Mar 19 11:46:51.364875 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:46:51.364897 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:46:51.364914 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 19 11:46:51.364957 systemd-journald[1273]: Collecting audit messages is disabled. Mar 19 11:46:51.364995 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 19 11:46:51.365016 systemd-journald[1273]: Journal started Mar 19 11:46:51.365049 systemd-journald[1273]: Runtime Journal (/run/log/journal/1fb72737d1324fada2cc79a3cb0ff2a8) is 8M, max 158.8M, 150.8M free. Mar 19 11:46:50.766591 systemd[1]: Queued start job for default target multi-user.target. Mar 19 11:46:50.778757 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 19 11:46:50.779217 systemd[1]: systemd-journald.service: Deactivated successfully. 
Mar 19 11:46:51.381802 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 19 11:46:51.387839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:46:51.395379 systemd[1]: verity-setup.service: Deactivated successfully. Mar 19 11:46:51.395456 systemd[1]: Stopped verity-setup.service. Mar 19 11:46:51.404785 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:51.409787 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:46:51.413424 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 19 11:46:51.416306 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 19 11:46:51.423433 systemd[1]: Mounted media.mount - External Media Directory. Mar 19 11:46:51.426065 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 19 11:46:51.428973 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 19 11:46:51.435102 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 19 11:46:51.438400 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 19 11:46:51.442120 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:46:51.447666 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 19 11:46:51.448285 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 19 11:46:51.452565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:46:51.454081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:46:51.457545 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:46:51.458947 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:46:51.462460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:46:51.463849 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:46:51.467407 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 19 11:46:51.467657 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 19 11:46:51.470997 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:46:51.471529 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:46:51.475778 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:46:51.479305 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 19 11:46:51.483193 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 19 11:46:51.487076 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 19 11:46:51.502825 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:46:51.508021 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 19 11:46:51.514899 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 19 11:46:51.521844 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 19 11:46:51.524617 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Mar 19 11:46:51.524695 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:46:51.528403 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 19 11:46:51.533573 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 19 11:46:51.538974 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 19 11:46:51.541599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:46:51.548063 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 19 11:46:51.555964 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 19 11:46:51.558825 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:46:51.561174 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 19 11:46:51.564423 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:46:51.566062 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:46:51.573952 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 19 11:46:51.585932 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:46:51.591413 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 19 11:46:51.599373 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 19 11:46:51.604042 systemd-journald[1273]: Time spent on flushing to /var/log/journal/1fb72737d1324fada2cc79a3cb0ff2a8 is 29.539ms for 959 entries. Mar 19 11:46:51.604042 systemd-journald[1273]: System Journal (/var/log/journal/1fb72737d1324fada2cc79a3cb0ff2a8) is 8M, max 2.6G, 2.6G free. Mar 19 11:46:51.684901 systemd-journald[1273]: Received client request to flush runtime journal. Mar 19 11:46:51.684969 kernel: loop0: detected capacity change from 0 to 210664 Mar 19 11:46:51.608414 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 19 11:46:51.612091 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 19 11:46:51.616320 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 19 11:46:51.624608 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 19 11:46:51.660970 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 19 11:46:51.665481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:46:51.680925 udevadm[1312]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 19 11:46:51.691302 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 19 11:46:51.704990 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 19 11:46:51.722804 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 19 11:46:51.730823 kernel: loop1: detected capacity change from 0 to 147912 Mar 19 11:46:51.724106 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
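
The journald lines above report the runtime journal in /run (8M, capped against 158.8M), the persistent system journal (8M, capped against 2.6G), and the flush from the former to the latter. A quick way to confirm the resulting usage on a running system is sketched below; journalctl --disk-usage is an existing journalctl flag, while the wrapper around it is illustrative only.

    # Illustrative check of journal disk usage after the runtime -> persistent
    # flush logged above. "journalctl --disk-usage" is a standard journalctl
    # flag; the surrounding wrapper is just for illustration.
    import subprocess

    result = subprocess.run(
        ["journalctl", "--disk-usage"],
        capture_output=True, text=True, check=True,
    )
    # Typical output: "Archived and active journals take up 8.0M in the file system."
    print(result.stdout.strip())
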
Mar 19 11:46:51.731164 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Mar 19 11:46:51.731187 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Mar 19 11:46:51.738954 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:46:51.752529 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 19 11:46:51.819598 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 19 11:46:51.829981 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:46:51.853780 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. Mar 19 11:46:51.853816 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. Mar 19 11:46:51.861157 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:46:51.932793 kernel: loop2: detected capacity change from 0 to 138176 Mar 19 11:46:52.076790 kernel: loop3: detected capacity change from 0 to 28272 Mar 19 11:46:52.256032 kernel: loop4: detected capacity change from 0 to 210664 Mar 19 11:46:52.266882 kernel: loop5: detected capacity change from 0 to 147912 Mar 19 11:46:52.281830 kernel: loop6: detected capacity change from 0 to 138176 Mar 19 11:46:52.301790 kernel: loop7: detected capacity change from 0 to 28272 Mar 19 11:46:52.306373 (sd-merge)[1336]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 19 11:46:52.307747 (sd-merge)[1336]: Merged extensions into '/usr'. Mar 19 11:46:52.314688 systemd[1]: Reload requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)... Mar 19 11:46:52.314852 systemd[1]: Reloading... Mar 19 11:46:52.394828 zram_generator::config[1360]: No configuration found. Mar 19 11:46:52.650538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:46:52.731050 systemd[1]: Reloading finished in 415 ms. Mar 19 11:46:52.751896 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 19 11:46:52.762017 systemd[1]: Starting ensure-sysext.service... Mar 19 11:46:52.768579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:46:52.806061 systemd[1]: Reload requested from client PID 1422 ('systemctl') (unit ensure-sysext.service)... Mar 19 11:46:52.806253 systemd[1]: Reloading... Mar 19 11:46:52.811514 systemd-tmpfiles[1423]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 19 11:46:52.811896 systemd-tmpfiles[1423]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 19 11:46:52.817953 systemd-tmpfiles[1423]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 19 11:46:52.820263 systemd-tmpfiles[1423]: ACLs are not supported, ignoring. Mar 19 11:46:52.821604 systemd-tmpfiles[1423]: ACLs are not supported, ignoring. Mar 19 11:46:52.836399 systemd-tmpfiles[1423]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:46:52.836413 systemd-tmpfiles[1423]: Skipping /boot Mar 19 11:46:52.881390 systemd-tmpfiles[1423]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 19 11:46:52.881406 systemd-tmpfiles[1423]: Skipping /boot Mar 19 11:46:52.934269 zram_generator::config[1453]: No configuration found. Mar 19 11:46:53.073963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:46:53.163628 systemd[1]: Reloading finished in 356 ms. Mar 19 11:46:53.176781 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 19 11:46:53.193642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:46:53.206631 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:53.215045 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:46:53.236305 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 19 11:46:53.241174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:46:53.246877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:46:53.254087 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:46:53.265270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:46:53.268532 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:46:53.268824 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:46:53.278404 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 19 11:46:53.287161 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:46:53.303864 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:46:53.319047 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 19 11:46:53.322226 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:53.329603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:46:53.329902 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:46:53.334189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:46:53.334414 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:46:53.338422 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:46:53.338665 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:46:53.343911 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 19 11:46:53.358500 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:53.358746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:46:53.369073 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
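
The (sd-merge) entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extension images into /usr, which is also what the earlier loop-device capacity changes correspond to. The sketch below simply lists images from the usual sysext search directories; that directory set is an assumption based on systemd-sysext's documented defaults, not something stated in this log.

    # Illustrative listing of sysext images like the ones sd-merge reports above.
    # The search directories are systemd-sysext's documented defaults, assumed
    # here; the kubernetes.raw symlink written by Ignition lands in /etc/extensions.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for image in sorted(p.iterdir()):
            # Both *.raw images and directory-style extensions may appear here.
            print(f"{d}: {image.name}")
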
Mar 19 11:46:53.378092 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:46:53.382350 systemd-udevd[1537]: Using default interface naming scheme 'v255'. Mar 19 11:46:53.391379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:46:53.394440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:46:53.394643 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:46:53.396262 augenrules[1549]: No rules Mar 19 11:46:53.399358 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 19 11:46:53.402560 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:53.407210 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:46:53.407588 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:46:53.412210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:46:53.412453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:46:53.416206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:46:53.416425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:46:53.421167 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:46:53.421394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:46:53.441126 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Mar 19 11:46:53.445143 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:53.453036 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:46:53.455747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:46:53.459522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:46:53.471125 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:46:53.477073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:46:53.481796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:46:53.484669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:46:53.484859 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:46:53.485112 systemd[1]: Reached target time-set.target - System Time Set. Mar 19 11:46:53.489938 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:53.494273 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Mar 19 11:46:53.494525 augenrules[1559]: /sbin/augenrules: No change Mar 19 11:46:53.498686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:46:53.498952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:46:53.502675 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:46:53.502923 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:46:53.506795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:46:53.507128 augenrules[1580]: No rules Mar 19 11:46:53.507019 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:46:53.511034 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:46:53.511245 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:46:53.514338 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:46:53.514521 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:46:53.520103 systemd[1]: Finished ensure-sysext.service. Mar 19 11:46:53.527937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:46:53.528022 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:46:53.705040 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 19 11:46:53.761130 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:46:53.781990 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:46:53.834664 systemd-resolved[1535]: Positive Trust Anchors: Mar 19 11:46:53.836140 systemd-resolved[1535]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:46:53.836320 systemd-resolved[1535]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:46:53.864044 systemd-resolved[1535]: Using system hostname 'ci-4230.1.0-a-c3eb9cf52f'. Mar 19 11:46:53.868243 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:46:53.873511 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:46:53.947153 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 19 11:46:54.031740 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Mar 19 11:46:54.056269 systemd-networkd[1602]: lo: Link UP Mar 19 11:46:54.056280 systemd-networkd[1602]: lo: Gained carrier Mar 19 11:46:54.063259 systemd-networkd[1602]: Enumeration completed Mar 19 11:46:54.064990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:46:54.070888 systemd-networkd[1602]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 19 11:46:54.070902 systemd-networkd[1602]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:46:54.071633 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:46:54.076898 systemd[1]: Reached target network.target - Network. Mar 19 11:46:54.086949 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 19 11:46:54.092865 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 19 11:46:54.143795 kernel: mlx5_core 0673:00:02.0 enP1651s1: Link up Mar 19 11:46:54.165934 kernel: hv_netvsc 7c1e5276-add7-7c1e-5276-add77c1e5276 eth0: Data path switched to VF: enP1651s1 Mar 19 11:46:54.167587 systemd-networkd[1602]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:46:54.167788 systemd-networkd[1602]: enP1651s1: Link UP Mar 19 11:46:54.168021 systemd-networkd[1602]: eth0: Link UP Mar 19 11:46:54.168028 systemd-networkd[1602]: eth0: Gained carrier Mar 19 11:46:54.168053 systemd-networkd[1602]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:46:54.179277 systemd-networkd[1602]: enP1651s1: Gained carrier Mar 19 11:46:54.217077 kernel: hv_vmbus: registering driver hv_balloon Mar 19 11:46:54.217164 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 19 11:46:54.230819 systemd-networkd[1602]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 19 11:46:54.261818 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 19 11:46:54.271949 kernel: hv_vmbus: registering driver hyperv_fb Mar 19 11:46:54.271987 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 19 11:46:54.276156 kernel: mousedev: PS/2 mouse device common for all mice Mar 19 11:46:54.276202 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 19 11:46:54.282796 kernel: Console: switching to colour dummy device 80x25 Mar 19 11:46:54.286343 kernel: Console: switching to colour frame buffer device 128x48 Mar 19 11:46:54.315162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:46:54.315385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:54.323847 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:46:54.332907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:46:54.523962 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1608) Mar 19 11:46:54.610564 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 19 11:46:54.624956 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 19 11:46:54.716960 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 19 11:46:54.754077 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Mar 19 11:46:54.779029 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 19 11:46:54.788002 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
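
The networkd entries above bring up the synthetic eth0 device and the accelerated-networking VF enP1651s1 behind it, then record the DHCPv4 lease (10.200.8.19/24 via 10.200.8.1, served from 168.63.129.16). A minimal sketch for checking the corresponding link state from sysfs follows; the interface names come from the log, while the /sys/class/net layout is the usual kernel convention assumed here.

    # Illustrative link-state check for the interfaces brought up above.
    # Interface names come from the log; the /sys/class/net layout is the
    # standard kernel sysfs convention, assumed rather than shown in the log.
    from pathlib import Path

    for ifname in ("eth0", "enP1651s1"):
        state_file = Path("/sys/class/net") / ifname / "operstate"
        state = state_file.read_text().strip() if state_file.exists() else "missing"
        print(f"{ifname}: {state}")
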
Mar 19 11:46:54.817640 lvm[1712]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:46:54.853723 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 19 11:46:54.854210 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:46:54.864038 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 19 11:46:54.871563 lvm[1715]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:46:54.900033 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 19 11:46:54.956645 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:55.295708 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 19 11:46:55.302525 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 19 11:46:55.355977 systemd-networkd[1602]: enP1651s1: Gained IPv6LL Mar 19 11:46:55.726864 ldconfig[1304]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 19 11:46:55.739928 systemd-networkd[1602]: eth0: Gained IPv6LL Mar 19 11:46:55.742864 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 11:46:55.747150 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 19 11:46:55.751218 systemd[1]: Reached target network-online.target - Network is Online. Mar 19 11:46:55.758961 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 19 11:46:55.774254 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 19 11:46:55.777996 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:46:55.781042 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 19 11:46:55.784349 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 19 11:46:55.787856 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 19 11:46:55.790641 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 19 11:46:55.793968 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 19 11:46:55.797364 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 19 11:46:55.797406 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:46:55.799773 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:46:55.803293 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 19 11:46:55.807806 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 19 11:46:55.813123 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 19 11:46:55.816589 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 19 11:46:55.819899 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 19 11:46:55.833508 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Mar 19 11:46:55.836783 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 19 11:46:55.840819 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 19 11:46:55.844052 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:46:55.846511 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:46:55.849078 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:46:55.849116 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:46:55.865890 systemd[1]: Starting chronyd.service - NTP client/server... Mar 19 11:46:55.871917 systemd[1]: Starting containerd.service - containerd container runtime... Mar 19 11:46:55.880996 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 19 11:46:55.885948 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 19 11:46:55.890317 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 19 11:46:55.898201 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 19 11:46:55.900824 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 19 11:46:55.900873 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 19 11:46:55.902947 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 19 11:46:55.905611 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 19 11:46:55.908884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:46:55.913672 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 19 11:46:55.925962 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 19 11:46:55.931942 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 19 11:46:55.934577 KVP[1734]: KVP starting; pid is:1734 Mar 19 11:46:55.937128 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 19 11:46:55.949629 kernel: hv_utils: KVP IC version 4.0 Mar 19 11:46:55.948291 KVP[1734]: KVP LIC Version: 3.1 Mar 19 11:46:55.951990 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 19 11:46:55.955913 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 19 11:46:55.956613 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 11:46:55.964955 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 11:46:55.971650 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 11:46:55.990402 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 11:46:55.990675 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 19 11:46:56.010073 (chronyd)[1726]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 19 11:46:56.019057 chronyd[1755]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 19 11:46:56.065790 jq[1745]: true Mar 19 11:46:56.075794 jq[1730]: false Mar 19 11:46:56.081362 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 11:46:56.082723 (ntainerd)[1759]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 11:46:56.083695 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 19 11:46:56.092667 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 11:46:56.093047 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 19 11:46:56.098065 jq[1763]: true Mar 19 11:46:56.163857 chronyd[1755]: Timezone right/UTC failed leap second check, ignoring Mar 19 11:46:56.164599 chronyd[1755]: Loaded seccomp filter (level 2) Mar 19 11:46:56.168003 systemd[1]: Started chronyd.service - NTP client/server. Mar 19 11:46:56.174632 extend-filesystems[1731]: Found loop4 Mar 19 11:46:56.174632 extend-filesystems[1731]: Found loop5 Mar 19 11:46:56.174632 extend-filesystems[1731]: Found loop6 Mar 19 11:46:56.174632 extend-filesystems[1731]: Found loop7 Mar 19 11:46:56.174632 extend-filesystems[1731]: Found sda Mar 19 11:46:56.174632 extend-filesystems[1731]: Found sda1 Mar 19 11:46:56.186660 extend-filesystems[1731]: Found sda2 Mar 19 11:46:56.186660 extend-filesystems[1731]: Found sda3 Mar 19 11:46:56.186660 extend-filesystems[1731]: Found usr Mar 19 11:46:56.186660 extend-filesystems[1731]: Found sda4 Mar 19 11:46:56.186660 extend-filesystems[1731]: Found sda6 Mar 19 11:46:56.186660 extend-filesystems[1731]: Found sda7 Mar 19 11:46:56.186660 extend-filesystems[1731]: Found sda9 Mar 19 11:46:56.186660 extend-filesystems[1731]: Checking size of /dev/sda9 Mar 19 11:46:56.238371 systemd-logind[1741]: New seat seat0. Mar 19 11:46:56.242948 extend-filesystems[1731]: Old size kept for /dev/sda9 Mar 19 11:46:56.242948 extend-filesystems[1731]: Found sr0 Mar 19 11:46:56.301925 update_engine[1743]: I20250319 11:46:56.261116 1743 main.cc:92] Flatcar Update Engine starting Mar 19 11:46:56.258311 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 19 11:46:56.259168 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 19 11:46:56.278898 systemd-logind[1741]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 19 11:46:56.288495 systemd[1]: Started systemd-logind.service - User Login Management. Mar 19 11:46:56.291867 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 11:46:56.322949 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1608) Mar 19 11:46:56.624000 dbus-daemon[1729]: [system] SELinux support is enabled Mar 19 11:46:56.624584 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 19 11:46:56.636649 update_engine[1743]: I20250319 11:46:56.635506 1743 update_check_scheduler.cc:74] Next update check in 4m53s Mar 19 11:46:56.636518 dbus-daemon[1729]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 19 11:46:56.635674 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 11:46:56.635711 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 19 11:46:56.640285 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 11:46:56.640320 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 19 11:46:56.643433 systemd[1]: Started update-engine.service - Update Engine. Mar 19 11:46:56.655579 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 11:46:56.659511 sshd_keygen[1756]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:46:56.667117 bash[1783]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:46:56.672145 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 11:46:56.678364 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 19 11:46:56.711565 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:46:56.711820 coreos-metadata[1728]: Mar 19 11:46:56.711 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 19 11:46:56.718623 coreos-metadata[1728]: Mar 19 11:46:56.717 INFO Fetch successful Mar 19 11:46:56.718623 coreos-metadata[1728]: Mar 19 11:46:56.718 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 19 11:46:56.722463 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:46:56.726136 coreos-metadata[1728]: Mar 19 11:46:56.725 INFO Fetch successful Mar 19 11:46:56.726136 coreos-metadata[1728]: Mar 19 11:46:56.725 INFO Fetching http://168.63.129.16/machine/98b70117-0615-4e8e-afd6-4177248b9883/fba0b371%2De42f%2D4f8b%2D80d9%2Dd2f595ef3879.%5Fci%2D4230.1.0%2Da%2Dc3eb9cf52f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 19 11:46:56.728675 coreos-metadata[1728]: Mar 19 11:46:56.728 INFO Fetch successful Mar 19 11:46:56.728675 coreos-metadata[1728]: Mar 19 11:46:56.728 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 19 11:46:56.730608 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 19 11:46:56.744511 coreos-metadata[1728]: Mar 19 11:46:56.744 INFO Fetch successful Mar 19 11:46:56.754333 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 11:46:56.754596 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:46:56.782971 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 11:46:56.788275 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 19 11:46:56.800734 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 19 11:46:56.805689 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 19 11:46:56.824237 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
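[Annotation] The coreos-metadata fetches above hit the Azure WireServer (168.63.129.16) and the instance metadata service (169.254.169.254). When provisioning stalls, it can help to repeat the same probes by hand. A minimal sketch, assuming curl is available on the node and using the commonly documented request headers (the headers themselves are not visible in this log):

# Version list from the WireServer, as fetched by coreos-metadata above.
curl -s 'http://168.63.129.16/?comp=versions'
# Goal state document; x-ms-version is the header agents conventionally send.
curl -s -H 'x-ms-version: 2012-11-30' 'http://168.63.129.16/machine/?comp=goalstate'
# Instance metadata (IMDS) requires the Metadata header on every request.
curl -s -H 'Metadata: true' \
  'http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text'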
Mar 19 11:46:56.835587 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 11:46:56.841951 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 19 11:46:56.845160 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 11:46:57.275296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:46:57.283277 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:46:57.314494 locksmithd[1849]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 11:46:57.886347 kubelet[1884]: E0319 11:46:57.886259 1884 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:46:57.888991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:46:57.889232 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:46:57.889839 systemd[1]: kubelet.service: Consumed 894ms CPU time, 242.8M memory peak. Mar 19 11:46:58.618297 containerd[1759]: time="2025-03-19T11:46:58.618204500Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 11:46:58.641392 containerd[1759]: time="2025-03-19T11:46:58.641327300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:58.642938 containerd[1759]: time="2025-03-19T11:46:58.642885600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:46:58.642938 containerd[1759]: time="2025-03-19T11:46:58.642919000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 19 11:46:58.642938 containerd[1759]: time="2025-03-19T11:46:58.642941600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 11:46:58.643156 containerd[1759]: time="2025-03-19T11:46:58.643133100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 11:46:58.643203 containerd[1759]: time="2025-03-19T11:46:58.643161600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:58.643269 containerd[1759]: time="2025-03-19T11:46:58.643247600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:46:58.643307 containerd[1759]: time="2025-03-19T11:46:58.643267700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:58.643518 containerd[1759]: time="2025-03-19T11:46:58.643492200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:46:58.643518 containerd[1759]: time="2025-03-19T11:46:58.643514300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:58.643618 containerd[1759]: time="2025-03-19T11:46:58.643537700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:46:58.643618 containerd[1759]: time="2025-03-19T11:46:58.643549900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:58.643731 containerd[1759]: time="2025-03-19T11:46:58.643658300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:58.643996 containerd[1759]: time="2025-03-19T11:46:58.643966700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:58.644163 containerd[1759]: time="2025-03-19T11:46:58.644139200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:46:58.644163 containerd[1759]: time="2025-03-19T11:46:58.644158300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 19 11:46:58.644278 containerd[1759]: time="2025-03-19T11:46:58.644258500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 19 11:46:58.644340 containerd[1759]: time="2025-03-19T11:46:58.644321100Z" level=info msg="metadata content store policy set" policy=shared Mar 19 11:46:58.964295 containerd[1759]: time="2025-03-19T11:46:58.964154100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 11:46:58.964295 containerd[1759]: time="2025-03-19T11:46:58.964246500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 19 11:46:58.964295 containerd[1759]: time="2025-03-19T11:46:58.964288800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 11:46:58.964532 containerd[1759]: time="2025-03-19T11:46:58.964319900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 11:46:58.964532 containerd[1759]: time="2025-03-19T11:46:58.964344400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 11:46:58.964639 containerd[1759]: time="2025-03-19T11:46:58.964612200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965020600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965208700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965235300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965261300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965289500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965314600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965335300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965360200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965386500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965409200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965430600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965451900Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965483900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.966805 containerd[1759]: time="2025-03-19T11:46:58.965509200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965529400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965552300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965574100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965599100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965621200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965644100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965666900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965695300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965714800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965735200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965755500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965809600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965854500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965873800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967207 containerd[1759]: time="2025-03-19T11:46:58.965890200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 11:46:58.967530 containerd[1759]: time="2025-03-19T11:46:58.965955300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 19 11:46:58.967530 containerd[1759]: time="2025-03-19T11:46:58.965980300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 11:46:58.967530 containerd[1759]: time="2025-03-19T11:46:58.965995500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 19 11:46:58.967530 containerd[1759]: time="2025-03-19T11:46:58.966011900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 11:46:58.967530 containerd[1759]: time="2025-03-19T11:46:58.966028900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 19 11:46:58.967530 containerd[1759]: time="2025-03-19T11:46:58.966064000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 11:46:58.967530 containerd[1759]: time="2025-03-19T11:46:58.966084400Z" level=info msg="NRI interface is disabled by configuration." Mar 19 11:46:58.967530 containerd[1759]: time="2025-03-19T11:46:58.966098300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 19 11:46:58.967695 containerd[1759]: time="2025-03-19T11:46:58.966498000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 11:46:58.967695 containerd[1759]: time="2025-03-19T11:46:58.966568600Z" level=info msg="Connect containerd service" Mar 19 11:46:58.967695 containerd[1759]: time="2025-03-19T11:46:58.966628100Z" level=info msg="using legacy CRI server" Mar 19 11:46:58.967695 containerd[1759]: time="2025-03-19T11:46:58.966639600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 11:46:58.967695 containerd[1759]: time="2025-03-19T11:46:58.966815400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 11:46:58.970012 containerd[1759]: time="2025-03-19T11:46:58.969293800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:46:58.970365 
containerd[1759]: time="2025-03-19T11:46:58.970337800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 11:46:58.970675 containerd[1759]: time="2025-03-19T11:46:58.970626400Z" level=info msg="Start subscribing containerd event" Mar 19 11:46:58.970813 containerd[1759]: time="2025-03-19T11:46:58.970796500Z" level=info msg="Start recovering state" Mar 19 11:46:58.970947 containerd[1759]: time="2025-03-19T11:46:58.970662100Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 11:46:58.971004 containerd[1759]: time="2025-03-19T11:46:58.970936000Z" level=info msg="Start event monitor" Mar 19 11:46:58.971004 containerd[1759]: time="2025-03-19T11:46:58.970980300Z" level=info msg="Start snapshots syncer" Mar 19 11:46:58.971004 containerd[1759]: time="2025-03-19T11:46:58.970993800Z" level=info msg="Start cni network conf syncer for default" Mar 19 11:46:58.971106 containerd[1759]: time="2025-03-19T11:46:58.971004000Z" level=info msg="Start streaming server" Mar 19 11:46:58.971194 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 11:46:58.974286 containerd[1759]: time="2025-03-19T11:46:58.971437900Z" level=info msg="containerd successfully booted in 0.354246s" Mar 19 11:46:58.974799 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:46:58.978850 systemd[1]: Startup finished in 535ms (firmware) + 9.189s (loader) + 1.002s (kernel) + 8.132s (initrd) + 9.539s (userspace) = 28.398s. Mar 19 11:46:59.409706 login[1879]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 19 11:46:59.410120 login[1878]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 19 11:46:59.422447 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 19 11:46:59.428068 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 19 11:46:59.432827 systemd-logind[1741]: New session 1 of user core. Mar 19 11:46:59.445657 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 19 11:46:59.453088 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:46:59.462322 (systemd)[1912]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:46:59.465044 systemd-logind[1741]: New session c1 of user core. Mar 19 11:46:59.633487 systemd[1912]: Queued start job for default target default.target. Mar 19 11:46:59.642894 systemd[1912]: Created slice app.slice - User Application Slice. Mar 19 11:46:59.642931 systemd[1912]: Reached target paths.target - Paths. Mar 19 11:46:59.642983 systemd[1912]: Reached target timers.target - Timers. Mar 19 11:46:59.644400 systemd[1912]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:46:59.656070 systemd[1912]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:46:59.656303 systemd[1912]: Reached target sockets.target - Sockets. Mar 19 11:46:59.656358 systemd[1912]: Reached target basic.target - Basic System. Mar 19 11:46:59.656411 systemd[1912]: Reached target default.target - Main User Target. Mar 19 11:46:59.656445 systemd[1912]: Startup finished in 183ms. Mar 19 11:46:59.656615 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:46:59.663964 systemd[1]: Started session-1.scope - Session 1 of User core. 
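[Annotation] The containerd startup above ends with "failed to load cni during init ... no network config found in /etc/cni/net.d", which is expected on a fresh node: the CRI plugin keeps retrying until a CNI add-on installs a network configuration. Purely as a reading aid, a hand-written bridge configuration of the shape containerd is looking for would be roughly the following; the file name, network name and subnet are placeholders, and a real cluster's CNI plugin writes its own file instead:

# Illustrative only -- a cluster network add-on normally creates this.
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-containerd-net.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "ranges": [ [ { "subnet": "10.88.0.0/16" } ] ] }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF

Note also the "Start cri plugin" dump above: runc is configured with SystemdCgroup:true, which is what the kubelet's cgroup driver must eventually match.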
Mar 19 11:47:00.279355 waagent[1872]: 2025-03-19T11:47:00.279232Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.279809Z INFO Daemon Daemon OS: flatcar 4230.1.0 Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.279957Z INFO Daemon Daemon Python: 3.11.11 Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.280208Z INFO Daemon Daemon Run daemon Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.280381Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.0' Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.280460Z INFO Daemon Daemon Using waagent for provisioning Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.280634Z INFO Daemon Daemon Activate resource disk Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.280756Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.294612Z INFO Daemon Daemon Found device: None Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.294784Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.294877Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.296007Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 19 11:47:00.328413 waagent[1872]: 2025-03-19T11:47:00.296158Z INFO Daemon Daemon Running default provisioning handler Mar 19 11:47:00.332173 waagent[1872]: 2025-03-19T11:47:00.332063Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 19 11:47:00.351222 waagent[1872]: 2025-03-19T11:47:00.339182Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 19 11:47:00.351222 waagent[1872]: 2025-03-19T11:47:00.341366Z INFO Daemon Daemon cloud-init is enabled: False Mar 19 11:47:00.351222 waagent[1872]: 2025-03-19T11:47:00.343635Z INFO Daemon Daemon Copying ovf-env.xml Mar 19 11:47:00.399796 waagent[1872]: 2025-03-19T11:47:00.391832Z INFO Daemon Daemon Successfully mounted dvd Mar 19 11:47:00.413198 login[1879]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 19 11:47:00.415004 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 19 11:47:00.418626 waagent[1872]: 2025-03-19T11:47:00.417628Z INFO Daemon Daemon Detect protocol endpoint Mar 19 11:47:00.422323 systemd-logind[1741]: New session 2 of user core. Mar 19 11:47:00.429364 waagent[1872]: 2025-03-19T11:47:00.429251Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 19 11:47:00.429744 waagent[1872]: 2025-03-19T11:47:00.429691Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 19 11:47:00.433346 waagent[1872]: 2025-03-19T11:47:00.430797Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 19 11:47:00.438931 waagent[1872]: 2025-03-19T11:47:00.436735Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 19 11:47:00.444379 waagent[1872]: 2025-03-19T11:47:00.442089Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 19 11:47:00.449096 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 19 11:47:00.617641 waagent[1872]: 2025-03-19T11:47:00.617492Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 19 11:47:00.625573 waagent[1872]: 2025-03-19T11:47:00.618061Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 19 11:47:00.625573 waagent[1872]: 2025-03-19T11:47:00.618933Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 19 11:47:00.712259 waagent[1872]: 2025-03-19T11:47:00.712135Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 19 11:47:00.716241 waagent[1872]: 2025-03-19T11:47:00.716149Z INFO Daemon Daemon Forcing an update of the goal state. Mar 19 11:47:00.724433 waagent[1872]: 2025-03-19T11:47:00.724376Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 19 11:47:00.742469 waagent[1872]: 2025-03-19T11:47:00.742401Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Mar 19 11:47:00.761698 waagent[1872]: 2025-03-19T11:47:00.743200Z INFO Daemon Mar 19 11:47:00.761698 waagent[1872]: 2025-03-19T11:47:00.745719Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 31c8ba6a-960f-49ee-a7a2-29f1b91e6e4f eTag: 11122167065659645387 source: Fabric] Mar 19 11:47:00.761698 waagent[1872]: 2025-03-19T11:47:00.749868Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 19 11:47:00.761698 waagent[1872]: 2025-03-19T11:47:00.751623Z INFO Daemon Mar 19 11:47:00.761698 waagent[1872]: 2025-03-19T11:47:00.752700Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 19 11:47:00.765424 waagent[1872]: 2025-03-19T11:47:00.765374Z INFO Daemon Daemon Downloading artifacts profile blob Mar 19 11:47:00.897639 waagent[1872]: 2025-03-19T11:47:00.897478Z INFO Daemon Downloaded certificate {'thumbprint': 'C24B3FB0DED2F2FB639789A8A8D755B8B0424711', 'hasPrivateKey': True} Mar 19 11:47:00.905026 waagent[1872]: 2025-03-19T11:47:00.904958Z INFO Daemon Downloaded certificate {'thumbprint': '0DEE508D8B968A25925DCF1A52553B62266784E0', 'hasPrivateKey': False} Mar 19 11:47:00.911107 waagent[1872]: 2025-03-19T11:47:00.911045Z INFO Daemon Fetch goal state completed Mar 19 11:47:00.945550 waagent[1872]: 2025-03-19T11:47:00.945469Z INFO Daemon Daemon Starting provisioning Mar 19 11:47:00.971470 waagent[1872]: 2025-03-19T11:47:00.948688Z INFO Daemon Daemon Handle ovf-env.xml. Mar 19 11:47:00.971470 waagent[1872]: 2025-03-19T11:47:00.948917Z INFO Daemon Daemon Set hostname [ci-4230.1.0-a-c3eb9cf52f] Mar 19 11:47:00.971470 waagent[1872]: 2025-03-19T11:47:00.951540Z INFO Daemon Daemon Publish hostname [ci-4230.1.0-a-c3eb9cf52f] Mar 19 11:47:00.971470 waagent[1872]: 2025-03-19T11:47:00.958528Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 19 11:47:00.971470 waagent[1872]: 2025-03-19T11:47:00.961650Z INFO Daemon Daemon Primary interface is [eth0] Mar 19 11:47:00.976336 systemd-networkd[1602]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:47:00.976346 systemd-networkd[1602]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
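[Annotation] The "potentially unpredictable interface name" message above only records that eth0 matched the catch-all unit shipped in /usr/lib/systemd/network/. Two standard systemd-networkd commands show what was matched and what lease resulted; the commented-out unit content below is just the typical shape of such a catch-all file, not a verbatim copy of the one on this image:

# Show the lease, DNS servers and the .network file eth0 ended up bound to.
networkctl status eth0
# Inspect the catch-all unit named in the log.
cat /usr/lib/systemd/network/zz-default.network
# A catch-all DHCP unit of this kind typically boils down to:
#   [Match]
#   Name=*
#   [Network]
#   DHCP=yes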
Mar 19 11:47:00.976401 systemd-networkd[1602]: eth0: DHCP lease lost Mar 19 11:47:00.977661 waagent[1872]: 2025-03-19T11:47:00.977573Z INFO Daemon Daemon Create user account if not exists Mar 19 11:47:00.990529 waagent[1872]: 2025-03-19T11:47:00.989208Z INFO Daemon Daemon User core already exists, skip useradd Mar 19 11:47:00.990529 waagent[1872]: 2025-03-19T11:47:00.989352Z INFO Daemon Daemon Configure sudoer Mar 19 11:47:00.990529 waagent[1872]: 2025-03-19T11:47:00.989817Z INFO Daemon Daemon Configure sshd Mar 19 11:47:00.990529 waagent[1872]: 2025-03-19T11:47:00.990107Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 19 11:47:00.990529 waagent[1872]: 2025-03-19T11:47:00.990198Z INFO Daemon Daemon Deploy ssh public key. Mar 19 11:47:01.039826 systemd-networkd[1602]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 19 11:47:08.139969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 19 11:47:08.147396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:47:08.251744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:47:08.263114 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:47:08.887452 kubelet[1975]: E0319 11:47:08.887349 1975 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:47:08.891473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:47:08.891665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:47:08.892094 systemd[1]: kubelet.service: Consumed 155ms CPU time, 99.1M memory peak. Mar 19 11:47:19.142484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 19 11:47:19.155432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:47:19.256605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:47:19.261046 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:47:19.865165 kubelet[1991]: E0319 11:47:19.865102 1991 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:47:19.867856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:47:19.868056 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:47:19.868466 systemd[1]: kubelet.service: Consumed 138ms CPU time, 95.8M memory peak. Mar 19 11:47:19.957957 chronyd[1755]: Selected source PHC0 Mar 19 11:47:29.921190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 19 11:47:29.931246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
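[Annotation] The kubelet failures that begin here and repeat through the rest of the log all have the same cause: /var/lib/kubelet/config.yaml does not exist yet. On a node like this the file is normally generated later by kubeadm during init/join, so the crash loop is expected rather than a fault. Purely to show the shape of the missing file, a minimal KubeletConfiguration would look roughly like the sketch below; the values are placeholders and should not be hand-written on a node that kubeadm will manage:

# Illustrative only -- kubeadm writes the real file during init/join.
mkdir -p /var/lib/kubelet
cat >/var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # must match SystemdCgroup:true in the containerd config above
authentication:
  anonymous:
    enabled: false
authorization:
  mode: Webhook
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10                   # placeholder cluster DNS service address
EOF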
Mar 19 11:47:30.024476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:47:30.028623 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:47:30.634187 kubelet[2007]: E0319 11:47:30.634102 2007 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:47:30.636881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:47:30.637078 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:47:30.637537 systemd[1]: kubelet.service: Consumed 142ms CPU time, 100M memory peak. Mar 19 11:47:31.075262 waagent[1872]: 2025-03-19T11:47:31.075193Z INFO Daemon Daemon Provisioning complete Mar 19 11:47:31.091177 waagent[1872]: 2025-03-19T11:47:31.091107Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 19 11:47:31.109278 waagent[1872]: 2025-03-19T11:47:31.091521Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 19 11:47:31.109278 waagent[1872]: 2025-03-19T11:47:31.091696Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 19 11:47:31.234626 waagent[2014]: 2025-03-19T11:47:31.234519Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 19 11:47:31.235094 waagent[2014]: 2025-03-19T11:47:31.234705Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.0 Mar 19 11:47:31.235094 waagent[2014]: 2025-03-19T11:47:31.234814Z INFO ExtHandler ExtHandler Python: 3.11.11 Mar 19 11:47:31.252195 waagent[2014]: 2025-03-19T11:47:31.252091Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 19 11:47:31.252453 waagent[2014]: 2025-03-19T11:47:31.252396Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 19 11:47:31.252560 waagent[2014]: 2025-03-19T11:47:31.252515Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 19 11:47:31.261075 waagent[2014]: 2025-03-19T11:47:31.260992Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 19 11:47:31.271989 waagent[2014]: 2025-03-19T11:47:31.271921Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 19 11:47:31.272613 waagent[2014]: 2025-03-19T11:47:31.272548Z INFO ExtHandler Mar 19 11:47:31.272725 waagent[2014]: 2025-03-19T11:47:31.272658Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 90342fbf-612f-47ed-ad3c-d4c61bc90252 eTag: 11122167065659645387 source: Fabric] Mar 19 11:47:31.273069 waagent[2014]: 2025-03-19T11:47:31.273014Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 19 11:47:31.273682 waagent[2014]: 2025-03-19T11:47:31.273623Z INFO ExtHandler Mar 19 11:47:31.273758 waagent[2014]: 2025-03-19T11:47:31.273712Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 19 11:47:31.278263 waagent[2014]: 2025-03-19T11:47:31.278222Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 19 11:47:31.347809 waagent[2014]: 2025-03-19T11:47:31.347635Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C24B3FB0DED2F2FB639789A8A8D755B8B0424711', 'hasPrivateKey': True} Mar 19 11:47:31.348231 waagent[2014]: 2025-03-19T11:47:31.348174Z INFO ExtHandler Downloaded certificate {'thumbprint': '0DEE508D8B968A25925DCF1A52553B62266784E0', 'hasPrivateKey': False} Mar 19 11:47:31.348722 waagent[2014]: 2025-03-19T11:47:31.348664Z INFO ExtHandler Fetch goal state completed Mar 19 11:47:31.362289 waagent[2014]: 2025-03-19T11:47:31.362212Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2014 Mar 19 11:47:31.362478 waagent[2014]: 2025-03-19T11:47:31.362423Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 19 11:47:31.364199 waagent[2014]: 2025-03-19T11:47:31.364138Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.0', '', 'Flatcar Container Linux by Kinvolk'] Mar 19 11:47:31.364600 waagent[2014]: 2025-03-19T11:47:31.364546Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 19 11:47:31.373433 waagent[2014]: 2025-03-19T11:47:31.373382Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 19 11:47:31.373661 waagent[2014]: 2025-03-19T11:47:31.373613Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 19 11:47:31.380644 waagent[2014]: 2025-03-19T11:47:31.380489Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 19 11:47:31.388035 systemd[1]: Reload requested from client PID 2029 ('systemctl') (unit waagent.service)... Mar 19 11:47:31.388051 systemd[1]: Reloading... Mar 19 11:47:31.480798 zram_generator::config[2071]: No configuration found. Mar 19 11:47:31.605247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:47:31.720928 systemd[1]: Reloading finished in 332 ms. Mar 19 11:47:31.737943 waagent[2014]: 2025-03-19T11:47:31.737387Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 19 11:47:31.747053 systemd[1]: Reload requested from client PID 2124 ('systemctl') (unit waagent.service)... Mar 19 11:47:31.747226 systemd[1]: Reloading... Mar 19 11:47:31.822808 zram_generator::config[2160]: No configuration found. Mar 19 11:47:31.967527 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:47:32.083302 systemd[1]: Reloading finished in 335 ms. 
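[Annotation] Both systemctl reloads above print the same warning about docker.socket referencing /var/run/docker.sock. systemd rewrites the path itself, so nothing is broken; if the message is unwanted, a drop-in that resets ListenStream to the /run path silences it. A sketch, assuming the drop-in file name is free to choose:

mkdir -p /etc/systemd/system/docker.socket.d
cat >/etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
[Socket]
# An empty assignment clears the inherited ListenStream before re-setting it.
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload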
Mar 19 11:47:32.103042 waagent[2014]: 2025-03-19T11:47:32.101744Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 19 11:47:32.103042 waagent[2014]: 2025-03-19T11:47:32.102045Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 19 11:47:32.279731 waagent[2014]: 2025-03-19T11:47:32.279577Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 19 11:47:32.280454 waagent[2014]: 2025-03-19T11:47:32.280373Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 19 11:47:32.281360 waagent[2014]: 2025-03-19T11:47:32.281293Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 19 11:47:32.281843 waagent[2014]: 2025-03-19T11:47:32.281784Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 19 11:47:32.282014 waagent[2014]: 2025-03-19T11:47:32.281926Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 19 11:47:32.282136 waagent[2014]: 2025-03-19T11:47:32.282095Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 19 11:47:32.282537 waagent[2014]: 2025-03-19T11:47:32.282478Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 19 11:47:32.282760 waagent[2014]: 2025-03-19T11:47:32.282713Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 19 11:47:32.283275 waagent[2014]: 2025-03-19T11:47:32.283223Z INFO EnvHandler ExtHandler Configure routes Mar 19 11:47:32.283353 waagent[2014]: 2025-03-19T11:47:32.283282Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 19 11:47:32.283429 waagent[2014]: 2025-03-19T11:47:32.283375Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 19 11:47:32.284175 waagent[2014]: 2025-03-19T11:47:32.284118Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 19 11:47:32.284509 waagent[2014]: 2025-03-19T11:47:32.284397Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 19 11:47:32.284656 waagent[2014]: 2025-03-19T11:47:32.284481Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 19 11:47:32.285266 waagent[2014]: 2025-03-19T11:47:32.285219Z INFO EnvHandler ExtHandler Gateway:None Mar 19 11:47:32.285654 waagent[2014]: 2025-03-19T11:47:32.285591Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 19 11:47:32.285654 waagent[2014]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 19 11:47:32.285654 waagent[2014]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Mar 19 11:47:32.285654 waagent[2014]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 19 11:47:32.285654 waagent[2014]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 19 11:47:32.285654 waagent[2014]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 19 11:47:32.285654 waagent[2014]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 19 11:47:32.286208 waagent[2014]: 2025-03-19T11:47:32.286074Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 19 11:47:32.286208 waagent[2014]: 2025-03-19T11:47:32.286160Z INFO EnvHandler ExtHandler Routes:None Mar 19 11:47:32.292295 waagent[2014]: 2025-03-19T11:47:32.292219Z INFO ExtHandler ExtHandler Mar 19 11:47:32.292392 waagent[2014]: 2025-03-19T11:47:32.292349Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 08e5e976-fc51-4379-b010-abeb4acbee4a correlation f1a48f3a-fbe7-44db-ab05-794714bb00fa created: 2025-03-19T11:46:19.614296Z] Mar 19 11:47:32.294732 waagent[2014]: 2025-03-19T11:47:32.294683Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 19 11:47:32.295300 waagent[2014]: 2025-03-19T11:47:32.295252Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Mar 19 11:47:32.311337 waagent[2014]: 2025-03-19T11:47:32.311254Z INFO MonitorHandler ExtHandler Network interfaces: Mar 19 11:47:32.311337 waagent[2014]: Executing ['ip', '-a', '-o', 'link']: Mar 19 11:47:32.311337 waagent[2014]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 19 11:47:32.311337 waagent[2014]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:76:ad:d7 brd ff:ff:ff:ff:ff:ff Mar 19 11:47:32.311337 waagent[2014]: 3: enP1651s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:76:ad:d7 brd ff:ff:ff:ff:ff:ff\ altname enP1651p0s2 Mar 19 11:47:32.311337 waagent[2014]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 19 11:47:32.311337 waagent[2014]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 19 11:47:32.311337 waagent[2014]: 2: eth0 inet 10.200.8.19/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 19 11:47:32.311337 waagent[2014]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 19 11:47:32.311337 waagent[2014]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 19 11:47:32.311337 waagent[2014]: 2: eth0 inet6 fe80::7e1e:52ff:fe76:add7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 19 11:47:32.311337 waagent[2014]: 3: enP1651s1 inet6 fe80::7e1e:52ff:fe76:add7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 19 11:47:32.334897 waagent[2014]: 2025-03-19T11:47:32.334834Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG 
HeartbeatCounter: 0;HeartbeatId: 0B89C74E-C513-4746-B0AD-58828B183F0B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 19 11:47:32.355064 waagent[2014]: 2025-03-19T11:47:32.354998Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Mar 19 11:47:32.355064 waagent[2014]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:47:32.355064 waagent[2014]: pkts bytes target prot opt in out source destination Mar 19 11:47:32.355064 waagent[2014]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:47:32.355064 waagent[2014]: pkts bytes target prot opt in out source destination Mar 19 11:47:32.355064 waagent[2014]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:47:32.355064 waagent[2014]: pkts bytes target prot opt in out source destination Mar 19 11:47:32.355064 waagent[2014]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 19 11:47:32.355064 waagent[2014]: 1 60 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 19 11:47:32.355064 waagent[2014]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 19 11:47:32.358456 waagent[2014]: 2025-03-19T11:47:32.358399Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 19 11:47:32.358456 waagent[2014]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:47:32.358456 waagent[2014]: pkts bytes target prot opt in out source destination Mar 19 11:47:32.358456 waagent[2014]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:47:32.358456 waagent[2014]: pkts bytes target prot opt in out source destination Mar 19 11:47:32.358456 waagent[2014]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:47:32.358456 waagent[2014]: pkts bytes target prot opt in out source destination Mar 19 11:47:32.358456 waagent[2014]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 19 11:47:32.358456 waagent[2014]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 19 11:47:32.358456 waagent[2014]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 19 11:47:32.358899 waagent[2014]: 2025-03-19T11:47:32.358703Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 19 11:47:40.671070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 19 11:47:40.684041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:47:40.810850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:47:40.823115 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:47:41.385731 kubelet[2263]: E0319 11:47:41.385673 2263 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:47:41.388408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:47:41.388607 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:47:41.389002 systemd[1]: kubelet.service: Consumed 155ms CPU time, 97.9M memory peak. Mar 19 11:47:42.320813 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Mar 19 11:47:42.332039 update_engine[1743]: I20250319 11:47:42.331944 1743 update_attempter.cc:509] Updating boot flags... 
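[Annotation] The two firewall-rule dumps above are easier to read as the iptables invocations they correspond to. Roughly the following; waagent installs and maintains these rules itself, and the table it targets can differ between agent versions, so this is a reading aid rather than something to apply by hand:

# Allow DNS to the WireServer, allow root-owned traffic to it,
# and drop any other new connections to 168.63.129.16.
iptables -A OUTPUT -d 168.63.129.16/32 -p tcp --dport 53 -j ACCEPT
iptables -A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
iptables -A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP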
Mar 19 11:47:44.314827 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2286) Mar 19 11:47:44.459866 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2285) Mar 19 11:47:51.421045 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 19 11:47:51.434789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:47:51.546012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:47:51.554142 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:47:51.598959 kubelet[2393]: E0319 11:47:51.598895 2393 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:47:51.601600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:47:51.601819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:47:51.602370 systemd[1]: kubelet.service: Consumed 151ms CPU time, 97.6M memory peak. Mar 19 11:48:01.671281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 19 11:48:01.683042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:48:01.782468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:48:01.789097 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:48:02.409681 kubelet[2409]: E0319 11:48:02.409527 2409 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:48:02.412412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:48:02.412630 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:48:02.413065 systemd[1]: kubelet.service: Consumed 135ms CPU time, 95.7M memory peak. Mar 19 11:48:06.778246 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 19 11:48:06.783073 systemd[1]: Started sshd@0-10.200.8.19:22-10.200.16.10:47062.service - OpenSSH per-connection server daemon (10.200.16.10:47062). Mar 19 11:48:07.575141 sshd[2418]: Accepted publickey for core from 10.200.16.10 port 47062 ssh2: RSA SHA256:QNBZI8gqF7+eamKHOXmy3Klr/1eBDa4XeBs57RUcH5M Mar 19 11:48:07.577030 sshd-session[2418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:07.582376 systemd-logind[1741]: New session 3 of user core. Mar 19 11:48:07.585936 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:48:08.149120 systemd[1]: Started sshd@1-10.200.8.19:22-10.200.16.10:47070.service - OpenSSH per-connection server daemon (10.200.16.10:47070). 
Mar 19 11:48:08.794004 sshd[2423]: Accepted publickey for core from 10.200.16.10 port 47070 ssh2: RSA SHA256:QNBZI8gqF7+eamKHOXmy3Klr/1eBDa4XeBs57RUcH5M Mar 19 11:48:08.795628 sshd-session[2423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:08.800231 systemd-logind[1741]: New session 4 of user core. Mar 19 11:48:08.806972 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:48:09.257011 sshd[2425]: Connection closed by 10.200.16.10 port 47070 Mar 19 11:48:09.258057 sshd-session[2423]: pam_unix(sshd:session): session closed for user core Mar 19 11:48:09.262411 systemd[1]: sshd@1-10.200.8.19:22-10.200.16.10:47070.service: Deactivated successfully. Mar 19 11:48:09.265247 systemd[1]: session-4.scope: Deactivated successfully. Mar 19 11:48:09.266827 systemd-logind[1741]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:48:09.267747 systemd-logind[1741]: Removed session 4. Mar 19 11:48:09.384104 systemd[1]: Started sshd@2-10.200.8.19:22-10.200.16.10:54768.service - OpenSSH per-connection server daemon (10.200.16.10:54768). Mar 19 11:48:10.027688 sshd[2431]: Accepted publickey for core from 10.200.16.10 port 54768 ssh2: RSA SHA256:QNBZI8gqF7+eamKHOXmy3Klr/1eBDa4XeBs57RUcH5M Mar 19 11:48:10.029257 sshd-session[2431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:10.036608 systemd-logind[1741]: New session 5 of user core. Mar 19 11:48:10.044970 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 19 11:48:10.482583 sshd[2433]: Connection closed by 10.200.16.10 port 54768 Mar 19 11:48:10.483429 sshd-session[2431]: pam_unix(sshd:session): session closed for user core Mar 19 11:48:10.487598 systemd[1]: sshd@2-10.200.8.19:22-10.200.16.10:54768.service: Deactivated successfully. Mar 19 11:48:10.489716 systemd[1]: session-5.scope: Deactivated successfully. Mar 19 11:48:10.490530 systemd-logind[1741]: Session 5 logged out. Waiting for processes to exit. Mar 19 11:48:10.491456 systemd-logind[1741]: Removed session 5. Mar 19 11:48:10.608198 systemd[1]: Started sshd@3-10.200.8.19:22-10.200.16.10:54782.service - OpenSSH per-connection server daemon (10.200.16.10:54782). Mar 19 11:48:11.253257 sshd[2439]: Accepted publickey for core from 10.200.16.10 port 54782 ssh2: RSA SHA256:QNBZI8gqF7+eamKHOXmy3Klr/1eBDa4XeBs57RUcH5M Mar 19 11:48:11.254809 sshd-session[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:11.259706 systemd-logind[1741]: New session 6 of user core. Mar 19 11:48:11.265954 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 19 11:48:11.710175 sshd[2441]: Connection closed by 10.200.16.10 port 54782 Mar 19 11:48:11.711136 sshd-session[2439]: pam_unix(sshd:session): session closed for user core Mar 19 11:48:11.715620 systemd[1]: sshd@3-10.200.8.19:22-10.200.16.10:54782.service: Deactivated successfully. Mar 19 11:48:11.717604 systemd[1]: session-6.scope: Deactivated successfully. Mar 19 11:48:11.718398 systemd-logind[1741]: Session 6 logged out. Waiting for processes to exit. Mar 19 11:48:11.719316 systemd-logind[1741]: Removed session 6. Mar 19 11:48:11.829085 systemd[1]: Started sshd@4-10.200.8.19:22-10.200.16.10:54784.service - OpenSSH per-connection server daemon (10.200.16.10:54784). Mar 19 11:48:12.421036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 19 11:48:12.427018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 19 11:48:12.474487 sshd[2447]: Accepted publickey for core from 10.200.16.10 port 54784 ssh2: RSA SHA256:QNBZI8gqF7+eamKHOXmy3Klr/1eBDa4XeBs57RUcH5M Mar 19 11:48:12.475298 sshd-session[2447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:12.482364 systemd-logind[1741]: New session 7 of user core. Mar 19 11:48:12.491123 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 19 11:48:12.603370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:48:12.607657 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:48:12.649309 kubelet[2458]: E0319 11:48:12.649246 2458 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:48:12.652014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:48:12.652222 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:48:12.652755 systemd[1]: kubelet.service: Consumed 151ms CPU time, 97M memory peak. Mar 19 11:48:13.084372 sudo[2466]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 19 11:48:13.084735 sudo[2466]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:48:13.100393 sudo[2466]: pam_unix(sudo:session): session closed for user root Mar 19 11:48:13.204138 sshd[2452]: Connection closed by 10.200.16.10 port 54784 Mar 19 11:48:13.205374 sshd-session[2447]: pam_unix(sshd:session): session closed for user core Mar 19 11:48:13.209008 systemd[1]: sshd@4-10.200.8.19:22-10.200.16.10:54784.service: Deactivated successfully. Mar 19 11:48:13.211333 systemd[1]: session-7.scope: Deactivated successfully. Mar 19 11:48:13.212914 systemd-logind[1741]: Session 7 logged out. Waiting for processes to exit. Mar 19 11:48:13.213971 systemd-logind[1741]: Removed session 7. Mar 19 11:48:13.331081 systemd[1]: Started sshd@5-10.200.8.19:22-10.200.16.10:54786.service - OpenSSH per-connection server daemon (10.200.16.10:54786). Mar 19 11:48:13.972886 sshd[2472]: Accepted publickey for core from 10.200.16.10 port 54786 ssh2: RSA SHA256:QNBZI8gqF7+eamKHOXmy3Klr/1eBDa4XeBs57RUcH5M Mar 19 11:48:13.974442 sshd-session[2472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:13.979368 systemd-logind[1741]: New session 8 of user core. Mar 19 11:48:13.990943 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 19 11:48:14.329260 sudo[2476]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 19 11:48:14.329711 sudo[2476]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:48:14.333416 sudo[2476]: pam_unix(sudo:session): session closed for user root Mar 19 11:48:14.338748 sudo[2475]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 19 11:48:14.339200 sudo[2475]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:48:14.352164 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
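
    The sudo entries above ("core : PWD=/home/core ; USER=root ; COMMAND=...") use sudo's compact audit format: invoking user, working directory, target user, and the command run. A small sketch that splits one such line into those fields; only the three keys actually present in this journal are handled.

        # Sketch: unpack sudo's "key=value ; key=value" audit lines, e.g. the
        # setenforce and audit-rules commands above.
        import re

        line = ("Mar 19 11:48:13.084372 sudo[2466]: core : PWD=/home/core ; "
                "USER=root ; COMMAND=/usr/sbin/setenforce 1")

        m = re.search(r"sudo\[\d+\]: (\S+) : (.*)$", line)
        invoking_user, rest = m.group(1), m.group(2)
        fields = dict(part.split("=", 1) for part in rest.split(" ; "))

        print(f"{invoking_user} ran {fields['COMMAND']!r} as {fields['USER']} "
              f"from {fields['PWD']}")
        # -> core ran '/usr/sbin/setenforce 1' as root from /home/core
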
Mar 19 11:48:14.379935 augenrules[2498]: No rules Mar 19 11:48:14.381403 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:48:14.381682 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:48:14.382727 sudo[2475]: pam_unix(sudo:session): session closed for user root Mar 19 11:48:14.488586 sshd[2474]: Connection closed by 10.200.16.10 port 54786 Mar 19 11:48:14.489449 sshd-session[2472]: pam_unix(sshd:session): session closed for user core Mar 19 11:48:14.492549 systemd[1]: sshd@5-10.200.8.19:22-10.200.16.10:54786.service: Deactivated successfully. Mar 19 11:48:14.494804 systemd[1]: session-8.scope: Deactivated successfully. Mar 19 11:48:14.496630 systemd-logind[1741]: Session 8 logged out. Waiting for processes to exit. Mar 19 11:48:14.497719 systemd-logind[1741]: Removed session 8. Mar 19 11:48:14.610379 systemd[1]: Started sshd@6-10.200.8.19:22-10.200.16.10:54788.service - OpenSSH per-connection server daemon (10.200.16.10:54788). Mar 19 11:48:15.259376 sshd[2507]: Accepted publickey for core from 10.200.16.10 port 54788 ssh2: RSA SHA256:QNBZI8gqF7+eamKHOXmy3Klr/1eBDa4XeBs57RUcH5M Mar 19 11:48:15.260935 sshd-session[2507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:15.265317 systemd-logind[1741]: New session 9 of user core. Mar 19 11:48:15.268934 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 19 11:48:15.614481 sudo[2510]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 19 11:48:15.614946 sudo[2510]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:48:16.433653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:48:16.434098 systemd[1]: kubelet.service: Consumed 151ms CPU time, 97M memory peak. Mar 19 11:48:16.442047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:48:16.470853 systemd[1]: Reload requested from client PID 2548 ('systemctl') (unit session-9.scope)... Mar 19 11:48:16.471058 systemd[1]: Reloading... Mar 19 11:48:16.589848 zram_generator::config[2594]: No configuration found. Mar 19 11:48:16.728020 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:48:16.843564 systemd[1]: Reloading finished in 371 ms. Mar 19 11:48:16.891969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:48:16.898051 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:48:16.904303 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:48:16.904880 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:48:16.905136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:48:16.905192 systemd[1]: kubelet.service: Consumed 110ms CPU time, 86.9M memory peak. Mar 19 11:48:16.911161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:48:19.559240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
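
    During the daemon-reload above, systemd flags docker.socket for still pointing its ListenStream at the legacy /var/run/ directory and transparently remaps it to /run/docker.sock; the warning asks for the unit file to be updated. A hedged sketch that scans socket units for such legacy paths and prints the substitution systemd applies; the unit directories are the conventional search locations, assumed rather than taken from this host.

        # Sketch: find ListenStream=/var/run/... entries like the docker.socket
        # one flagged above, and print the /run/... path systemd substitutes.
        import glob
        import re

        UNIT_DIRS = ["/etc/systemd/system", "/usr/lib/systemd/system"]
        pattern = re.compile(r"^ListenStream=(/var/run/\S+)", re.MULTILINE)

        for unit_dir in UNIT_DIRS:
            for path in glob.glob(f"{unit_dir}/*.socket"):
                try:
                    text = open(path).read()
                except OSError:
                    continue
                for legacy in pattern.findall(text):
                    updated = legacy.replace("/var/run/", "/run/", 1)
                    print(f"{path}: {legacy} -> {updated}")
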
Mar 19 11:48:19.566328 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:48:19.613176 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:48:19.614248 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 11:48:19.614248 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:48:19.614248 kubelet[2671]: I0319 11:48:19.613901 2671 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:48:19.906480 kubelet[2671]: I0319 11:48:19.906348 2671 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 19 11:48:19.906480 kubelet[2671]: I0319 11:48:19.906383 2671 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:48:19.906660 kubelet[2671]: I0319 11:48:19.906650 2671 server.go:927] "Client rotation is on, will bootstrap in background" Mar 19 11:48:19.923333 kubelet[2671]: I0319 11:48:19.923288 2671 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:48:19.935752 kubelet[2671]: I0319 11:48:19.935469 2671 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 19 11:48:19.936844 kubelet[2671]: I0319 11:48:19.936791 2671 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:48:19.937036 kubelet[2671]: I0319 11:48:19.936838 2671 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.8.19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 19 11:48:19.937213 kubelet[2671]: I0319 11:48:19.937057 2671 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:48:19.937213 kubelet[2671]: I0319 11:48:19.937071 2671 container_manager_linux.go:301] "Creating device plugin manager" Mar 19 11:48:19.937294 kubelet[2671]: I0319 11:48:19.937215 2671 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:48:19.938302 kubelet[2671]: I0319 11:48:19.938281 2671 kubelet.go:400] "Attempting to sync node with API server" Mar 19 11:48:19.938302 kubelet[2671]: I0319 11:48:19.938302 2671 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:48:19.938426 kubelet[2671]: I0319 11:48:19.938330 2671 kubelet.go:312] "Adding apiserver pod source" Mar 19 11:48:19.938426 kubelet[2671]: I0319 11:48:19.938351 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:48:19.938862 kubelet[2671]: E0319 11:48:19.938832 2671 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:19.939122 kubelet[2671]: E0319 11:48:19.939059 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:19.942684 kubelet[2671]: I0319 11:48:19.942450 2671 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:48:19.944781 kubelet[2671]: I0319 11:48:19.944120 2671 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:48:19.944781 kubelet[2671]: W0319 11:48:19.944183 2671 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 19 11:48:19.945021 kubelet[2671]: W0319 11:48:19.944998 2671 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 19 11:48:19.945089 kubelet[2671]: E0319 11:48:19.945035 2671 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 19 11:48:19.945180 kubelet[2671]: W0319 11:48:19.945143 2671 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 19 11:48:19.945180 kubelet[2671]: E0319 11:48:19.945168 2671 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 19 11:48:19.945274 kubelet[2671]: I0319 11:48:19.945014 2671 server.go:1264] "Started kubelet" Mar 19 11:48:19.947501 kubelet[2671]: I0319 11:48:19.947484 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:48:19.954249 kubelet[2671]: I0319 11:48:19.954213 2671 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:48:19.955386 kubelet[2671]: I0319 11:48:19.955296 2671 server.go:455] "Adding debug handlers to kubelet server" Mar 19 11:48:19.958211 kubelet[2671]: I0319 11:48:19.958156 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:48:19.959338 kubelet[2671]: I0319 11:48:19.958391 2671 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:48:19.961892 kubelet[2671]: I0319 11:48:19.961876 2671 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 19 11:48:19.966364 kubelet[2671]: I0319 11:48:19.966348 2671 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:48:19.966518 kubelet[2671]: I0319 11:48:19.966505 2671 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:48:19.970056 kubelet[2671]: I0319 11:48:19.970025 2671 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:48:19.970056 kubelet[2671]: I0319 11:48:19.970054 2671 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:48:19.970224 kubelet[2671]: I0319 11:48:19.970143 2671 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:48:19.977694 kubelet[2671]: E0319 11:48:19.977594 2671 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:48:19.981182 kubelet[2671]: E0319 11:48:19.981163 2671 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.19\" not found" node="10.200.8.19" Mar 19 11:48:19.999359 kubelet[2671]: I0319 11:48:19.999340 2671 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 11:48:19.999359 kubelet[2671]: I0319 11:48:19.999352 2671 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:48:19.999553 kubelet[2671]: I0319 11:48:19.999371 2671 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:48:20.063889 kubelet[2671]: I0319 11:48:20.063846 2671 kubelet_node_status.go:73] "Attempting to register node" node="10.200.8.19" Mar 19 11:48:20.068200 kubelet[2671]: I0319 11:48:20.068168 2671 kubelet_node_status.go:76] "Successfully registered node" node="10.200.8.19" Mar 19 11:48:20.081937 kubelet[2671]: E0319 11:48:20.081898 2671 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found" Mar 19 11:48:20.106146 sudo[2510]: pam_unix(sudo:session): session closed for user root Mar 19 11:48:20.182481 kubelet[2671]: E0319 11:48:20.182436 2671 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found" Mar 19 11:48:20.211665 sshd[2509]: Connection closed by 10.200.16.10 port 54788 Mar 19 11:48:20.212517 sshd-session[2507]: pam_unix(sshd:session): session closed for user core Mar 19 11:48:20.217296 systemd[1]: sshd@6-10.200.8.19:22-10.200.16.10:54788.service: Deactivated successfully. Mar 19 11:48:20.220028 systemd[1]: session-9.scope: Deactivated successfully. Mar 19 11:48:20.220298 systemd[1]: session-9.scope: Consumed 436ms CPU time, 111M memory peak. Mar 19 11:48:20.222043 systemd-logind[1741]: Session 9 logged out. Waiting for processes to exit. Mar 19 11:48:20.223184 systemd-logind[1741]: Removed session 9. 
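
    The repeated "Error getting the current node from lister ... not found" messages around registration simply mean the kubelet's informer cache does not yet contain a Node object named 10.200.8.19; they stop once the registration visible above propagates. A hedged diagnostic sketch that polls for the Node from outside the kubelet with plain kubectl; the node name comes from the log, the polling loop and the assumption of a working kubeconfig are illustrative.

        # Sketch: wait for the Node object the kubelet is complaining about.
        # Uses `kubectl get node <name>`; credentials are assumed to exist,
        # which this journal does not show.
        import subprocess
        import time

        NODE = "10.200.8.19"   # node name from the log

        for attempt in range(30):
            result = subprocess.run(
                ["kubectl", "get", "node", NODE, "-o", "name"],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                print(f"found after {attempt + 1} attempt(s): {result.stdout.strip()}")
                break
            time.sleep(1)   # retry until the object appears in the API
        else:
            print(f"node {NODE} still not registered")
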
Mar 19 11:48:20.282972 kubelet[2671]: E0319 11:48:20.282907 2671 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found" Mar 19 11:48:20.383524 kubelet[2671]: E0319 11:48:20.383471 2671 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found" Mar 19 11:48:20.484681 kubelet[2671]: E0319 11:48:20.484493 2671 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found" Mar 19 11:48:20.585243 kubelet[2671]: E0319 11:48:20.585180 2671 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found" Mar 19 11:48:20.686034 kubelet[2671]: E0319 11:48:20.685965 2671 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found" Mar 19 11:48:22.860146 kubelet[2671]: E0319 11:48:20.786716 2671 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found" Mar 19 11:48:22.860146 kubelet[2671]: I0319 11:48:20.909049 2671 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 19 11:48:22.860146 kubelet[2671]: W0319 11:48:20.909276 2671 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 19 11:48:22.860146 kubelet[2671]: W0319 11:48:20.909364 2671 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 19 11:48:22.860146 kubelet[2671]: W0319 11:48:20.909402 2671 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 19 11:48:22.860146 kubelet[2671]: I0319 11:48:20.939584 2671 apiserver.go:52] "Watching apiserver" Mar 19 11:48:22.860146 kubelet[2671]: E0319 11:48:20.939600 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:22.860146 kubelet[2671]: E0319 11:48:21.940626 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:22.870666 kubelet[2671]: I0319 11:48:22.870622 2671 policy_none.go:49] "None policy: Start" Mar 19 11:48:22.871713 kubelet[2671]: I0319 11:48:22.871686 2671 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:48:22.871713 kubelet[2671]: I0319 11:48:22.871718 2671 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:48:22.887967 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 19 11:48:22.893934 kubelet[2671]: I0319 11:48:22.893880 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:48:22.897792 kubelet[2671]: I0319 11:48:22.897384 2671 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 19 11:48:22.897792 kubelet[2671]: I0319 11:48:22.897421 2671 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:48:22.897792 kubelet[2671]: I0319 11:48:22.897443 2671 kubelet.go:2337] "Starting kubelet main sync loop" Mar 19 11:48:22.897792 kubelet[2671]: E0319 11:48:22.897493 2671 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:48:22.905746 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 19 11:48:22.915283 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 11:48:22.916713 kubelet[2671]: I0319 11:48:22.916691 2671 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:48:22.917206 kubelet[2671]: I0319 11:48:22.917122 2671 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:48:22.918809 kubelet[2671]: I0319 11:48:22.918509 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:48:22.918974 kubelet[2671]: I0319 11:48:22.918894 2671 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 19 11:48:22.919228 containerd[1759]: time="2025-03-19T11:48:22.919188362Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 19 11:48:22.919927 kubelet[2671]: I0319 11:48:22.919399 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 19 11:48:22.941702 kubelet[2671]: E0319 11:48:22.941645 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:22.998411 kubelet[2671]: I0319 11:48:22.998310 2671 topology_manager.go:215] "Topology Admit Handler" podUID="ae347984-d155-4fe9-a900-c91df839992b" podNamespace="calico-system" podName="calico-node-rgfrj" Mar 19 11:48:22.998897 kubelet[2671]: I0319 11:48:22.998538 2671 topology_manager.go:215] "Topology Admit Handler" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" podNamespace="calico-system" podName="csi-node-driver-z4bfg" Mar 19 11:48:22.998897 kubelet[2671]: I0319 11:48:22.998654 2671 topology_manager.go:215] "Topology Admit Handler" podUID="67cd557f-0039-4fbf-9094-22bac64f4066" podNamespace="kube-system" podName="kube-proxy-86pkz" Mar 19 11:48:22.999344 kubelet[2671]: E0319 11:48:22.999127 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:23.007609 systemd[1]: Created slice kubepods-besteffort-podae347984_d155_4fe9_a900_c91df839992b.slice - libcontainer container kubepods-besteffort-podae347984_d155_4fe9_a900_c91df839992b.slice. Mar 19 11:48:23.029495 systemd[1]: Created slice kubepods-besteffort-pod67cd557f_0039_4fbf_9094_22bac64f4066.slice - libcontainer container kubepods-besteffort-pod67cd557f_0039_4fbf_9094_22bac64f4066.slice. 
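
    The "cni plugin not initialized" / "No cni config template is specified" messages above persist until a network plugin (Calico here, via the calico-node pod just admitted) drops a config file for the runtime to pick up; meanwhile the kubelet has pushed pod CIDR 192.168.1.0/24 to containerd. A minimal sketch, assuming the conventional /etc/cni/net.d location (the log never prints the path), that reports whether a CNI config has appeared and how many pod addresses the advertised CIDR provides.

        # Sketch: check the conventional CNI config directory (an assumption)
        # and size the pod CIDR the kubelet advertised above.
        import ipaddress
        import os

        CNI_CONF_DIR = "/etc/cni/net.d"                      # standard location, assumed
        POD_CIDR = ipaddress.ip_network("192.168.1.0/24")    # from the log above

        try:
            configs = sorted(os.listdir(CNI_CONF_DIR))
        except FileNotFoundError:
            configs = []

        if configs:
            print("CNI config present:", ", ".join(configs))
        else:
            # Matches the state in the log: NetworkReady=false until calico-node
            # writes its config.
            print(f"no CNI config in {CNI_CONF_DIR}; pod networking stays not-ready")

        print(f"pod CIDR {POD_CIDR} -> {POD_CIDR.num_addresses} addresses on this node")
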
Mar 19 11:48:23.067740 kubelet[2671]: I0319 11:48:23.067694 2671 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 11:48:23.086797 kubelet[2671]: I0319 11:48:23.086728 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae347984-d155-4fe9-a900-c91df839992b-tigera-ca-bundle\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.086997 kubelet[2671]: I0319 11:48:23.086816 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6bdfd40-fbd5-495c-8be7-71a16b2a295b-registration-dir\") pod \"csi-node-driver-z4bfg\" (UID: \"c6bdfd40-fbd5-495c-8be7-71a16b2a295b\") " pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:23.086997 kubelet[2671]: I0319 11:48:23.086850 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae347984-d155-4fe9-a900-c91df839992b-xtables-lock\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.086997 kubelet[2671]: I0319 11:48:23.086875 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ae347984-d155-4fe9-a900-c91df839992b-policysync\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.086997 kubelet[2671]: I0319 11:48:23.086900 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae347984-d155-4fe9-a900-c91df839992b-var-lib-calico\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.086997 kubelet[2671]: I0319 11:48:23.086926 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ae347984-d155-4fe9-a900-c91df839992b-cni-log-dir\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.087261 kubelet[2671]: I0319 11:48:23.086954 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ae347984-d155-4fe9-a900-c91df839992b-flexvol-driver-host\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.087261 kubelet[2671]: I0319 11:48:23.086982 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8dpp\" (UniqueName: \"kubernetes.io/projected/ae347984-d155-4fe9-a900-c91df839992b-kube-api-access-l8dpp\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.087261 kubelet[2671]: I0319 11:48:23.087009 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67cd557f-0039-4fbf-9094-22bac64f4066-lib-modules\") pod 
\"kube-proxy-86pkz\" (UID: \"67cd557f-0039-4fbf-9094-22bac64f4066\") " pod="kube-system/kube-proxy-86pkz" Mar 19 11:48:23.087261 kubelet[2671]: I0319 11:48:23.087051 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae347984-d155-4fe9-a900-c91df839992b-lib-modules\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.087261 kubelet[2671]: I0319 11:48:23.087082 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfs6s\" (UniqueName: \"kubernetes.io/projected/67cd557f-0039-4fbf-9094-22bac64f4066-kube-api-access-qfs6s\") pod \"kube-proxy-86pkz\" (UID: \"67cd557f-0039-4fbf-9094-22bac64f4066\") " pod="kube-system/kube-proxy-86pkz" Mar 19 11:48:23.087502 kubelet[2671]: I0319 11:48:23.087111 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ae347984-d155-4fe9-a900-c91df839992b-var-run-calico\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.087502 kubelet[2671]: I0319 11:48:23.087139 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6bdfd40-fbd5-495c-8be7-71a16b2a295b-kubelet-dir\") pod \"csi-node-driver-z4bfg\" (UID: \"c6bdfd40-fbd5-495c-8be7-71a16b2a295b\") " pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:23.087502 kubelet[2671]: I0319 11:48:23.087168 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6bdfd40-fbd5-495c-8be7-71a16b2a295b-socket-dir\") pod \"csi-node-driver-z4bfg\" (UID: \"c6bdfd40-fbd5-495c-8be7-71a16b2a295b\") " pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:23.087502 kubelet[2671]: I0319 11:48:23.087196 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67cd557f-0039-4fbf-9094-22bac64f4066-kube-proxy\") pod \"kube-proxy-86pkz\" (UID: \"67cd557f-0039-4fbf-9094-22bac64f4066\") " pod="kube-system/kube-proxy-86pkz" Mar 19 11:48:23.087502 kubelet[2671]: I0319 11:48:23.087223 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67cd557f-0039-4fbf-9094-22bac64f4066-xtables-lock\") pod \"kube-proxy-86pkz\" (UID: \"67cd557f-0039-4fbf-9094-22bac64f4066\") " pod="kube-system/kube-proxy-86pkz" Mar 19 11:48:23.087722 kubelet[2671]: I0319 11:48:23.087250 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ae347984-d155-4fe9-a900-c91df839992b-node-certs\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.087722 kubelet[2671]: I0319 11:48:23.087281 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ae347984-d155-4fe9-a900-c91df839992b-cni-bin-dir\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " 
pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.087722 kubelet[2671]: I0319 11:48:23.087309 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ae347984-d155-4fe9-a900-c91df839992b-cni-net-dir\") pod \"calico-node-rgfrj\" (UID: \"ae347984-d155-4fe9-a900-c91df839992b\") " pod="calico-system/calico-node-rgfrj" Mar 19 11:48:23.087722 kubelet[2671]: I0319 11:48:23.087358 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c6bdfd40-fbd5-495c-8be7-71a16b2a295b-varrun\") pod \"csi-node-driver-z4bfg\" (UID: \"c6bdfd40-fbd5-495c-8be7-71a16b2a295b\") " pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:23.087722 kubelet[2671]: I0319 11:48:23.087395 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx6rp\" (UniqueName: \"kubernetes.io/projected/c6bdfd40-fbd5-495c-8be7-71a16b2a295b-kube-api-access-zx6rp\") pod \"csi-node-driver-z4bfg\" (UID: \"c6bdfd40-fbd5-495c-8be7-71a16b2a295b\") " pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:23.191157 kubelet[2671]: E0319 11:48:23.190686 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.191157 kubelet[2671]: W0319 11:48:23.190714 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.191157 kubelet[2671]: E0319 11:48:23.190746 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:23.191880 kubelet[2671]: E0319 11:48:23.191731 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.191880 kubelet[2671]: W0319 11:48:23.191755 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.191880 kubelet[2671]: E0319 11:48:23.191810 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:23.192438 kubelet[2671]: E0319 11:48:23.192331 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.192438 kubelet[2671]: W0319 11:48:23.192348 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.192789 kubelet[2671]: E0319 11:48:23.192604 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:23.193008 kubelet[2671]: E0319 11:48:23.192991 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.193202 kubelet[2671]: W0319 11:48:23.193106 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.193202 kubelet[2671]: E0319 11:48:23.193151 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:23.193706 kubelet[2671]: E0319 11:48:23.193602 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.193706 kubelet[2671]: W0319 11:48:23.193619 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.193706 kubelet[2671]: E0319 11:48:23.193650 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:23.194250 kubelet[2671]: E0319 11:48:23.194155 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.194250 kubelet[2671]: W0319 11:48:23.194169 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.194485 kubelet[2671]: E0319 11:48:23.194379 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:23.194680 kubelet[2671]: E0319 11:48:23.194667 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.194873 kubelet[2671]: W0319 11:48:23.194758 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.194873 kubelet[2671]: E0319 11:48:23.194818 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:23.195224 kubelet[2671]: E0319 11:48:23.195173 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.195224 kubelet[2671]: W0319 11:48:23.195186 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.195224 kubelet[2671]: E0319 11:48:23.195199 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:23.201738 kubelet[2671]: E0319 11:48:23.201286 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.201738 kubelet[2671]: W0319 11:48:23.201301 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.201738 kubelet[2671]: E0319 11:48:23.201317 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:23.202917 kubelet[2671]: E0319 11:48:23.202896 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.202917 kubelet[2671]: W0319 11:48:23.202915 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.203034 kubelet[2671]: E0319 11:48:23.202944 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:23.204918 kubelet[2671]: E0319 11:48:23.204898 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.204994 kubelet[2671]: W0319 11:48:23.204918 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.204994 kubelet[2671]: E0319 11:48:23.204949 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:23.205965 kubelet[2671]: E0319 11:48:23.205950 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:23.206052 kubelet[2671]: W0319 11:48:23.206039 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:23.206151 kubelet[2671]: E0319 11:48:23.206123 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:23.328977 containerd[1759]: time="2025-03-19T11:48:23.328919985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rgfrj,Uid:ae347984-d155-4fe9-a900-c91df839992b,Namespace:calico-system,Attempt:0,}" Mar 19 11:48:23.554348 containerd[1759]: time="2025-03-19T11:48:23.554183735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-86pkz,Uid:67cd557f-0039-4fbf-9094-22bac64f4066,Namespace:kube-system,Attempt:0,}" Mar 19 11:48:23.942746 kubelet[2671]: E0319 11:48:23.942686 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:24.898904 kubelet[2671]: E0319 11:48:24.898419 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:24.943816 kubelet[2671]: E0319 11:48:24.943735 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:25.422186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699169468.mount: Deactivated successfully. Mar 19 11:48:25.658029 containerd[1759]: time="2025-03-19T11:48:25.657953133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:48:25.750953 containerd[1759]: time="2025-03-19T11:48:25.750398744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 19 11:48:25.753040 containerd[1759]: time="2025-03-19T11:48:25.752994983Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:48:25.817620 containerd[1759]: time="2025-03-19T11:48:25.817558668Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:48:25.819863 containerd[1759]: time="2025-03-19T11:48:25.819798803Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:48:25.862937 containerd[1759]: time="2025-03-19T11:48:25.862820859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:48:25.864811 containerd[1759]: time="2025-03-19T11:48:25.864064078Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.53495009s" Mar 19 11:48:25.909249 containerd[1759]: time="2025-03-19T11:48:25.909185367Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo 
tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.354873629s" Mar 19 11:48:25.944249 kubelet[2671]: E0319 11:48:25.944177 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:26.568613 containerd[1759]: time="2025-03-19T11:48:26.567032905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:48:26.568613 containerd[1759]: time="2025-03-19T11:48:26.568562928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:48:26.568613 containerd[1759]: time="2025-03-19T11:48:26.568579729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:48:26.570404 containerd[1759]: time="2025-03-19T11:48:26.569530543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:48:26.570404 containerd[1759]: time="2025-03-19T11:48:26.570176753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:48:26.570404 containerd[1759]: time="2025-03-19T11:48:26.570194153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:48:26.570404 containerd[1759]: time="2025-03-19T11:48:26.570275855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:48:26.570404 containerd[1759]: time="2025-03-19T11:48:26.570357056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:48:26.781952 systemd[1]: Started cri-containerd-b3f271d2ade65d58c0ba4c06465f09f598ba4313c97af0fee05893216b3a0e9d.scope - libcontainer container b3f271d2ade65d58c0ba4c06465f09f598ba4313c97af0fee05893216b3a0e9d. Mar 19 11:48:26.783559 systemd[1]: Started cri-containerd-e70e49deb6581029e72a229c4f004a59de1db9ab243d3ee7a5e5255eef8668a9.scope - libcontainer container e70e49deb6581029e72a229c4f004a59de1db9ab243d3ee7a5e5255eef8668a9. 
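
    Containerd reports pulling registry.k8s.io/pause:3.8 (311286 bytes) in roughly 2.4-2.5 s, once per sandbox request above. A small sketch that extracts the size and duration from such a "Pulled image ... in <t>" line and turns them into an effective transfer rate; the sample line is abbreviated from this journal.

        # Sketch: extract size and duration from a containerd "Pulled image"
        # line (like the pause:3.8 pulls above) and compute an effective rate.
        import re

        line = ('Pulled image "registry.k8s.io/pause:3.8" with image id "sha256:4873...", '
                'repo tag "registry.k8s.io/pause:3.8", repo digest "registry.k8s.io/pause@sha256:9001...", '
                'size "311286" in 2.53495009s')

        m = re.search(r'size "(\d+)" in ([\d.]+)s', line)
        size_bytes, seconds = int(m.group(1)), float(m.group(2))

        print(f"{size_bytes} bytes in {seconds:.2f}s "
              f"~ {size_bytes / seconds / 1024:.1f} KiB/s effective")
        # -> 311286 bytes in 2.53s ~ 119.9 KiB/s effective
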
Mar 19 11:48:26.824520 containerd[1759]: time="2025-03-19T11:48:26.822057297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-86pkz,Uid:67cd557f-0039-4fbf-9094-22bac64f4066,Namespace:kube-system,Attempt:0,} returns sandbox id \"e70e49deb6581029e72a229c4f004a59de1db9ab243d3ee7a5e5255eef8668a9\"" Mar 19 11:48:26.829965 containerd[1759]: time="2025-03-19T11:48:26.829924417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rgfrj,Uid:ae347984-d155-4fe9-a900-c91df839992b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3f271d2ade65d58c0ba4c06465f09f598ba4313c97af0fee05893216b3a0e9d\"" Mar 19 11:48:26.830383 containerd[1759]: time="2025-03-19T11:48:26.829964517Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 19 11:48:26.898287 kubelet[2671]: E0319 11:48:26.898231 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:26.945265 kubelet[2671]: E0319 11:48:26.945193 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:27.946127 kubelet[2671]: E0319 11:48:27.946066 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:28.414716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713430279.mount: Deactivated successfully. Mar 19 11:48:28.900296 kubelet[2671]: E0319 11:48:28.898921 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:28.946877 kubelet[2671]: E0319 11:48:28.946831 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:29.160435 containerd[1759]: time="2025-03-19T11:48:29.160053774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:48:29.162304 containerd[1759]: time="2025-03-19T11:48:29.162232207Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185380" Mar 19 11:48:29.165922 containerd[1759]: time="2025-03-19T11:48:29.165859762Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:48:29.170053 containerd[1759]: time="2025-03-19T11:48:29.169999026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:48:29.171081 containerd[1759]: time="2025-03-19T11:48:29.170585134Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 2.340307012s" Mar 19 11:48:29.171081 containerd[1759]: time="2025-03-19T11:48:29.170627735Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 19 11:48:29.172756 containerd[1759]: time="2025-03-19T11:48:29.172722167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 19 11:48:29.173870 containerd[1759]: time="2025-03-19T11:48:29.173837084Z" level=info msg="CreateContainer within sandbox \"e70e49deb6581029e72a229c4f004a59de1db9ab243d3ee7a5e5255eef8668a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 19 11:48:29.220722 containerd[1759]: time="2025-03-19T11:48:29.220669599Z" level=info msg="CreateContainer within sandbox \"e70e49deb6581029e72a229c4f004a59de1db9ab243d3ee7a5e5255eef8668a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"16a646b03ce41a2462a3317b671a2d857dbdc8aa01c1f723b9bc1de09b552c09\"" Mar 19 11:48:29.221791 containerd[1759]: time="2025-03-19T11:48:29.221726515Z" level=info msg="StartContainer for \"16a646b03ce41a2462a3317b671a2d857dbdc8aa01c1f723b9bc1de09b552c09\"" Mar 19 11:48:29.258973 systemd[1]: Started cri-containerd-16a646b03ce41a2462a3317b671a2d857dbdc8aa01c1f723b9bc1de09b552c09.scope - libcontainer container 16a646b03ce41a2462a3317b671a2d857dbdc8aa01c1f723b9bc1de09b552c09. Mar 19 11:48:29.298417 containerd[1759]: time="2025-03-19T11:48:29.298011379Z" level=info msg="StartContainer for \"16a646b03ce41a2462a3317b671a2d857dbdc8aa01c1f723b9bc1de09b552c09\" returns successfully" Mar 19 11:48:29.914735 kubelet[2671]: E0319 11:48:29.914696 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.915142 kubelet[2671]: W0319 11:48:29.914963 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.915142 kubelet[2671]: E0319 11:48:29.914996 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.915492 kubelet[2671]: E0319 11:48:29.915286 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.915492 kubelet[2671]: W0319 11:48:29.915295 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.915492 kubelet[2671]: E0319 11:48:29.915306 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:29.915629 kubelet[2671]: E0319 11:48:29.915612 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.915629 kubelet[2671]: W0319 11:48:29.915625 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.915730 kubelet[2671]: E0319 11:48:29.915638 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.915915 kubelet[2671]: E0319 11:48:29.915894 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.915915 kubelet[2671]: W0319 11:48:29.915911 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.916099 kubelet[2671]: E0319 11:48:29.915925 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.916205 kubelet[2671]: E0319 11:48:29.916161 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.916205 kubelet[2671]: W0319 11:48:29.916172 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.916205 kubelet[2671]: E0319 11:48:29.916186 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.916429 kubelet[2671]: E0319 11:48:29.916369 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.916429 kubelet[2671]: W0319 11:48:29.916379 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.916429 kubelet[2671]: E0319 11:48:29.916390 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.916610 kubelet[2671]: E0319 11:48:29.916574 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.916610 kubelet[2671]: W0319 11:48:29.916585 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.916610 kubelet[2671]: E0319 11:48:29.916597 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:29.917548 kubelet[2671]: E0319 11:48:29.916796 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.917548 kubelet[2671]: W0319 11:48:29.916805 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.917548 kubelet[2671]: E0319 11:48:29.916814 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.917548 kubelet[2671]: E0319 11:48:29.917006 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.917548 kubelet[2671]: W0319 11:48:29.917016 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.917548 kubelet[2671]: E0319 11:48:29.917027 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.917548 kubelet[2671]: E0319 11:48:29.917186 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.917548 kubelet[2671]: W0319 11:48:29.917195 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.917548 kubelet[2671]: E0319 11:48:29.917204 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.917548 kubelet[2671]: E0319 11:48:29.917365 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.917847 kubelet[2671]: W0319 11:48:29.917373 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.917847 kubelet[2671]: E0319 11:48:29.917380 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.917847 kubelet[2671]: E0319 11:48:29.917833 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.917847 kubelet[2671]: W0319 11:48:29.917845 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.918075 kubelet[2671]: E0319 11:48:29.917859 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:29.918143 kubelet[2671]: E0319 11:48:29.918075 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.918143 kubelet[2671]: W0319 11:48:29.918085 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.918143 kubelet[2671]: E0319 11:48:29.918099 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.918330 kubelet[2671]: E0319 11:48:29.918292 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.918330 kubelet[2671]: W0319 11:48:29.918303 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.918330 kubelet[2671]: E0319 11:48:29.918314 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.918511 kubelet[2671]: E0319 11:48:29.918499 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.918511 kubelet[2671]: W0319 11:48:29.918508 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.918615 kubelet[2671]: E0319 11:48:29.918519 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.918871 kubelet[2671]: E0319 11:48:29.918855 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.918871 kubelet[2671]: W0319 11:48:29.918885 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.918871 kubelet[2671]: E0319 11:48:29.918899 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.919178 kubelet[2671]: E0319 11:48:29.919161 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.919178 kubelet[2671]: W0319 11:48:29.919175 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.919277 kubelet[2671]: E0319 11:48:29.919188 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:29.919435 kubelet[2671]: E0319 11:48:29.919419 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.919435 kubelet[2671]: W0319 11:48:29.919431 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.919555 kubelet[2671]: E0319 11:48:29.919445 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.919646 kubelet[2671]: E0319 11:48:29.919631 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.919646 kubelet[2671]: W0319 11:48:29.919644 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.919808 kubelet[2671]: E0319 11:48:29.919656 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.919894 kubelet[2671]: E0319 11:48:29.919870 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.919944 kubelet[2671]: W0319 11:48:29.919892 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.919944 kubelet[2671]: E0319 11:48:29.919907 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.924063 kubelet[2671]: I0319 11:48:29.924014 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-86pkz" podStartSLOduration=7.5800499630000004 podStartE2EDuration="9.924001031s" podCreationTimestamp="2025-03-19 11:48:20 +0000 UTC" firstStartedPulling="2025-03-19 11:48:26.827927786 +0000 UTC m=+7.257607085" lastFinishedPulling="2025-03-19 11:48:29.171878854 +0000 UTC m=+9.601558153" observedRunningTime="2025-03-19 11:48:29.92389163 +0000 UTC m=+10.353570829" watchObservedRunningTime="2025-03-19 11:48:29.924001031 +0000 UTC m=+10.353680230" Mar 19 11:48:29.938348 kubelet[2671]: E0319 11:48:29.938326 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.938348 kubelet[2671]: W0319 11:48:29.938341 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.938522 kubelet[2671]: E0319 11:48:29.938354 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:29.938635 kubelet[2671]: E0319 11:48:29.938621 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.938693 kubelet[2671]: W0319 11:48:29.938635 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.938693 kubelet[2671]: E0319 11:48:29.938663 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.938939 kubelet[2671]: E0319 11:48:29.938921 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.938939 kubelet[2671]: W0319 11:48:29.938934 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.939058 kubelet[2671]: E0319 11:48:29.938961 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.939189 kubelet[2671]: E0319 11:48:29.939173 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.939189 kubelet[2671]: W0319 11:48:29.939185 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.939296 kubelet[2671]: E0319 11:48:29.939213 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.939433 kubelet[2671]: E0319 11:48:29.939417 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.939433 kubelet[2671]: W0319 11:48:29.939430 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.939541 kubelet[2671]: E0319 11:48:29.939455 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.939706 kubelet[2671]: E0319 11:48:29.939689 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.939706 kubelet[2671]: W0319 11:48:29.939702 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.939826 kubelet[2671]: E0319 11:48:29.939785 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:29.940048 kubelet[2671]: E0319 11:48:29.940024 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.940048 kubelet[2671]: W0319 11:48:29.940040 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.940228 kubelet[2671]: E0319 11:48:29.940059 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.940285 kubelet[2671]: E0319 11:48:29.940265 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.940285 kubelet[2671]: W0319 11:48:29.940276 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.940424 kubelet[2671]: E0319 11:48:29.940294 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.940496 kubelet[2671]: E0319 11:48:29.940478 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.940496 kubelet[2671]: W0319 11:48:29.940494 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.940606 kubelet[2671]: E0319 11:48:29.940512 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.940745 kubelet[2671]: E0319 11:48:29.940730 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.940745 kubelet[2671]: W0319 11:48:29.940743 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.940850 kubelet[2671]: E0319 11:48:29.940759 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.941116 kubelet[2671]: E0319 11:48:29.941099 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.941116 kubelet[2671]: W0319 11:48:29.941114 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.941231 kubelet[2671]: E0319 11:48:29.941140 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:29.941372 kubelet[2671]: E0319 11:48:29.941355 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:29.941372 kubelet[2671]: W0319 11:48:29.941368 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:29.941453 kubelet[2671]: E0319 11:48:29.941383 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:29.947576 kubelet[2671]: E0319 11:48:29.947557 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:30.546429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288092285.mount: Deactivated successfully. Mar 19 11:48:30.899306 kubelet[2671]: E0319 11:48:30.898617 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:30.925998 kubelet[2671]: E0319 11:48:30.925962 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.925998 kubelet[2671]: W0319 11:48:30.925986 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.925998 kubelet[2671]: E0319 11:48:30.926010 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.926283 kubelet[2671]: E0319 11:48:30.926251 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.926283 kubelet[2671]: W0319 11:48:30.926262 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.926283 kubelet[2671]: E0319 11:48:30.926274 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.926486 kubelet[2671]: E0319 11:48:30.926468 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.926486 kubelet[2671]: W0319 11:48:30.926480 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.926668 kubelet[2671]: E0319 11:48:30.926492 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:30.926737 kubelet[2671]: E0319 11:48:30.926692 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.926737 kubelet[2671]: W0319 11:48:30.926702 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.926737 kubelet[2671]: E0319 11:48:30.926715 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.926963 kubelet[2671]: E0319 11:48:30.926950 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.926963 kubelet[2671]: W0319 11:48:30.926960 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.927071 kubelet[2671]: E0319 11:48:30.926973 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.927191 kubelet[2671]: E0319 11:48:30.927161 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.927191 kubelet[2671]: W0319 11:48:30.927173 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.927191 kubelet[2671]: E0319 11:48:30.927185 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.927383 kubelet[2671]: E0319 11:48:30.927369 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.927383 kubelet[2671]: W0319 11:48:30.927378 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.927495 kubelet[2671]: E0319 11:48:30.927390 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.927603 kubelet[2671]: E0319 11:48:30.927574 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.927603 kubelet[2671]: W0319 11:48:30.927586 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.927603 kubelet[2671]: E0319 11:48:30.927597 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:30.927817 kubelet[2671]: E0319 11:48:30.927800 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.927817 kubelet[2671]: W0319 11:48:30.927813 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.927946 kubelet[2671]: E0319 11:48:30.927825 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.928026 kubelet[2671]: E0319 11:48:30.928011 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.928026 kubelet[2671]: W0319 11:48:30.928024 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.928168 kubelet[2671]: E0319 11:48:30.928036 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.928248 kubelet[2671]: E0319 11:48:30.928231 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.928290 kubelet[2671]: W0319 11:48:30.928248 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.928290 kubelet[2671]: E0319 11:48:30.928261 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.928469 kubelet[2671]: E0319 11:48:30.928455 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.928469 kubelet[2671]: W0319 11:48:30.928466 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.928576 kubelet[2671]: E0319 11:48:30.928478 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.928687 kubelet[2671]: E0319 11:48:30.928673 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.928687 kubelet[2671]: W0319 11:48:30.928684 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.928827 kubelet[2671]: E0319 11:48:30.928697 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:30.928908 kubelet[2671]: E0319 11:48:30.928892 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.928962 kubelet[2671]: W0319 11:48:30.928907 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.928962 kubelet[2671]: E0319 11:48:30.928920 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.929099 kubelet[2671]: E0319 11:48:30.929086 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.929099 kubelet[2671]: W0319 11:48:30.929097 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.929193 kubelet[2671]: E0319 11:48:30.929108 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.929291 kubelet[2671]: E0319 11:48:30.929277 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.929291 kubelet[2671]: W0319 11:48:30.929288 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.929401 kubelet[2671]: E0319 11:48:30.929299 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.929492 kubelet[2671]: E0319 11:48:30.929478 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.929492 kubelet[2671]: W0319 11:48:30.929490 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.929613 kubelet[2671]: E0319 11:48:30.929502 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.929690 kubelet[2671]: E0319 11:48:30.929673 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.929690 kubelet[2671]: W0319 11:48:30.929685 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.929837 kubelet[2671]: E0319 11:48:30.929697 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:30.929935 kubelet[2671]: E0319 11:48:30.929920 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.929935 kubelet[2671]: W0319 11:48:30.929932 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.930030 kubelet[2671]: E0319 11:48:30.929945 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.930131 kubelet[2671]: E0319 11:48:30.930118 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.930131 kubelet[2671]: W0319 11:48:30.930129 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.930211 kubelet[2671]: E0319 11:48:30.930141 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.945632 kubelet[2671]: E0319 11:48:30.945604 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.945632 kubelet[2671]: W0319 11:48:30.945625 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.945785 kubelet[2671]: E0319 11:48:30.945640 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.945953 kubelet[2671]: E0319 11:48:30.945935 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.945953 kubelet[2671]: W0319 11:48:30.945948 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.946070 kubelet[2671]: E0319 11:48:30.945968 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.946238 kubelet[2671]: E0319 11:48:30.946220 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.946238 kubelet[2671]: W0319 11:48:30.946234 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.946347 kubelet[2671]: E0319 11:48:30.946263 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:30.946505 kubelet[2671]: E0319 11:48:30.946488 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.946505 kubelet[2671]: W0319 11:48:30.946501 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.946626 kubelet[2671]: E0319 11:48:30.946518 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.946737 kubelet[2671]: E0319 11:48:30.946723 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.946737 kubelet[2671]: W0319 11:48:30.946734 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.946850 kubelet[2671]: E0319 11:48:30.946756 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.947018 kubelet[2671]: E0319 11:48:30.947000 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.947018 kubelet[2671]: W0319 11:48:30.947013 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.947198 kubelet[2671]: E0319 11:48:30.947031 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.947285 kubelet[2671]: E0319 11:48:30.947268 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.947285 kubelet[2671]: W0319 11:48:30.947281 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.947384 kubelet[2671]: E0319 11:48:30.947307 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.947523 kubelet[2671]: E0319 11:48:30.947507 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.947523 kubelet[2671]: W0319 11:48:30.947521 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.948219 kubelet[2671]: E0319 11:48:30.947539 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:30.948219 kubelet[2671]: E0319 11:48:30.947677 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:30.948219 kubelet[2671]: E0319 11:48:30.947869 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.948219 kubelet[2671]: W0319 11:48:30.947881 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.948219 kubelet[2671]: E0319 11:48:30.947903 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.948431 kubelet[2671]: E0319 11:48:30.948222 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.948431 kubelet[2671]: W0319 11:48:30.948232 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.948431 kubelet[2671]: E0319 11:48:30.948260 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.948641 kubelet[2671]: E0319 11:48:30.948626 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.948641 kubelet[2671]: W0319 11:48:30.948639 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.948742 kubelet[2671]: E0319 11:48:30.948665 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:48:30.948914 kubelet[2671]: E0319 11:48:30.948897 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:48:30.948914 kubelet[2671]: W0319 11:48:30.948910 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:48:30.948998 kubelet[2671]: E0319 11:48:30.948924 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:48:31.053378 containerd[1759]: time="2025-03-19T11:48:31.053301264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:48:31.056532 containerd[1759]: time="2025-03-19T11:48:31.056446312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6857253" Mar 19 11:48:31.101815 containerd[1759]: time="2025-03-19T11:48:31.101690403Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:48:31.152513 containerd[1759]: time="2025-03-19T11:48:31.151986470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:48:31.155300 containerd[1759]: time="2025-03-19T11:48:31.153137588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.98037562s" Mar 19 11:48:31.155300 containerd[1759]: time="2025-03-19T11:48:31.153188688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 19 11:48:31.156525 containerd[1759]: time="2025-03-19T11:48:31.156490239Z" level=info msg="CreateContainer within sandbox \"b3f271d2ade65d58c0ba4c06465f09f598ba4313c97af0fee05893216b3a0e9d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 19 11:48:31.814077 containerd[1759]: time="2025-03-19T11:48:31.814012572Z" level=info msg="CreateContainer within sandbox \"b3f271d2ade65d58c0ba4c06465f09f598ba4313c97af0fee05893216b3a0e9d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3694664635b729f3d4f1ac873968ec3f18a3e9440a0bdd1e94009b87d4d58717\"" Mar 19 11:48:31.814945 containerd[1759]: time="2025-03-19T11:48:31.814904286Z" level=info msg="StartContainer for \"3694664635b729f3d4f1ac873968ec3f18a3e9440a0bdd1e94009b87d4d58717\"" Mar 19 11:48:31.856133 systemd[1]: run-containerd-runc-k8s.io-3694664635b729f3d4f1ac873968ec3f18a3e9440a0bdd1e94009b87d4d58717-runc.pCJZr6.mount: Deactivated successfully. Mar 19 11:48:31.864959 systemd[1]: Started cri-containerd-3694664635b729f3d4f1ac873968ec3f18a3e9440a0bdd1e94009b87d4d58717.scope - libcontainer container 3694664635b729f3d4f1ac873968ec3f18a3e9440a0bdd1e94009b87d4d58717. Mar 19 11:48:31.904860 containerd[1759]: time="2025-03-19T11:48:31.904803258Z" level=info msg="StartContainer for \"3694664635b729f3d4f1ac873968ec3f18a3e9440a0bdd1e94009b87d4d58717\" returns successfully" Mar 19 11:48:31.911805 systemd[1]: cri-containerd-3694664635b729f3d4f1ac873968ec3f18a3e9440a0bdd1e94009b87d4d58717.scope: Deactivated successfully. 
Mar 19 11:48:31.948234 kubelet[2671]: E0319 11:48:31.948195 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:32.661629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3694664635b729f3d4f1ac873968ec3f18a3e9440a0bdd1e94009b87d4d58717-rootfs.mount: Deactivated successfully. Mar 19 11:48:32.899299 kubelet[2671]: E0319 11:48:32.898722 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:32.948334 kubelet[2671]: E0319 11:48:32.948293 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:33.948953 kubelet[2671]: E0319 11:48:33.948902 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:34.899212 kubelet[2671]: E0319 11:48:34.897954 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:34.950040 kubelet[2671]: E0319 11:48:34.949978 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:35.950941 kubelet[2671]: E0319 11:48:35.950882 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:36.899213 kubelet[2671]: E0319 11:48:36.898793 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:36.951568 kubelet[2671]: E0319 11:48:36.951489 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:37.952707 kubelet[2671]: E0319 11:48:37.952658 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:38.111986 containerd[1759]: time="2025-03-19T11:48:38.111908478Z" level=info msg="shim disconnected" id=3694664635b729f3d4f1ac873968ec3f18a3e9440a0bdd1e94009b87d4d58717 namespace=k8s.io Mar 19 11:48:38.111986 containerd[1759]: time="2025-03-19T11:48:38.111980579Z" level=warning msg="cleaning up after shim disconnected" id=3694664635b729f3d4f1ac873968ec3f18a3e9440a0bdd1e94009b87d4d58717 namespace=k8s.io Mar 19 11:48:38.111986 containerd[1759]: time="2025-03-19T11:48:38.111993979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:48:38.898851 kubelet[2671]: E0319 11:48:38.898274 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 
11:48:38.936391 containerd[1759]: time="2025-03-19T11:48:38.936313567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 19 11:48:38.952925 kubelet[2671]: E0319 11:48:38.952879 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:39.939367 kubelet[2671]: E0319 11:48:39.939328 2671 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:39.953085 kubelet[2671]: E0319 11:48:39.952988 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:40.900482 kubelet[2671]: E0319 11:48:40.899916 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:40.953176 kubelet[2671]: E0319 11:48:40.953114 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:41.953500 kubelet[2671]: E0319 11:48:41.953431 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:42.899280 kubelet[2671]: E0319 11:48:42.898672 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:42.954543 kubelet[2671]: E0319 11:48:42.954487 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:43.955616 kubelet[2671]: E0319 11:48:43.955558 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:44.899229 kubelet[2671]: E0319 11:48:44.899175 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:44.957207 kubelet[2671]: E0319 11:48:44.956963 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:45.562568 containerd[1759]: time="2025-03-19T11:48:45.562503739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:48:45.564933 containerd[1759]: time="2025-03-19T11:48:45.564783979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 19 11:48:45.567791 containerd[1759]: time="2025-03-19T11:48:45.567422926Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:48:45.571233 containerd[1759]: time="2025-03-19T11:48:45.571188191Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:48:45.572307 containerd[1759]: time="2025-03-19T11:48:45.571832103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 6.635473936s" Mar 19 11:48:45.572307 containerd[1759]: time="2025-03-19T11:48:45.571873203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 19 11:48:45.574538 containerd[1759]: time="2025-03-19T11:48:45.574508850Z" level=info msg="CreateContainer within sandbox \"b3f271d2ade65d58c0ba4c06465f09f598ba4313c97af0fee05893216b3a0e9d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 19 11:48:45.612074 containerd[1759]: time="2025-03-19T11:48:45.612023607Z" level=info msg="CreateContainer within sandbox \"b3f271d2ade65d58c0ba4c06465f09f598ba4313c97af0fee05893216b3a0e9d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"395aec6f5511a038dad19250cd0f10733239822abbf1b27ac4d0ba507bc3b017\"" Mar 19 11:48:45.612974 containerd[1759]: time="2025-03-19T11:48:45.612937623Z" level=info msg="StartContainer for \"395aec6f5511a038dad19250cd0f10733239822abbf1b27ac4d0ba507bc3b017\"" Mar 19 11:48:45.648931 systemd[1]: Started cri-containerd-395aec6f5511a038dad19250cd0f10733239822abbf1b27ac4d0ba507bc3b017.scope - libcontainer container 395aec6f5511a038dad19250cd0f10733239822abbf1b27ac4d0ba507bc3b017. 
Mar 19 11:48:45.683869 containerd[1759]: time="2025-03-19T11:48:45.683817964Z" level=info msg="StartContainer for \"395aec6f5511a038dad19250cd0f10733239822abbf1b27ac4d0ba507bc3b017\" returns successfully" Mar 19 11:48:45.957129 kubelet[2671]: E0319 11:48:45.957084 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:46.898829 kubelet[2671]: E0319 11:48:46.898285 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:46.957865 kubelet[2671]: E0319 11:48:46.957814 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:47.958069 kubelet[2671]: E0319 11:48:47.958026 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:48.898520 kubelet[2671]: E0319 11:48:48.897853 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:48.959170 kubelet[2671]: E0319 11:48:48.959120 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:49.959424 kubelet[2671]: E0319 11:48:49.959374 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:50.878190 systemd[1]: cri-containerd-395aec6f5511a038dad19250cd0f10733239822abbf1b27ac4d0ba507bc3b017.scope: Deactivated successfully. Mar 19 11:48:50.878639 systemd[1]: cri-containerd-395aec6f5511a038dad19250cd0f10733239822abbf1b27ac4d0ba507bc3b017.scope: Consumed 499ms CPU time, 175.1M memory peak, 154M written to disk. Mar 19 11:48:50.894426 kubelet[2671]: I0319 11:48:50.893822 2671 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 19 11:48:50.917074 systemd[1]: Created slice kubepods-besteffort-podc6bdfd40_fbd5_495c_8be7_71a16b2a295b.slice - libcontainer container kubepods-besteffort-podc6bdfd40_fbd5_495c_8be7_71a16b2a295b.slice. Mar 19 11:48:50.924274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-395aec6f5511a038dad19250cd0f10733239822abbf1b27ac4d0ba507bc3b017-rootfs.mount: Deactivated successfully. 
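Once install-cni finishes at 11:48:50 (its scope is deactivated after consuming roughly half a second of CPU and writing about 154M to disk), kubelet reports the node as Ready and immediately begins setting up the pending csi-node-driver-z4bfg pod: a best-effort QoS cgroup slice is created for it, named from the pod UID with dashes mapped to underscores. A small Go sketch of that slice-name construction (the naming pattern is inferred from the "Created slice" line itself):

    package main

    import (
        "fmt"
        "strings"
    )

    // besteffortSliceName builds the systemd slice name used for a BestEffort pod,
    // matching the "Created slice" message above.
    func besteffortSliceName(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(besteffortSliceName("c6bdfd40-fbd5-495c-8be7-71a16b2a295b"))
        // kubepods-besteffort-podc6bdfd40_fbd5_495c_8be7_71a16b2a295b.slice
    }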
Mar 19 11:48:50.928011 containerd[1759]: time="2025-03-19T11:48:50.927824539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:0,}" Mar 19 11:48:50.959802 kubelet[2671]: E0319 11:48:50.959729 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:51.960843 kubelet[2671]: E0319 11:48:51.960787 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:52.573536 containerd[1759]: time="2025-03-19T11:48:52.573377902Z" level=info msg="shim disconnected" id=395aec6f5511a038dad19250cd0f10733239822abbf1b27ac4d0ba507bc3b017 namespace=k8s.io Mar 19 11:48:52.573536 containerd[1759]: time="2025-03-19T11:48:52.573529604Z" level=warning msg="cleaning up after shim disconnected" id=395aec6f5511a038dad19250cd0f10733239822abbf1b27ac4d0ba507bc3b017 namespace=k8s.io Mar 19 11:48:52.573536 containerd[1759]: time="2025-03-19T11:48:52.573548005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:48:52.602882 containerd[1759]: time="2025-03-19T11:48:52.602818895Z" level=warning msg="cleanup warnings time=\"2025-03-19T11:48:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 19 11:48:52.666271 containerd[1759]: time="2025-03-19T11:48:52.666215157Z" level=error msg="Failed to destroy network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:52.669930 containerd[1759]: time="2025-03-19T11:48:52.667148572Z" level=error msg="encountered an error cleaning up failed sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:52.669930 containerd[1759]: time="2025-03-19T11:48:52.667241674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:52.670043 kubelet[2671]: E0319 11:48:52.669153 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:52.670043 kubelet[2671]: E0319 11:48:52.669230 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:52.670043 kubelet[2671]: E0319 11:48:52.669259 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:52.668749 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336-shm.mount: Deactivated successfully. Mar 19 11:48:52.670689 kubelet[2671]: E0319 11:48:52.669318 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:52.961061 kubelet[2671]: E0319 11:48:52.961010 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:52.963968 containerd[1759]: time="2025-03-19T11:48:52.963890743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 19 11:48:52.964539 kubelet[2671]: I0319 11:48:52.964508 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336" Mar 19 11:48:52.965246 containerd[1759]: time="2025-03-19T11:48:52.965196465Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" Mar 19 11:48:52.965496 containerd[1759]: time="2025-03-19T11:48:52.965466969Z" level=info msg="Ensure that sandbox f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336 in task-service has been cleanup successfully" Mar 19 11:48:52.966820 containerd[1759]: time="2025-03-19T11:48:52.965904977Z" level=info msg="TearDown network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" successfully" Mar 19 11:48:52.966820 containerd[1759]: time="2025-03-19T11:48:52.965936177Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" returns successfully" Mar 19 11:48:52.969742 containerd[1759]: time="2025-03-19T11:48:52.969478736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:1,}" Mar 19 11:48:52.970148 systemd[1]: run-netns-cni\x2d635a1e94\x2d0dbd\x2d9ecc\x2d67ac\x2d1669e2bea545.mount: Deactivated successfully. 
Mar 19 11:48:53.057691 containerd[1759]: time="2025-03-19T11:48:53.057626313Z" level=error msg="Failed to destroy network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:53.058071 containerd[1759]: time="2025-03-19T11:48:53.058022820Z" level=error msg="encountered an error cleaning up failed sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:53.058164 containerd[1759]: time="2025-03-19T11:48:53.058127821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:53.058425 kubelet[2671]: E0319 11:48:53.058391 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:53.058514 kubelet[2671]: E0319 11:48:53.058450 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:53.058514 kubelet[2671]: E0319 11:48:53.058478 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:53.058595 kubelet[2671]: E0319 11:48:53.058540 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:53.583563 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc-shm.mount: Deactivated successfully. Mar 19 11:48:53.961795 kubelet[2671]: E0319 11:48:53.961692 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:53.967607 kubelet[2671]: I0319 11:48:53.967574 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc" Mar 19 11:48:53.968662 containerd[1759]: time="2025-03-19T11:48:53.968187265Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\"" Mar 19 11:48:53.968662 containerd[1759]: time="2025-03-19T11:48:53.968467869Z" level=info msg="Ensure that sandbox f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc in task-service has been cleanup successfully" Mar 19 11:48:53.969862 containerd[1759]: time="2025-03-19T11:48:53.969367885Z" level=info msg="TearDown network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" successfully" Mar 19 11:48:53.969862 containerd[1759]: time="2025-03-19T11:48:53.969442086Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" returns successfully" Mar 19 11:48:53.970432 containerd[1759]: time="2025-03-19T11:48:53.969974595Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" Mar 19 11:48:53.970432 containerd[1759]: time="2025-03-19T11:48:53.970376501Z" level=info msg="TearDown network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" successfully" Mar 19 11:48:53.970432 containerd[1759]: time="2025-03-19T11:48:53.970405202Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" returns successfully" Mar 19 11:48:53.971306 containerd[1759]: time="2025-03-19T11:48:53.971128414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:2,}" Mar 19 11:48:53.972737 systemd[1]: run-netns-cni\x2d815eab7d\x2d0f4f\x2db814\x2dad00\x2d0b88954f3812.mount: Deactivated successfully. 
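A second pattern is visible above: because the CNI delete fails with the same stat error, each failed sandbox is marked SANDBOX_UNKNOWN and left behind, and before the next attempt the kubelet and containerd replay StopPodSandbox and TearDown for every earlier sandbox ID (here f0881d0f… and then f6852dd4… before Attempt:2). A rough Go model of that bookkeeping follows; it is only a sketch of the accumulate-then-tear-down behaviour seen in the log, not containerd's or the kubelet's actual implementation:

```go
package main

import "fmt"

// podRetrier models the pattern in the log: every failed RunPodSandbox leaves
// a sandbox ID behind, and each new attempt first replays StopPodSandbox /
// TearDown for all of them before trying again with Attempt incremented.
type podRetrier struct {
	leftover []string // sandbox IDs from earlier failed attempts
	attempt  int
}

func (p *podRetrier) runAttempt(create func(attempt int) (string, error)) error {
	for _, id := range p.leftover {
		fmt.Printf("StopPodSandbox %q: TearDown network, returns successfully\n", id)
	}
	id, err := create(p.attempt)
	p.attempt++
	if err != nil {
		p.leftover = append(p.leftover, id) // the failed sandbox stays around
		return err
	}
	p.leftover = nil // a successful attempt finally clears the backlog
	return nil
}

func main() {
	r := &podRetrier{}
	for i := 0; i < 3; i++ {
		err := r.runAttempt(func(attempt int) (string, error) {
			id := fmt.Sprintf("sandbox-for-attempt-%d", attempt)
			return id, fmt.Errorf("stat /var/lib/calico/nodename: no such file or directory")
		})
		fmt.Printf("Attempt:%d failed: %v\n", i, err)
	}
}
```

This is why the teardown chains in the log grow by one sandbox ID per failed attempt.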
Mar 19 11:48:54.050955 containerd[1759]: time="2025-03-19T11:48:54.050895950Z" level=error msg="Failed to destroy network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:54.051881 containerd[1759]: time="2025-03-19T11:48:54.051823066Z" level=error msg="encountered an error cleaning up failed sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:54.052015 containerd[1759]: time="2025-03-19T11:48:54.051915467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:54.053852 kubelet[2671]: E0319 11:48:54.052312 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:54.053852 kubelet[2671]: E0319 11:48:54.052380 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:54.053852 kubelet[2671]: E0319 11:48:54.052408 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:48:54.053968 kubelet[2671]: E0319 11:48:54.052456 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:48:54.054734 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd-shm.mount: Deactivated successfully. Mar 19 11:48:54.169086 kubelet[2671]: I0319 11:48:54.168976 2671 topology_manager.go:215] "Topology Admit Handler" podUID="8742a208-721b-4b01-b5fc-fcfe2d1cf6fa" podNamespace="default" podName="nginx-deployment-85f456d6dd-kzddm" Mar 19 11:48:54.185530 systemd[1]: Created slice kubepods-besteffort-pod8742a208_721b_4b01_b5fc_fcfe2d1cf6fa.slice - libcontainer container kubepods-besteffort-pod8742a208_721b_4b01_b5fc_fcfe2d1cf6fa.slice. Mar 19 11:48:54.358623 kubelet[2671]: I0319 11:48:54.357162 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79dsr\" (UniqueName: \"kubernetes.io/projected/8742a208-721b-4b01-b5fc-fcfe2d1cf6fa-kube-api-access-79dsr\") pod \"nginx-deployment-85f456d6dd-kzddm\" (UID: \"8742a208-721b-4b01-b5fc-fcfe2d1cf6fa\") " pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:48:54.493122 containerd[1759]: time="2025-03-19T11:48:54.493077857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:0,}" Mar 19 11:48:54.638712 containerd[1759]: time="2025-03-19T11:48:54.637712379Z" level=error msg="Failed to destroy network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:54.640331 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6-shm.mount: Deactivated successfully. 
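From here the same failure also hits an ordinary workload pod (nginx-deployment-85f456d6dd-kzddm), not just the Calico CSI driver pod: until the node's CNI is functional, any pod that needs pod networking fails at sandbox creation. A small stdlib-only scanner, fed journal text like the lines above on stdin, tallies the failed attempts per pod so the repetition is easy to see; the regular expression keys off the PodSandboxMetadata fields containerd prints and is purely illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// failedRun matches containerd's "RunPodSandbox ... failed" error messages as
// they appear in the journal lines above, capturing the pod name and the
// Attempt counter.
var failedRun = regexp.MustCompile(`RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*?Attempt:(\d+),\} failed`)

func main() {
	counts := map[string]int{}
	lastAttempt := map[string]string{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range failedRun.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
			lastAttempt[m[1]] = m[2]
		}
	}
	for pod, n := range counts {
		fmt.Printf("%-40s %d failed sandbox attempts (last Attempt:%s)\n", pod, n, lastAttempt[pod])
	}
}
```

Piping journal output for the containerd unit into this program would show both csi-node-driver-z4bfg and nginx-deployment-85f456d6dd-kzddm accumulating failures until the CNI comes up.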
Mar 19 11:48:54.642605 containerd[1759]: time="2025-03-19T11:48:54.642493759Z" level=error msg="encountered an error cleaning up failed sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:54.642720 containerd[1759]: time="2025-03-19T11:48:54.642600661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:54.643635 kubelet[2671]: E0319 11:48:54.642890 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:48:54.643635 kubelet[2671]: E0319 11:48:54.642964 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:48:54.643635 kubelet[2671]: E0319 11:48:54.642989 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:48:54.643862 kubelet[2671]: E0319 11:48:54.643043 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kzddm_default(8742a208-721b-4b01-b5fc-fcfe2d1cf6fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kzddm_default(8742a208-721b-4b01-b5fc-fcfe2d1cf6fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kzddm" podUID="8742a208-721b-4b01-b5fc-fcfe2d1cf6fa" Mar 19 11:48:54.962164 kubelet[2671]: E0319 11:48:54.962097 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:54.972886 kubelet[2671]: I0319 11:48:54.971847 2671 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd" Mar 19 11:48:54.973145 containerd[1759]: time="2025-03-19T11:48:54.972689890Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\"" Mar 19 11:48:54.973145 containerd[1759]: time="2025-03-19T11:48:54.972960495Z" level=info msg="Ensure that sandbox a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd in task-service has been cleanup successfully" Mar 19 11:48:54.975818 kubelet[2671]: I0319 11:48:54.974912 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6" Mar 19 11:48:54.975995 containerd[1759]: time="2025-03-19T11:48:54.975417936Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\"" Mar 19 11:48:54.975995 containerd[1759]: time="2025-03-19T11:48:54.975642940Z" level=info msg="Ensure that sandbox ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6 in task-service has been cleanup successfully" Mar 19 11:48:54.975995 containerd[1759]: time="2025-03-19T11:48:54.975827843Z" level=info msg="TearDown network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" successfully" Mar 19 11:48:54.975995 containerd[1759]: time="2025-03-19T11:48:54.975845743Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" returns successfully" Mar 19 11:48:54.976627 systemd[1]: run-netns-cni\x2d84ae7006\x2dabfc\x2d8e08\x2d22cf\x2d14ce9542033a.mount: Deactivated successfully. Mar 19 11:48:54.977963 containerd[1759]: time="2025-03-19T11:48:54.977917478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:1,}" Mar 19 11:48:54.978336 containerd[1759]: time="2025-03-19T11:48:54.978278284Z" level=info msg="TearDown network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" successfully" Mar 19 11:48:54.978336 containerd[1759]: time="2025-03-19T11:48:54.978300484Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" returns successfully" Mar 19 11:48:54.978822 containerd[1759]: time="2025-03-19T11:48:54.978759992Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\"" Mar 19 11:48:54.980127 containerd[1759]: time="2025-03-19T11:48:54.978871794Z" level=info msg="TearDown network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" successfully" Mar 19 11:48:54.980127 containerd[1759]: time="2025-03-19T11:48:54.978892494Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" returns successfully" Mar 19 11:48:54.980127 containerd[1759]: time="2025-03-19T11:48:54.979941112Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" Mar 19 11:48:54.980127 containerd[1759]: time="2025-03-19T11:48:54.980029613Z" level=info msg="TearDown network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" successfully" Mar 19 11:48:54.980127 containerd[1759]: time="2025-03-19T11:48:54.980043413Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" returns successfully" Mar 19 11:48:54.981853 
systemd[1]: run-netns-cni\x2dfc71c79a\x2d3b76\x2d1664\x2d1a13\x2d3384df34a0a3.mount: Deactivated successfully. Mar 19 11:48:54.982437 containerd[1759]: time="2025-03-19T11:48:54.982411853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:3,}" Mar 19 11:48:55.963125 kubelet[2671]: E0319 11:48:55.962957 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:56.964058 kubelet[2671]: E0319 11:48:56.963969 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:57.964698 kubelet[2671]: E0319 11:48:57.964642 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:58.965741 kubelet[2671]: E0319 11:48:58.965649 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:59.938910 kubelet[2671]: E0319 11:48:59.938846 2671 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:48:59.966600 kubelet[2671]: E0319 11:48:59.966528 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:00.242459 containerd[1759]: time="2025-03-19T11:49:00.242155374Z" level=error msg="Failed to destroy network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:00.243790 containerd[1759]: time="2025-03-19T11:49:00.243297493Z" level=error msg="encountered an error cleaning up failed sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:00.243790 containerd[1759]: time="2025-03-19T11:49:00.243389095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:00.243990 kubelet[2671]: E0319 11:49:00.243710 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:00.244747 kubelet[2671]: E0319 11:49:00.244262 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:49:00.244747 kubelet[2671]: E0319 11:49:00.244303 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:49:00.244747 kubelet[2671]: E0319 11:49:00.244362 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kzddm_default(8742a208-721b-4b01-b5fc-fcfe2d1cf6fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kzddm_default(8742a208-721b-4b01-b5fc-fcfe2d1cf6fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kzddm" podUID="8742a208-721b-4b01-b5fc-fcfe2d1cf6fa" Mar 19 11:49:00.247494 containerd[1759]: time="2025-03-19T11:49:00.247364860Z" level=error msg="Failed to destroy network for sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:00.248118 containerd[1759]: time="2025-03-19T11:49:00.247930069Z" level=error msg="encountered an error cleaning up failed sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:00.248118 containerd[1759]: time="2025-03-19T11:49:00.248008970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:00.248486 kubelet[2671]: E0319 11:49:00.248457 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:00.248800 kubelet[2671]: E0319 11:49:00.248710 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:49:00.248800 kubelet[2671]: E0319 11:49:00.248748 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:49:00.249198 kubelet[2671]: E0319 11:49:00.249049 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:49:00.967694 kubelet[2671]: E0319 11:49:00.967617 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:00.990573 kubelet[2671]: I0319 11:49:00.990536 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf" Mar 19 11:49:00.992183 containerd[1759]: time="2025-03-19T11:49:00.992136769Z" level=info msg="StopPodSandbox for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\"" Mar 19 11:49:00.992428 containerd[1759]: time="2025-03-19T11:49:00.992400773Z" level=info msg="Ensure that sandbox 541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf in task-service has been cleanup successfully" Mar 19 11:49:00.993251 kubelet[2671]: I0319 11:49:00.993215 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5" Mar 19 11:49:00.993344 containerd[1759]: time="2025-03-19T11:49:00.993270487Z" level=info msg="TearDown network for sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" successfully" Mar 19 11:49:00.993344 containerd[1759]: time="2025-03-19T11:49:00.993291488Z" level=info msg="StopPodSandbox for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" returns successfully" Mar 19 11:49:00.994304 containerd[1759]: time="2025-03-19T11:49:00.994070300Z" level=info msg="StopPodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\"" Mar 19 11:49:00.994304 containerd[1759]: time="2025-03-19T11:49:00.994273104Z" level=info msg="Ensure that sandbox 74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5 in task-service has been cleanup successfully" Mar 19 11:49:00.994507 containerd[1759]: time="2025-03-19T11:49:00.994484107Z" level=info msg="StopPodSandbox for 
\"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\"" Mar 19 11:49:00.994601 containerd[1759]: time="2025-03-19T11:49:00.994578209Z" level=info msg="TearDown network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" successfully" Mar 19 11:49:00.994601 containerd[1759]: time="2025-03-19T11:49:00.994596309Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" returns successfully" Mar 19 11:49:00.995207 containerd[1759]: time="2025-03-19T11:49:00.994889514Z" level=info msg="TearDown network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" successfully" Mar 19 11:49:00.995207 containerd[1759]: time="2025-03-19T11:49:00.994906514Z" level=info msg="StopPodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" returns successfully" Mar 19 11:49:00.995207 containerd[1759]: time="2025-03-19T11:49:00.995057717Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\"" Mar 19 11:49:00.995207 containerd[1759]: time="2025-03-19T11:49:00.995140918Z" level=info msg="TearDown network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" successfully" Mar 19 11:49:00.995207 containerd[1759]: time="2025-03-19T11:49:00.995152818Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" returns successfully" Mar 19 11:49:00.995908 containerd[1759]: time="2025-03-19T11:49:00.995711927Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" Mar 19 11:49:00.996046 containerd[1759]: time="2025-03-19T11:49:00.995935831Z" level=info msg="TearDown network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" successfully" Mar 19 11:49:00.996046 containerd[1759]: time="2025-03-19T11:49:00.995953631Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" returns successfully" Mar 19 11:49:00.996141 containerd[1759]: time="2025-03-19T11:49:00.996043933Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\"" Mar 19 11:49:00.996141 containerd[1759]: time="2025-03-19T11:49:00.996120934Z" level=info msg="TearDown network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" successfully" Mar 19 11:49:00.996141 containerd[1759]: time="2025-03-19T11:49:00.996133434Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" returns successfully" Mar 19 11:49:00.996751 containerd[1759]: time="2025-03-19T11:49:00.996630642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:4,}" Mar 19 11:49:00.997387 containerd[1759]: time="2025-03-19T11:49:00.997353754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:2,}" Mar 19 11:49:01.025509 systemd[1]: run-netns-cni\x2d0cbfedf1\x2d24cf\x2dec9f\x2d78a7\x2db4559c581a06.mount: Deactivated successfully. Mar 19 11:49:01.025651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf-shm.mount: Deactivated successfully. 
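The same errors also surface in the Kubernetes API as events on the affected pods, which is usually easier to watch than the raw journal. A hedged client-go sketch that lists events whose message contains the sandbox-setup error seen above; the kubeconfig handling and the message substring filter are assumptions added for illustration, not anything taken from this log:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Uses $KUBECONFIG if set, otherwise falls back to in-cluster config.
	// Both are assumptions about how the API server is reached.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	evs, err := cs.CoreV1().Events("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		// The substring is copied from the containerd errors in this log.
		if strings.Contains(e.Message, "failed to setup network for sandbox") {
			fmt.Printf("%s %s/%s: %s\n", e.Reason, e.InvolvedObject.Namespace, e.InvolvedObject.Name, e.Message)
		}
	}
}
```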
Mar 19 11:49:01.025743 systemd[1]: run-netns-cni\x2d0aabe057\x2d52c1\x2da1c2\x2dffdb\x2dddef32372ac1.mount: Deactivated successfully. Mar 19 11:49:01.025835 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5-shm.mount: Deactivated successfully. Mar 19 11:49:01.208829 containerd[1759]: time="2025-03-19T11:49:01.208296012Z" level=error msg="Failed to destroy network for sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:01.208829 containerd[1759]: time="2025-03-19T11:49:01.208803220Z" level=error msg="encountered an error cleaning up failed sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:01.209545 containerd[1759]: time="2025-03-19T11:49:01.208896122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:01.209684 kubelet[2671]: E0319 11:49:01.209144 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:01.209684 kubelet[2671]: E0319 11:49:01.209211 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:49:01.209684 kubelet[2671]: E0319 11:49:01.209244 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:49:01.209923 kubelet[2671]: E0319 11:49:01.209302 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kzddm_default(8742a208-721b-4b01-b5fc-fcfe2d1cf6fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kzddm_default(8742a208-721b-4b01-b5fc-fcfe2d1cf6fa)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kzddm" podUID="8742a208-721b-4b01-b5fc-fcfe2d1cf6fa" Mar 19 11:49:01.212685 containerd[1759]: time="2025-03-19T11:49:01.212643583Z" level=error msg="Failed to destroy network for sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:01.213288 containerd[1759]: time="2025-03-19T11:49:01.213253693Z" level=error msg="encountered an error cleaning up failed sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:01.213369 containerd[1759]: time="2025-03-19T11:49:01.213328595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:01.213582 kubelet[2671]: E0319 11:49:01.213530 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:01.213696 kubelet[2671]: E0319 11:49:01.213603 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:49:01.213696 kubelet[2671]: E0319 11:49:01.213634 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:49:01.213848 kubelet[2671]: E0319 11:49:01.213688 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:49:01.968092 kubelet[2671]: E0319 11:49:01.968013 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:02.000406 kubelet[2671]: I0319 11:49:01.999374 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c" Mar 19 11:49:02.000590 containerd[1759]: time="2025-03-19T11:49:02.000318296Z" level=info msg="StopPodSandbox for \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\"" Mar 19 11:49:02.001496 containerd[1759]: time="2025-03-19T11:49:02.000726702Z" level=info msg="Ensure that sandbox 09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c in task-service has been cleanup successfully" Mar 19 11:49:02.001496 containerd[1759]: time="2025-03-19T11:49:02.000930206Z" level=info msg="TearDown network for sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\" successfully" Mar 19 11:49:02.001496 containerd[1759]: time="2025-03-19T11:49:02.000951006Z" level=info msg="StopPodSandbox for \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\" returns successfully" Mar 19 11:49:02.002637 containerd[1759]: time="2025-03-19T11:49:02.002309928Z" level=info msg="StopPodSandbox for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\"" Mar 19 11:49:02.002637 containerd[1759]: time="2025-03-19T11:49:02.002403630Z" level=info msg="TearDown network for sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" successfully" Mar 19 11:49:02.002637 containerd[1759]: time="2025-03-19T11:49:02.002459631Z" level=info msg="StopPodSandbox for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" returns successfully" Mar 19 11:49:02.003400 containerd[1759]: time="2025-03-19T11:49:02.003219043Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\"" Mar 19 11:49:02.003946 containerd[1759]: time="2025-03-19T11:49:02.003870554Z" level=info msg="TearDown network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" successfully" Mar 19 11:49:02.004472 containerd[1759]: time="2025-03-19T11:49:02.003892754Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" returns successfully" Mar 19 11:49:02.005648 containerd[1759]: time="2025-03-19T11:49:02.005213176Z" level=info msg="StopPodSandbox for \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\"" Mar 19 11:49:02.005648 containerd[1759]: time="2025-03-19T11:49:02.005445080Z" level=info msg="Ensure that sandbox 7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c in task-service has been cleanup successfully" Mar 19 11:49:02.005761 kubelet[2671]: I0319 11:49:02.004705 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c" Mar 19 
11:49:02.005847 containerd[1759]: time="2025-03-19T11:49:02.005634883Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\"" Mar 19 11:49:02.005847 containerd[1759]: time="2025-03-19T11:49:02.005754085Z" level=info msg="TearDown network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" successfully" Mar 19 11:49:02.005847 containerd[1759]: time="2025-03-19T11:49:02.005816486Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" returns successfully" Mar 19 11:49:02.005966 containerd[1759]: time="2025-03-19T11:49:02.005915287Z" level=info msg="TearDown network for sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\" successfully" Mar 19 11:49:02.005966 containerd[1759]: time="2025-03-19T11:49:02.005929888Z" level=info msg="StopPodSandbox for \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\" returns successfully" Mar 19 11:49:02.007520 containerd[1759]: time="2025-03-19T11:49:02.007492713Z" level=info msg="StopPodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\"" Mar 19 11:49:02.007607 containerd[1759]: time="2025-03-19T11:49:02.007582215Z" level=info msg="TearDown network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" successfully" Mar 19 11:49:02.007607 containerd[1759]: time="2025-03-19T11:49:02.007598015Z" level=info msg="StopPodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" returns successfully" Mar 19 11:49:02.007697 containerd[1759]: time="2025-03-19T11:49:02.007679116Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" Mar 19 11:49:02.007788 containerd[1759]: time="2025-03-19T11:49:02.007756718Z" level=info msg="TearDown network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" successfully" Mar 19 11:49:02.007851 containerd[1759]: time="2025-03-19T11:49:02.007789618Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" returns successfully" Mar 19 11:49:02.008468 containerd[1759]: time="2025-03-19T11:49:02.008325727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:5,}" Mar 19 11:49:02.009039 containerd[1759]: time="2025-03-19T11:49:02.009007038Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\"" Mar 19 11:49:02.009120 containerd[1759]: time="2025-03-19T11:49:02.009101840Z" level=info msg="TearDown network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" successfully" Mar 19 11:49:02.009164 containerd[1759]: time="2025-03-19T11:49:02.009122040Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" returns successfully" Mar 19 11:49:02.009746 containerd[1759]: time="2025-03-19T11:49:02.009682349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:3,}" Mar 19 11:49:02.025430 systemd[1]: run-netns-cni\x2da76b019d\x2d5fb1\x2d2353\x2dc98d\x2dd5df4e43d3f1.mount: Deactivated successfully. 
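The run-netns-cni\x2d… mount units being deactivated above decode (systemd escapes "-" as \x2d) to bind-mounted pod network namespaces under /run/netns with a "cni-" prefix; each one belongs to a single sandbox attempt and disappears when that attempt is cleaned up. A stdlib-only sketch to list what is currently present on the node, with the directory and prefix taken from those unit names:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Lists the pod network namespaces corresponding to the
// run-netns-cni\x2d....mount units in the log: bind mounts under /run/netns
// whose names start with "cni-". Run on the node itself.
func main() {
	entries, err := os.ReadDir("/run/netns")
	if err != nil {
		fmt.Println("cannot read /run/netns:", err)
		return
	}
	for _, e := range entries {
		if strings.HasPrefix(e.Name(), "cni-") {
			fmt.Println("pod network namespace:", e.Name())
		}
	}
}
```

A steadily shrinking list here, matching the "Deactivated successfully" messages, indicates the cleanup side is working even while sandbox creation keeps failing.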
Mar 19 11:49:02.026324 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c-shm.mount: Deactivated successfully. Mar 19 11:49:02.026422 systemd[1]: run-netns-cni\x2d6a5bd9c6\x2d43bd\x2db898\x2de827\x2d8505063a4aa0.mount: Deactivated successfully. Mar 19 11:49:02.026504 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c-shm.mount: Deactivated successfully. Mar 19 11:49:02.208355 containerd[1759]: time="2025-03-19T11:49:02.208285105Z" level=error msg="Failed to destroy network for sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:02.210808 containerd[1759]: time="2025-03-19T11:49:02.208916515Z" level=error msg="encountered an error cleaning up failed sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:02.210808 containerd[1759]: time="2025-03-19T11:49:02.209008717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:02.212057 kubelet[2671]: E0319 11:49:02.209560 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:02.212057 kubelet[2671]: E0319 11:49:02.209651 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:49:02.212057 kubelet[2671]: E0319 11:49:02.209680 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:49:02.212225 kubelet[2671]: E0319 11:49:02.209754 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"nginx-deployment-85f456d6dd-kzddm_default(8742a208-721b-4b01-b5fc-fcfe2d1cf6fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kzddm_default(8742a208-721b-4b01-b5fc-fcfe2d1cf6fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kzddm" podUID="8742a208-721b-4b01-b5fc-fcfe2d1cf6fa" Mar 19 11:49:02.212496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d-shm.mount: Deactivated successfully. Mar 19 11:49:02.239616 containerd[1759]: time="2025-03-19T11:49:02.239472016Z" level=error msg="Failed to destroy network for sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:02.241098 containerd[1759]: time="2025-03-19T11:49:02.241049842Z" level=error msg="encountered an error cleaning up failed sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:02.241207 containerd[1759]: time="2025-03-19T11:49:02.241149344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:02.242447 kubelet[2671]: E0319 11:49:02.241757 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:02.242447 kubelet[2671]: E0319 11:49:02.241875 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:49:02.242447 kubelet[2671]: E0319 11:49:02.241911 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:49:02.242682 kubelet[2671]: E0319 11:49:02.241991 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:49:02.968657 kubelet[2671]: E0319 11:49:02.968596 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:03.010462 kubelet[2671]: I0319 11:49:03.010422 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5" Mar 19 11:49:03.011560 containerd[1759]: time="2025-03-19T11:49:03.011510772Z" level=info msg="StopPodSandbox for \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\"" Mar 19 11:49:03.012193 containerd[1759]: time="2025-03-19T11:49:03.011723875Z" level=info msg="Ensure that sandbox f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5 in task-service has been cleanup successfully" Mar 19 11:49:03.012193 containerd[1759]: time="2025-03-19T11:49:03.011951679Z" level=info msg="TearDown network for sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\" successfully" Mar 19 11:49:03.012193 containerd[1759]: time="2025-03-19T11:49:03.011974680Z" level=info msg="StopPodSandbox for \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\" returns successfully" Mar 19 11:49:03.013791 containerd[1759]: time="2025-03-19T11:49:03.012904695Z" level=info msg="StopPodSandbox for \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\"" Mar 19 11:49:03.013791 containerd[1759]: time="2025-03-19T11:49:03.013005396Z" level=info msg="TearDown network for sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\" successfully" Mar 19 11:49:03.013791 containerd[1759]: time="2025-03-19T11:49:03.013020597Z" level=info msg="StopPodSandbox for \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\" returns successfully" Mar 19 11:49:03.013791 containerd[1759]: time="2025-03-19T11:49:03.013449904Z" level=info msg="StopPodSandbox for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\"" Mar 19 11:49:03.013791 containerd[1759]: time="2025-03-19T11:49:03.013528705Z" level=info msg="TearDown network for sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" successfully" Mar 19 11:49:03.013791 containerd[1759]: time="2025-03-19T11:49:03.013542705Z" level=info msg="StopPodSandbox for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" returns successfully" Mar 19 11:49:03.014079 containerd[1759]: time="2025-03-19T11:49:03.013933612Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\"" Mar 19 11:49:03.014079 containerd[1759]: 
time="2025-03-19T11:49:03.014015313Z" level=info msg="TearDown network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" successfully" Mar 19 11:49:03.014079 containerd[1759]: time="2025-03-19T11:49:03.014029213Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" returns successfully" Mar 19 11:49:03.014585 containerd[1759]: time="2025-03-19T11:49:03.014562722Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\"" Mar 19 11:49:03.014852 containerd[1759]: time="2025-03-19T11:49:03.014822626Z" level=info msg="TearDown network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" successfully" Mar 19 11:49:03.014852 containerd[1759]: time="2025-03-19T11:49:03.014842327Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" returns successfully" Mar 19 11:49:03.016074 kubelet[2671]: I0319 11:49:03.015140 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d" Mar 19 11:49:03.016171 containerd[1759]: time="2025-03-19T11:49:03.015141931Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" Mar 19 11:49:03.016171 containerd[1759]: time="2025-03-19T11:49:03.015418936Z" level=info msg="TearDown network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" successfully" Mar 19 11:49:03.016171 containerd[1759]: time="2025-03-19T11:49:03.015435836Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" returns successfully" Mar 19 11:49:03.016903 containerd[1759]: time="2025-03-19T11:49:03.016463253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:6,}" Mar 19 11:49:03.023365 containerd[1759]: time="2025-03-19T11:49:03.022857758Z" level=info msg="StopPodSandbox for \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\"" Mar 19 11:49:03.023365 containerd[1759]: time="2025-03-19T11:49:03.023129462Z" level=info msg="Ensure that sandbox 6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d in task-service has been cleanup successfully" Mar 19 11:49:03.023489 containerd[1759]: time="2025-03-19T11:49:03.023395867Z" level=info msg="TearDown network for sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\" successfully" Mar 19 11:49:03.023489 containerd[1759]: time="2025-03-19T11:49:03.023415667Z" level=info msg="StopPodSandbox for \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\" returns successfully" Mar 19 11:49:03.024919 containerd[1759]: time="2025-03-19T11:49:03.024888991Z" level=info msg="StopPodSandbox for \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\"" Mar 19 11:49:03.025001 containerd[1759]: time="2025-03-19T11:49:03.024988493Z" level=info msg="TearDown network for sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\" successfully" Mar 19 11:49:03.025054 containerd[1759]: time="2025-03-19T11:49:03.025003693Z" level=info msg="StopPodSandbox for \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\" returns successfully" Mar 19 11:49:03.026361 systemd[1]: run-netns-cni\x2da1384846\x2d8856\x2d3556\x2d1d5e\x2dc4689d9d7b3d.mount: Deactivated 
successfully. Mar 19 11:49:03.027054 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5-shm.mount: Deactivated successfully. Mar 19 11:49:03.028394 containerd[1759]: time="2025-03-19T11:49:03.027415533Z" level=info msg="StopPodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\"" Mar 19 11:49:03.028394 containerd[1759]: time="2025-03-19T11:49:03.027496834Z" level=info msg="TearDown network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" successfully" Mar 19 11:49:03.028394 containerd[1759]: time="2025-03-19T11:49:03.027510434Z" level=info msg="StopPodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" returns successfully" Mar 19 11:49:03.028394 containerd[1759]: time="2025-03-19T11:49:03.028194645Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\"" Mar 19 11:49:03.028394 containerd[1759]: time="2025-03-19T11:49:03.028277147Z" level=info msg="TearDown network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" successfully" Mar 19 11:49:03.028394 containerd[1759]: time="2025-03-19T11:49:03.028291347Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" returns successfully" Mar 19 11:49:03.031917 systemd[1]: run-netns-cni\x2d08606206\x2d8bc3\x2d400a\x2dee98\x2d8e00ad31ddda.mount: Deactivated successfully. Mar 19 11:49:03.037210 containerd[1759]: time="2025-03-19T11:49:03.037171993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:4,}" Mar 19 11:49:03.179102 containerd[1759]: time="2025-03-19T11:49:03.178862815Z" level=error msg="Failed to destroy network for sandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:03.179366 containerd[1759]: time="2025-03-19T11:49:03.179326023Z" level=error msg="encountered an error cleaning up failed sandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:03.179457 containerd[1759]: time="2025-03-19T11:49:03.179418024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:03.179978 kubelet[2671]: E0319 11:49:03.179714 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 19 11:49:03.179978 kubelet[2671]: E0319 11:49:03.179807 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:49:03.179978 kubelet[2671]: E0319 11:49:03.179839 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z4bfg" Mar 19 11:49:03.180137 kubelet[2671]: E0319 11:49:03.179894 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z4bfg_calico-system(c6bdfd40-fbd5-495c-8be7-71a16b2a295b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z4bfg" podUID="c6bdfd40-fbd5-495c-8be7-71a16b2a295b" Mar 19 11:49:03.227274 containerd[1759]: time="2025-03-19T11:49:03.226921603Z" level=error msg="Failed to destroy network for sandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:03.230144 containerd[1759]: time="2025-03-19T11:49:03.229674748Z" level=error msg="encountered an error cleaning up failed sandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:03.230144 containerd[1759]: time="2025-03-19T11:49:03.229887652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:03.230521 kubelet[2671]: E0319 11:49:03.230472 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:49:03.230620 kubelet[2671]: E0319 11:49:03.230552 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:49:03.230620 kubelet[2671]: E0319 11:49:03.230590 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kzddm" Mar 19 11:49:03.230710 kubelet[2671]: E0319 11:49:03.230673 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kzddm_default(8742a208-721b-4b01-b5fc-fcfe2d1cf6fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kzddm_default(8742a208-721b-4b01-b5fc-fcfe2d1cf6fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kzddm" podUID="8742a208-721b-4b01-b5fc-fcfe2d1cf6fa" Mar 19 11:49:03.441072 containerd[1759]: time="2025-03-19T11:49:03.441006013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:03.443834 containerd[1759]: time="2025-03-19T11:49:03.443748858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 19 11:49:03.447155 containerd[1759]: time="2025-03-19T11:49:03.447109213Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:03.451690 containerd[1759]: time="2025-03-19T11:49:03.451651487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:03.452678 containerd[1759]: time="2025-03-19T11:49:03.452227597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 10.488281652s" Mar 19 11:49:03.452678 containerd[1759]: time="2025-03-19T11:49:03.452268997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference 
\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 19 11:49:03.460230 containerd[1759]: time="2025-03-19T11:49:03.460191927Z" level=info msg="CreateContainer within sandbox \"b3f271d2ade65d58c0ba4c06465f09f598ba4313c97af0fee05893216b3a0e9d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 19 11:49:03.512052 containerd[1759]: time="2025-03-19T11:49:03.511903975Z" level=info msg="CreateContainer within sandbox \"b3f271d2ade65d58c0ba4c06465f09f598ba4313c97af0fee05893216b3a0e9d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2e6362300159bad7cfe6167e73b182198f0240d7d0993e6087ec0dd89fc23213\"" Mar 19 11:49:03.513056 containerd[1759]: time="2025-03-19T11:49:03.513021793Z" level=info msg="StartContainer for \"2e6362300159bad7cfe6167e73b182198f0240d7d0993e6087ec0dd89fc23213\"" Mar 19 11:49:03.539962 systemd[1]: Started cri-containerd-2e6362300159bad7cfe6167e73b182198f0240d7d0993e6087ec0dd89fc23213.scope - libcontainer container 2e6362300159bad7cfe6167e73b182198f0240d7d0993e6087ec0dd89fc23213. Mar 19 11:49:03.572807 containerd[1759]: time="2025-03-19T11:49:03.572749972Z" level=info msg="StartContainer for \"2e6362300159bad7cfe6167e73b182198f0240d7d0993e6087ec0dd89fc23213\" returns successfully" Mar 19 11:49:03.692039 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 19 11:49:03.692175 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 19 11:49:03.969685 kubelet[2671]: E0319 11:49:03.969610 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:04.034809 kubelet[2671]: I0319 11:49:04.034425 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f" Mar 19 11:49:04.035344 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052-shm.mount: Deactivated successfully. Mar 19 11:49:04.035460 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f-shm.mount: Deactivated successfully. Mar 19 11:49:04.035539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930055198.mount: Deactivated successfully. 
Mar 19 11:49:04.040785 containerd[1759]: time="2025-03-19T11:49:04.036821480Z" level=info msg="StopPodSandbox for \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\"" Mar 19 11:49:04.040785 containerd[1759]: time="2025-03-19T11:49:04.037064284Z" level=info msg="Ensure that sandbox d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f in task-service has been cleanup successfully" Mar 19 11:49:04.040785 containerd[1759]: time="2025-03-19T11:49:04.037274487Z" level=info msg="TearDown network for sandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\" successfully" Mar 19 11:49:04.040785 containerd[1759]: time="2025-03-19T11:49:04.037293587Z" level=info msg="StopPodSandbox for \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\" returns successfully" Mar 19 11:49:04.047792 kubelet[2671]: I0319 11:49:04.041746 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.042694076Z" level=info msg="StopPodSandbox for \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\"" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.042944180Z" level=info msg="Ensure that sandbox 1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052 in task-service has been cleanup successfully" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.045787527Z" level=info msg="StopPodSandbox for \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\"" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.045875828Z" level=info msg="TearDown network for sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\" successfully" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.045889828Z" level=info msg="StopPodSandbox for \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\" returns successfully" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.046569539Z" level=info msg="StopPodSandbox for \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\"" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.046656541Z" level=info msg="TearDown network for sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\" successfully" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.046671341Z" level=info msg="StopPodSandbox for \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\" returns successfully" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.047109648Z" level=info msg="StopPodSandbox for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\"" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.047208650Z" level=info msg="TearDown network for sandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\" successfully" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.047289851Z" level=info msg="StopPodSandbox for \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\" returns successfully" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.047442654Z" level=info msg="TearDown network for sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" successfully" Mar 19 11:49:04.047897 containerd[1759]: time="2025-03-19T11:49:04.047461254Z" level=info msg="StopPodSandbox 
for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" returns successfully" Mar 19 11:49:04.042476 systemd[1]: run-netns-cni\x2d294f8ca4\x2d3055\x2db832\x2dac81\x2d0a326bf5e6f1.mount: Deactivated successfully. Mar 19 11:49:04.048466 containerd[1759]: time="2025-03-19T11:49:04.048259167Z" level=info msg="StopPodSandbox for \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\"" Mar 19 11:49:04.048466 containerd[1759]: time="2025-03-19T11:49:04.048350669Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\"" Mar 19 11:49:04.048466 containerd[1759]: time="2025-03-19T11:49:04.048440470Z" level=info msg="TearDown network for sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\" successfully" Mar 19 11:49:04.048466 containerd[1759]: time="2025-03-19T11:49:04.048454070Z" level=info msg="StopPodSandbox for \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\" returns successfully" Mar 19 11:49:04.046556 systemd[1]: run-netns-cni\x2df658a049\x2da765\x2d803a\x2d220c\x2d6e230162218c.mount: Deactivated successfully. Mar 19 11:49:04.048703 containerd[1759]: time="2025-03-19T11:49:04.048522172Z" level=info msg="TearDown network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" successfully" Mar 19 11:49:04.048703 containerd[1759]: time="2025-03-19T11:49:04.048536072Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" returns successfully" Mar 19 11:49:04.049613 containerd[1759]: time="2025-03-19T11:49:04.049056980Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\"" Mar 19 11:49:04.049613 containerd[1759]: time="2025-03-19T11:49:04.049180282Z" level=info msg="TearDown network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" successfully" Mar 19 11:49:04.049613 containerd[1759]: time="2025-03-19T11:49:04.049210483Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" returns successfully" Mar 19 11:49:04.049761 containerd[1759]: time="2025-03-19T11:49:04.049650990Z" level=info msg="StopPodSandbox for \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\"" Mar 19 11:49:04.049830 containerd[1759]: time="2025-03-19T11:49:04.049795492Z" level=info msg="TearDown network for sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\" successfully" Mar 19 11:49:04.049830 containerd[1759]: time="2025-03-19T11:49:04.049811393Z" level=info msg="StopPodSandbox for \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\" returns successfully" Mar 19 11:49:04.049914 containerd[1759]: time="2025-03-19T11:49:04.049678990Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" Mar 19 11:49:04.051678 containerd[1759]: time="2025-03-19T11:49:04.050723608Z" level=info msg="StopPodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\"" Mar 19 11:49:04.051678 containerd[1759]: time="2025-03-19T11:49:04.050900210Z" level=info msg="TearDown network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" successfully" Mar 19 11:49:04.051678 containerd[1759]: time="2025-03-19T11:49:04.051174515Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" returns successfully" Mar 19 11:49:04.052044 
containerd[1759]: time="2025-03-19T11:49:04.051956128Z" level=info msg="TearDown network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" successfully" Mar 19 11:49:04.052044 containerd[1759]: time="2025-03-19T11:49:04.051980928Z" level=info msg="StopPodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" returns successfully" Mar 19 11:49:04.052415 containerd[1759]: time="2025-03-19T11:49:04.052082130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:7,}" Mar 19 11:49:04.052486 containerd[1759]: time="2025-03-19T11:49:04.052440236Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\"" Mar 19 11:49:04.052540 containerd[1759]: time="2025-03-19T11:49:04.052521637Z" level=info msg="TearDown network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" successfully" Mar 19 11:49:04.052584 containerd[1759]: time="2025-03-19T11:49:04.052538737Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" returns successfully" Mar 19 11:49:04.053485 containerd[1759]: time="2025-03-19T11:49:04.053457852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:5,}" Mar 19 11:49:04.259948 systemd-networkd[1602]: cali809b5ac31d2: Link UP Mar 19 11:49:04.260742 systemd-networkd[1602]: cali809b5ac31d2: Gained carrier Mar 19 11:49:04.269836 kubelet[2671]: I0319 11:49:04.269588 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rgfrj" podStartSLOduration=7.647738336 podStartE2EDuration="44.26955791s" podCreationTimestamp="2025-03-19 11:48:20 +0000 UTC" firstStartedPulling="2025-03-19 11:48:26.831332338 +0000 UTC m=+7.261011537" lastFinishedPulling="2025-03-19 11:49:03.453151912 +0000 UTC m=+43.882831111" observedRunningTime="2025-03-19 11:49:04.045616024 +0000 UTC m=+44.475295323" watchObservedRunningTime="2025-03-19 11:49:04.26955791 +0000 UTC m=+44.699237209" Mar 19 11:49:04.273191 systemd-networkd[1602]: cali2a18744690f: Link UP Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.155 [INFO][3653] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.165 [INFO][3653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0 nginx-deployment-85f456d6dd- default 8742a208-721b-4b01-b5fc-fcfe2d1cf6fa 1284 0 2025-03-19 11:48:54 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.19 nginx-deployment-85f456d6dd-kzddm eth0 default [] [] [kns.default ksa.default.default] cali809b5ac31d2 [] []}} ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" Namespace="default" Pod="nginx-deployment-85f456d6dd-kzddm" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.165 [INFO][3653] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" Namespace="default" 
Pod="nginx-deployment-85f456d6dd-kzddm" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.203 [INFO][3668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" HandleID="k8s-pod-network.2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" Workload="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.213 [INFO][3668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" HandleID="k8s-pod-network.2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" Workload="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031aae0), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.19", "pod":"nginx-deployment-85f456d6dd-kzddm", "timestamp":"2025-03-19 11:49:04.203595722 +0000 UTC"}, Hostname:"10.200.8.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.213 [INFO][3668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.214 [INFO][3668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.214 [INFO][3668] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.19' Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.215 [INFO][3668] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" host="10.200.8.19" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.219 [INFO][3668] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.19" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.222 [INFO][3668] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="10.200.8.19" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.224 [INFO][3668] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="10.200.8.19" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.225 [INFO][3668] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="10.200.8.19" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.225 [INFO][3668] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" host="10.200.8.19" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.227 [INFO][3668] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327 Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.231 [INFO][3668] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" host="10.200.8.19" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.238 [INFO][3668] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.129/26] 
block=192.168.41.128/26 handle="k8s-pod-network.2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" host="10.200.8.19" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.238 [INFO][3668] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.129/26] handle="k8s-pod-network.2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" host="10.200.8.19" Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.238 [INFO][3668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 19 11:49:04.273525 containerd[1759]: 2025-03-19 11:49:04.238 [INFO][3668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.129/26] IPv6=[] ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" HandleID="k8s-pod-network.2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" Workload="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0" Mar 19 11:49:04.275390 containerd[1759]: 2025-03-19 11:49:04.241 [INFO][3653] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" Namespace="default" Pod="nginx-deployment-85f456d6dd-kzddm" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8742a208-721b-4b01-b5fc-fcfe2d1cf6fa", ResourceVersion:"1284", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-kzddm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali809b5ac31d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:49:04.275390 containerd[1759]: 2025-03-19 11:49:04.241 [INFO][3653] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.129/32] ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" Namespace="default" Pod="nginx-deployment-85f456d6dd-kzddm" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0" Mar 19 11:49:04.275390 containerd[1759]: 2025-03-19 11:49:04.241 [INFO][3653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali809b5ac31d2 ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" Namespace="default" Pod="nginx-deployment-85f456d6dd-kzddm" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0" Mar 19 11:49:04.275390 containerd[1759]: 2025-03-19 11:49:04.258 [INFO][3653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" 
Namespace="default" Pod="nginx-deployment-85f456d6dd-kzddm" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0" Mar 19 11:49:04.275390 containerd[1759]: 2025-03-19 11:49:04.258 [INFO][3653] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" Namespace="default" Pod="nginx-deployment-85f456d6dd-kzddm" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8742a208-721b-4b01-b5fc-fcfe2d1cf6fa", ResourceVersion:"1284", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327", Pod:"nginx-deployment-85f456d6dd-kzddm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali809b5ac31d2", MAC:"ae:fb:c9:6f:61:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:49:04.275390 containerd[1759]: 2025-03-19 11:49:04.270 [INFO][3653] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327" Namespace="default" Pod="nginx-deployment-85f456d6dd-kzddm" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kzddm-eth0" Mar 19 11:49:04.273959 systemd-networkd[1602]: cali2a18744690f: Gained carrier Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.149 [INFO][3642] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.165 [INFO][3642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.19-k8s-csi--node--driver--z4bfg-eth0 csi-node-driver- calico-system c6bdfd40-fbd5-495c-8be7-71a16b2a295b 1169 0 2025-03-19 11:48:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.8.19 csi-node-driver-z4bfg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2a18744690f [] []}} ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Namespace="calico-system" Pod="csi-node-driver-z4bfg" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--z4bfg-" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.165 [INFO][3642] cni-plugin/k8s.go 
77: Extracted identifiers for CmdAddK8s ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Namespace="calico-system" Pod="csi-node-driver-z4bfg" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--z4bfg-eth0" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.203 [INFO][3666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" HandleID="k8s-pod-network.36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Workload="10.200.8.19-k8s-csi--node--driver--z4bfg-eth0" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.213 [INFO][3666] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" HandleID="k8s-pod-network.36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Workload="10.200.8.19-k8s-csi--node--driver--z4bfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bd180), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.8.19", "pod":"csi-node-driver-z4bfg", "timestamp":"2025-03-19 11:49:04.203415719 +0000 UTC"}, Hostname:"10.200.8.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.213 [INFO][3666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.238 [INFO][3666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.238 [INFO][3666] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.19' Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.240 [INFO][3666] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" host="10.200.8.19" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.243 [INFO][3666] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.19" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.247 [INFO][3666] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="10.200.8.19" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.249 [INFO][3666] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="10.200.8.19" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.250 [INFO][3666] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="10.200.8.19" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.250 [INFO][3666] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" host="10.200.8.19" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.252 [INFO][3666] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.256 [INFO][3666] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" host="10.200.8.19" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 
11:49:04.266 [INFO][3666] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.130/26] block=192.168.41.128/26 handle="k8s-pod-network.36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" host="10.200.8.19" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.266 [INFO][3666] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.130/26] handle="k8s-pod-network.36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" host="10.200.8.19" Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.266 [INFO][3666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 19 11:49:04.288625 containerd[1759]: 2025-03-19 11:49:04.266 [INFO][3666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.130/26] IPv6=[] ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" HandleID="k8s-pod-network.36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Workload="10.200.8.19-k8s-csi--node--driver--z4bfg-eth0" Mar 19 11:49:04.289637 containerd[1759]: 2025-03-19 11:49:04.270 [INFO][3642] cni-plugin/k8s.go 386: Populated endpoint ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Namespace="calico-system" Pod="csi-node-driver-z4bfg" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--z4bfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-csi--node--driver--z4bfg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6bdfd40-fbd5-495c-8be7-71a16b2a295b", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"", Pod:"csi-node-driver-z4bfg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a18744690f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:49:04.289637 containerd[1759]: 2025-03-19 11:49:04.270 [INFO][3642] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.130/32] ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Namespace="calico-system" Pod="csi-node-driver-z4bfg" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--z4bfg-eth0" Mar 19 11:49:04.289637 containerd[1759]: 2025-03-19 11:49:04.270 [INFO][3642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a18744690f ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Namespace="calico-system" Pod="csi-node-driver-z4bfg" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--z4bfg-eth0" Mar 19 11:49:04.289637 containerd[1759]: 2025-03-19 11:49:04.276 
[INFO][3642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Namespace="calico-system" Pod="csi-node-driver-z4bfg" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--z4bfg-eth0" Mar 19 11:49:04.289637 containerd[1759]: 2025-03-19 11:49:04.276 [INFO][3642] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Namespace="calico-system" Pod="csi-node-driver-z4bfg" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--z4bfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-csi--node--driver--z4bfg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6bdfd40-fbd5-495c-8be7-71a16b2a295b", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec", Pod:"csi-node-driver-z4bfg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a18744690f", MAC:"a2:e8:c6:ed:d3:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:49:04.289637 containerd[1759]: 2025-03-19 11:49:04.287 [INFO][3642] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec" Namespace="calico-system" Pod="csi-node-driver-z4bfg" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--z4bfg-eth0" Mar 19 11:49:04.310098 containerd[1759]: time="2025-03-19T11:49:04.309727872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:49:04.310098 containerd[1759]: time="2025-03-19T11:49:04.309878174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:49:04.310098 containerd[1759]: time="2025-03-19T11:49:04.309936175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:49:04.310540 containerd[1759]: time="2025-03-19T11:49:04.310202680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:49:04.324783 containerd[1759]: time="2025-03-19T11:49:04.324440814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:49:04.324783 containerd[1759]: time="2025-03-19T11:49:04.324520116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:49:04.324783 containerd[1759]: time="2025-03-19T11:49:04.324536816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:49:04.325048 containerd[1759]: time="2025-03-19T11:49:04.324780020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:49:04.333301 systemd[1]: Started cri-containerd-2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327.scope - libcontainer container 2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327. Mar 19 11:49:04.353931 systemd[1]: Started cri-containerd-36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec.scope - libcontainer container 36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec. Mar 19 11:49:04.389916 containerd[1759]: time="2025-03-19T11:49:04.389729991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z4bfg,Uid:c6bdfd40-fbd5-495c-8be7-71a16b2a295b,Namespace:calico-system,Attempt:7,} returns sandbox id \"36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec\"" Mar 19 11:49:04.393083 containerd[1759]: time="2025-03-19T11:49:04.392973844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 19 11:49:04.399843 containerd[1759]: time="2025-03-19T11:49:04.399591453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kzddm,Uid:8742a208-721b-4b01-b5fc-fcfe2d1cf6fa,Namespace:default,Attempt:5,} returns sandbox id \"2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327\"" Mar 19 11:49:04.970618 kubelet[2671]: E0319 11:49:04.970542 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:05.259001 kernel: bpftool[3905]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 19 11:49:05.339924 systemd-networkd[1602]: cali809b5ac31d2: Gained IPv6LL Mar 19 11:49:05.568125 systemd-networkd[1602]: vxlan.calico: Link UP Mar 19 11:49:05.568371 systemd-networkd[1602]: vxlan.calico: Gained carrier Mar 19 11:49:05.825880 containerd[1759]: time="2025-03-19T11:49:05.825825765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:05.829029 containerd[1759]: time="2025-03-19T11:49:05.828876615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 19 11:49:05.833124 containerd[1759]: time="2025-03-19T11:49:05.833033784Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:05.847790 containerd[1759]: time="2025-03-19T11:49:05.847276119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:05.849616 containerd[1759]: time="2025-03-19T11:49:05.849578857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id 
\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.456542912s" Mar 19 11:49:05.850453 containerd[1759]: time="2025-03-19T11:49:05.850426571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 19 11:49:05.852489 containerd[1759]: time="2025-03-19T11:49:05.852085198Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 19 11:49:05.853780 containerd[1759]: time="2025-03-19T11:49:05.853584623Z" level=info msg="CreateContainer within sandbox \"36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 19 11:49:05.971628 kubelet[2671]: E0319 11:49:05.971561 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:06.161635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880398880.mount: Deactivated successfully. Mar 19 11:49:06.300046 systemd-networkd[1602]: cali2a18744690f: Gained IPv6LL Mar 19 11:49:06.406736 containerd[1759]: time="2025-03-19T11:49:06.406665641Z" level=info msg="CreateContainer within sandbox \"36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f5fcbe58fff1768b9d7bb1eddde1b765f446146bbfbaf7a16e131afca0d5e5aa\"" Mar 19 11:49:06.407527 containerd[1759]: time="2025-03-19T11:49:06.407478254Z" level=info msg="StartContainer for \"f5fcbe58fff1768b9d7bb1eddde1b765f446146bbfbaf7a16e131afca0d5e5aa\"" Mar 19 11:49:06.453511 systemd[1]: Started cri-containerd-f5fcbe58fff1768b9d7bb1eddde1b765f446146bbfbaf7a16e131afca0d5e5aa.scope - libcontainer container f5fcbe58fff1768b9d7bb1eddde1b765f446146bbfbaf7a16e131afca0d5e5aa. 
Mar 19 11:49:06.491721 containerd[1759]: time="2025-03-19T11:49:06.491665042Z" level=info msg="StartContainer for \"f5fcbe58fff1768b9d7bb1eddde1b765f446146bbfbaf7a16e131afca0d5e5aa\" returns successfully" Mar 19 11:49:06.876188 systemd-networkd[1602]: vxlan.calico: Gained IPv6LL Mar 19 11:49:06.971789 kubelet[2671]: E0319 11:49:06.971738 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:07.972305 kubelet[2671]: E0319 11:49:07.972239 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:08.973108 kubelet[2671]: E0319 11:49:08.973018 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:09.973464 kubelet[2671]: E0319 11:49:09.973413 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:10.973667 kubelet[2671]: E0319 11:49:10.973611 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:11.974822 kubelet[2671]: E0319 11:49:11.974744 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:12.975805 kubelet[2671]: E0319 11:49:12.975155 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:13.470777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937347186.mount: Deactivated successfully. Mar 19 11:49:13.976057 kubelet[2671]: E0319 11:49:13.975992 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:14.780758 containerd[1759]: time="2025-03-19T11:49:14.780698124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:14.783319 containerd[1759]: time="2025-03-19T11:49:14.783249667Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131" Mar 19 11:49:14.786796 containerd[1759]: time="2025-03-19T11:49:14.786719725Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:14.794257 containerd[1759]: time="2025-03-19T11:49:14.794113348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:14.795286 containerd[1759]: time="2025-03-19T11:49:14.795129865Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 8.943009565s" Mar 19 11:49:14.795286 containerd[1759]: time="2025-03-19T11:49:14.795170165Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 19 11:49:14.796589 containerd[1759]: time="2025-03-19T11:49:14.796364885Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 19 11:49:14.798487 containerd[1759]: time="2025-03-19T11:49:14.798455620Z" level=info msg="CreateContainer within sandbox \"2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 19 11:49:14.848166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194135896.mount: Deactivated successfully. Mar 19 11:49:14.853869 containerd[1759]: time="2025-03-19T11:49:14.853823842Z" level=info msg="CreateContainer within sandbox \"2bac700fc6d60c536365c4ced1e6699c560ae38a04a867aa70290329137d5327\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"cd0db848febc0a4001c1a3beb127d3c0fa31bfec874a49e8f5c60ff648e4d559\"" Mar 19 11:49:14.854510 containerd[1759]: time="2025-03-19T11:49:14.854479753Z" level=info msg="StartContainer for \"cd0db848febc0a4001c1a3beb127d3c0fa31bfec874a49e8f5c60ff648e4d559\"" Mar 19 11:49:14.895958 systemd[1]: Started cri-containerd-cd0db848febc0a4001c1a3beb127d3c0fa31bfec874a49e8f5c60ff648e4d559.scope - libcontainer container cd0db848febc0a4001c1a3beb127d3c0fa31bfec874a49e8f5c60ff648e4d559. Mar 19 11:49:14.932605 containerd[1759]: time="2025-03-19T11:49:14.932547452Z" level=info msg="StartContainer for \"cd0db848febc0a4001c1a3beb127d3c0fa31bfec874a49e8f5c60ff648e4d559\" returns successfully" Mar 19 11:49:14.976580 kubelet[2671]: E0319 11:49:14.976516 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:15.977410 kubelet[2671]: E0319 11:49:15.977353 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:16.331396 containerd[1759]: time="2025-03-19T11:49:16.330950134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:16.332991 containerd[1759]: time="2025-03-19T11:49:16.332851166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 19 11:49:16.336501 containerd[1759]: time="2025-03-19T11:49:16.336314923Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:16.341068 containerd[1759]: time="2025-03-19T11:49:16.341037902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:16.341645 containerd[1759]: time="2025-03-19T11:49:16.341607512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.545206726s" Mar 19 11:49:16.341731 containerd[1759]: time="2025-03-19T11:49:16.341651812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 19 11:49:16.344162 containerd[1759]: 
time="2025-03-19T11:49:16.344136154Z" level=info msg="CreateContainer within sandbox \"36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 19 11:49:16.384842 containerd[1759]: time="2025-03-19T11:49:16.384794031Z" level=info msg="CreateContainer within sandbox \"36337c867ed19854cd8551e6a83579806fa6cd5fdc8262ab34721a86f456f9ec\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"09d0e3dbe3081fd34daf95431844d8815c1ef072ef7e3db97c0a21f4a65d7aa2\"" Mar 19 11:49:16.385484 containerd[1759]: time="2025-03-19T11:49:16.385422641Z" level=info msg="StartContainer for \"09d0e3dbe3081fd34daf95431844d8815c1ef072ef7e3db97c0a21f4a65d7aa2\"" Mar 19 11:49:16.421957 systemd[1]: Started cri-containerd-09d0e3dbe3081fd34daf95431844d8815c1ef072ef7e3db97c0a21f4a65d7aa2.scope - libcontainer container 09d0e3dbe3081fd34daf95431844d8815c1ef072ef7e3db97c0a21f4a65d7aa2. Mar 19 11:49:16.457072 containerd[1759]: time="2025-03-19T11:49:16.457034533Z" level=info msg="StartContainer for \"09d0e3dbe3081fd34daf95431844d8815c1ef072ef7e3db97c0a21f4a65d7aa2\" returns successfully" Mar 19 11:49:16.955147 kubelet[2671]: I0319 11:49:16.955111 2671 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 19 11:49:16.955147 kubelet[2671]: I0319 11:49:16.955147 2671 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 19 11:49:16.977997 kubelet[2671]: E0319 11:49:16.977955 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:17.109503 kubelet[2671]: I0319 11:49:17.109439 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z4bfg" podStartSLOduration=45.158680792 podStartE2EDuration="57.109421195s" podCreationTimestamp="2025-03-19 11:48:20 +0000 UTC" firstStartedPulling="2025-03-19 11:49:04.392086029 +0000 UTC m=+44.821765328" lastFinishedPulling="2025-03-19 11:49:16.342826532 +0000 UTC m=+56.772505731" observedRunningTime="2025-03-19 11:49:17.109181991 +0000 UTC m=+57.538861290" watchObservedRunningTime="2025-03-19 11:49:17.109421195 +0000 UTC m=+57.539100394" Mar 19 11:49:17.109754 kubelet[2671]: I0319 11:49:17.109597 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-kzddm" podStartSLOduration=12.715095604 podStartE2EDuration="23.109589298s" podCreationTimestamp="2025-03-19 11:48:54 +0000 UTC" firstStartedPulling="2025-03-19 11:49:04.401687188 +0000 UTC m=+44.831366387" lastFinishedPulling="2025-03-19 11:49:14.796180882 +0000 UTC m=+55.225860081" observedRunningTime="2025-03-19 11:49:15.097253895 +0000 UTC m=+55.526933094" watchObservedRunningTime="2025-03-19 11:49:17.109589298 +0000 UTC m=+57.539268497" Mar 19 11:49:17.978675 kubelet[2671]: E0319 11:49:17.978611 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:18.979466 kubelet[2671]: E0319 11:49:18.979404 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:19.939203 kubelet[2671]: E0319 11:49:19.939134 2671 file.go:104] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:19.979882 kubelet[2671]: E0319 11:49:19.979648 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:19.980406 containerd[1759]: time="2025-03-19T11:49:19.980375493Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" Mar 19 11:49:19.980720 containerd[1759]: time="2025-03-19T11:49:19.980502595Z" level=info msg="TearDown network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" successfully" Mar 19 11:49:19.980720 containerd[1759]: time="2025-03-19T11:49:19.980518095Z" level=info msg="StopPodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" returns successfully" Mar 19 11:49:19.981086 containerd[1759]: time="2025-03-19T11:49:19.981058504Z" level=info msg="RemovePodSandbox for \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" Mar 19 11:49:19.981086 containerd[1759]: time="2025-03-19T11:49:19.981087705Z" level=info msg="Forcibly stopping sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\"" Mar 19 11:49:19.981230 containerd[1759]: time="2025-03-19T11:49:19.981178706Z" level=info msg="TearDown network for sandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" successfully" Mar 19 11:49:19.987802 containerd[1759]: time="2025-03-19T11:49:19.987755516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:49:19.987953 containerd[1759]: time="2025-03-19T11:49:19.987920718Z" level=info msg="RemovePodSandbox \"f6852dd4684e53c4995d661574cf91f109c144efd6021fff3667a1e889b8b336\" returns successfully" Mar 19 11:49:19.988349 containerd[1759]: time="2025-03-19T11:49:19.988317025Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\"" Mar 19 11:49:19.988601 containerd[1759]: time="2025-03-19T11:49:19.988419327Z" level=info msg="TearDown network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" successfully" Mar 19 11:49:19.988601 containerd[1759]: time="2025-03-19T11:49:19.988435527Z" level=info msg="StopPodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" returns successfully" Mar 19 11:49:19.988720 containerd[1759]: time="2025-03-19T11:49:19.988682331Z" level=info msg="RemovePodSandbox for \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\"" Mar 19 11:49:19.988720 containerd[1759]: time="2025-03-19T11:49:19.988706831Z" level=info msg="Forcibly stopping sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\"" Mar 19 11:49:19.988840 containerd[1759]: time="2025-03-19T11:49:19.988798233Z" level=info msg="TearDown network for sandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" successfully" Mar 19 11:49:19.995543 containerd[1759]: time="2025-03-19T11:49:19.995515745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:49:19.995639 containerd[1759]: time="2025-03-19T11:49:19.995554145Z" level=info msg="RemovePodSandbox \"f0881d0fd8d321c7951f59c7f3c6ae303bdecd65f53b3cf42d6fe02aeba5b7bc\" returns successfully" Mar 19 11:49:19.995912 containerd[1759]: time="2025-03-19T11:49:19.995883751Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\"" Mar 19 11:49:19.996000 containerd[1759]: time="2025-03-19T11:49:19.995971652Z" level=info msg="TearDown network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" successfully" Mar 19 11:49:19.996000 containerd[1759]: time="2025-03-19T11:49:19.995986853Z" level=info msg="StopPodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" returns successfully" Mar 19 11:49:19.996296 containerd[1759]: time="2025-03-19T11:49:19.996236157Z" level=info msg="RemovePodSandbox for \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\"" Mar 19 11:49:19.996296 containerd[1759]: time="2025-03-19T11:49:19.996267357Z" level=info msg="Forcibly stopping sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\"" Mar 19 11:49:19.996413 containerd[1759]: time="2025-03-19T11:49:19.996370259Z" level=info msg="TearDown network for sandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" successfully" Mar 19 11:49:20.005369 containerd[1759]: time="2025-03-19T11:49:20.005342308Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:49:20.005461 containerd[1759]: time="2025-03-19T11:49:20.005382309Z" level=info msg="RemovePodSandbox \"a00b16ed51487afc92c58418faf4077253ffd990dee588538f85557bad7e4afd\" returns successfully" Mar 19 11:49:20.005711 containerd[1759]: time="2025-03-19T11:49:20.005678914Z" level=info msg="StopPodSandbox for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\"" Mar 19 11:49:20.005862 containerd[1759]: time="2025-03-19T11:49:20.005808416Z" level=info msg="TearDown network for sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" successfully" Mar 19 11:49:20.005862 containerd[1759]: time="2025-03-19T11:49:20.005853817Z" level=info msg="StopPodSandbox for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" returns successfully" Mar 19 11:49:20.006235 containerd[1759]: time="2025-03-19T11:49:20.006190422Z" level=info msg="RemovePodSandbox for \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\"" Mar 19 11:49:20.006235 containerd[1759]: time="2025-03-19T11:49:20.006219023Z" level=info msg="Forcibly stopping sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\"" Mar 19 11:49:20.006360 containerd[1759]: time="2025-03-19T11:49:20.006291024Z" level=info msg="TearDown network for sandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" successfully" Mar 19 11:49:20.012948 containerd[1759]: time="2025-03-19T11:49:20.012920734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
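[Editor's note] Alongside the sandbox cleanup above and below, the kubelet keeps logging "Unable to read config path" for /etc/kubernetes/manifests roughly once a second. That message comes from the kubelet's static-pod file source: the configured path does not exist on this node, so the kubelet ignores it and retries, which is harmless unless static pods are expected. A minimal sketch of the existence check behind the message, plus the directory creation that quiets it (creating it is a policy choice, not something this log calls for):

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// The path the kubelet's file source is polling in this log.
	const staticPodPath = "/etc/kubernetes/manifests"

	if _, err := os.Stat(staticPodPath); os.IsNotExist(err) {
		// This is the condition behind "Unable to read config path ... ignoring".
		fmt.Println("staticPodPath missing; kubelet ignores it and retries")

		// Creating the directory silences the message (only do this if
		// static pods are actually wanted on the node).
		if err := os.MkdirAll(staticPodPath, 0o755); err != nil {
			log.Fatal(err)
		}
	}
}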
Mar 19 11:49:20.013029 containerd[1759]: time="2025-03-19T11:49:20.012961635Z" level=info msg="RemovePodSandbox \"541a1f830a89a7995ddcccd39dfc2bc402b2bcbbe5c99d520a9424c2d86df9bf\" returns successfully" Mar 19 11:49:20.013294 containerd[1759]: time="2025-03-19T11:49:20.013258840Z" level=info msg="StopPodSandbox for \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\"" Mar 19 11:49:20.013365 containerd[1759]: time="2025-03-19T11:49:20.013351542Z" level=info msg="TearDown network for sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\" successfully" Mar 19 11:49:20.013418 containerd[1759]: time="2025-03-19T11:49:20.013366742Z" level=info msg="StopPodSandbox for \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\" returns successfully" Mar 19 11:49:20.013669 containerd[1759]: time="2025-03-19T11:49:20.013647447Z" level=info msg="RemovePodSandbox for \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\"" Mar 19 11:49:20.013793 containerd[1759]: time="2025-03-19T11:49:20.013756448Z" level=info msg="Forcibly stopping sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\"" Mar 19 11:49:20.013890 containerd[1759]: time="2025-03-19T11:49:20.013848750Z" level=info msg="TearDown network for sandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\" successfully" Mar 19 11:49:20.024547 containerd[1759]: time="2025-03-19T11:49:20.024501427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:49:20.024678 containerd[1759]: time="2025-03-19T11:49:20.024553928Z" level=info msg="RemovePodSandbox \"09bd59c6fd2970d4747bc981af8ce45f79acff8d1775fb04e041dec6592b252c\" returns successfully" Mar 19 11:49:20.024985 containerd[1759]: time="2025-03-19T11:49:20.024953635Z" level=info msg="StopPodSandbox for \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\"" Mar 19 11:49:20.025082 containerd[1759]: time="2025-03-19T11:49:20.025056537Z" level=info msg="TearDown network for sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\" successfully" Mar 19 11:49:20.025082 containerd[1759]: time="2025-03-19T11:49:20.025073237Z" level=info msg="StopPodSandbox for \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\" returns successfully" Mar 19 11:49:20.025459 containerd[1759]: time="2025-03-19T11:49:20.025397142Z" level=info msg="RemovePodSandbox for \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\"" Mar 19 11:49:20.025459 containerd[1759]: time="2025-03-19T11:49:20.025428143Z" level=info msg="Forcibly stopping sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\"" Mar 19 11:49:20.025570 containerd[1759]: time="2025-03-19T11:49:20.025507144Z" level=info msg="TearDown network for sandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\" successfully" Mar 19 11:49:20.033961 containerd[1759]: time="2025-03-19T11:49:20.033932884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:49:20.034053 containerd[1759]: time="2025-03-19T11:49:20.033975585Z" level=info msg="RemovePodSandbox \"f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5\" returns successfully" Mar 19 11:49:20.034323 containerd[1759]: time="2025-03-19T11:49:20.034285390Z" level=info msg="StopPodSandbox for \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\"" Mar 19 11:49:20.034397 containerd[1759]: time="2025-03-19T11:49:20.034383392Z" level=info msg="TearDown network for sandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\" successfully" Mar 19 11:49:20.034441 containerd[1759]: time="2025-03-19T11:49:20.034398392Z" level=info msg="StopPodSandbox for \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\" returns successfully" Mar 19 11:49:20.034723 containerd[1759]: time="2025-03-19T11:49:20.034694797Z" level=info msg="RemovePodSandbox for \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\"" Mar 19 11:49:20.034827 containerd[1759]: time="2025-03-19T11:49:20.034728098Z" level=info msg="Forcibly stopping sandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\"" Mar 19 11:49:20.034875 containerd[1759]: time="2025-03-19T11:49:20.034819799Z" level=info msg="TearDown network for sandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\" successfully" Mar 19 11:49:20.044178 containerd[1759]: time="2025-03-19T11:49:20.044141854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:49:20.044251 containerd[1759]: time="2025-03-19T11:49:20.044184155Z" level=info msg="RemovePodSandbox \"d87284c116b31b400858b7da2844590837c5507ba814c376c427253d836ae57f\" returns successfully" Mar 19 11:49:20.044539 containerd[1759]: time="2025-03-19T11:49:20.044505260Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\"" Mar 19 11:49:20.044627 containerd[1759]: time="2025-03-19T11:49:20.044603162Z" level=info msg="TearDown network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" successfully" Mar 19 11:49:20.044627 containerd[1759]: time="2025-03-19T11:49:20.044619562Z" level=info msg="StopPodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" returns successfully" Mar 19 11:49:20.044941 containerd[1759]: time="2025-03-19T11:49:20.044918067Z" level=info msg="RemovePodSandbox for \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\"" Mar 19 11:49:20.045016 containerd[1759]: time="2025-03-19T11:49:20.044958068Z" level=info msg="Forcibly stopping sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\"" Mar 19 11:49:20.045127 containerd[1759]: time="2025-03-19T11:49:20.045034369Z" level=info msg="TearDown network for sandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" successfully" Mar 19 11:49:20.053220 containerd[1759]: time="2025-03-19T11:49:20.053177105Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:49:20.053314 containerd[1759]: time="2025-03-19T11:49:20.053229606Z" level=info msg="RemovePodSandbox \"ab9de379e95816bf3214bc6b78b0073e264ee0935f1bd640eb4e055324a30ab6\" returns successfully" Mar 19 11:49:20.053709 containerd[1759]: time="2025-03-19T11:49:20.053635212Z" level=info msg="StopPodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\"" Mar 19 11:49:20.053798 containerd[1759]: time="2025-03-19T11:49:20.053739614Z" level=info msg="TearDown network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" successfully" Mar 19 11:49:20.053798 containerd[1759]: time="2025-03-19T11:49:20.053754814Z" level=info msg="StopPodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" returns successfully" Mar 19 11:49:20.054131 containerd[1759]: time="2025-03-19T11:49:20.054043219Z" level=info msg="RemovePodSandbox for \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\"" Mar 19 11:49:20.054131 containerd[1759]: time="2025-03-19T11:49:20.054072520Z" level=info msg="Forcibly stopping sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\"" Mar 19 11:49:20.054253 containerd[1759]: time="2025-03-19T11:49:20.054185521Z" level=info msg="TearDown network for sandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" successfully" Mar 19 11:49:20.064856 containerd[1759]: time="2025-03-19T11:49:20.064808598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:49:20.065000 containerd[1759]: time="2025-03-19T11:49:20.064865299Z" level=info msg="RemovePodSandbox \"74c56cd4368f3fab9f5ec19c2e0d25635d2e70c845adc8e1f865333f7429d2c5\" returns successfully" Mar 19 11:49:20.065367 containerd[1759]: time="2025-03-19T11:49:20.065341707Z" level=info msg="StopPodSandbox for \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\"" Mar 19 11:49:20.065456 containerd[1759]: time="2025-03-19T11:49:20.065442809Z" level=info msg="TearDown network for sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\" successfully" Mar 19 11:49:20.065501 containerd[1759]: time="2025-03-19T11:49:20.065458609Z" level=info msg="StopPodSandbox for \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\" returns successfully" Mar 19 11:49:20.065815 containerd[1759]: time="2025-03-19T11:49:20.065761814Z" level=info msg="RemovePodSandbox for \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\"" Mar 19 11:49:20.065891 containerd[1759]: time="2025-03-19T11:49:20.065814715Z" level=info msg="Forcibly stopping sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\"" Mar 19 11:49:20.065944 containerd[1759]: time="2025-03-19T11:49:20.065889616Z" level=info msg="TearDown network for sandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\" successfully" Mar 19 11:49:20.075544 containerd[1759]: time="2025-03-19T11:49:20.075506076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:49:20.075620 containerd[1759]: time="2025-03-19T11:49:20.075551077Z" level=info msg="RemovePodSandbox \"7ded3e3404143140589e0d0c7c7aec7b9420bc868744971dc71c16d99958295c\" returns successfully" Mar 19 11:49:20.075930 containerd[1759]: time="2025-03-19T11:49:20.075897683Z" level=info msg="StopPodSandbox for \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\"" Mar 19 11:49:20.076009 containerd[1759]: time="2025-03-19T11:49:20.075986084Z" level=info msg="TearDown network for sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\" successfully" Mar 19 11:49:20.076009 containerd[1759]: time="2025-03-19T11:49:20.076001485Z" level=info msg="StopPodSandbox for \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\" returns successfully" Mar 19 11:49:20.076797 containerd[1759]: time="2025-03-19T11:49:20.076348790Z" level=info msg="RemovePodSandbox for \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\"" Mar 19 11:49:20.076797 containerd[1759]: time="2025-03-19T11:49:20.076384491Z" level=info msg="Forcibly stopping sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\"" Mar 19 11:49:20.076797 containerd[1759]: time="2025-03-19T11:49:20.076466492Z" level=info msg="TearDown network for sandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\" successfully" Mar 19 11:49:20.082879 containerd[1759]: time="2025-03-19T11:49:20.082850999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:49:20.082986 containerd[1759]: time="2025-03-19T11:49:20.082892499Z" level=info msg="RemovePodSandbox \"6dd85ee62b6d5a4d74a9a30420f545fd512fa7ffda59218daba17d0da024781d\" returns successfully" Mar 19 11:49:20.083249 containerd[1759]: time="2025-03-19T11:49:20.083227505Z" level=info msg="StopPodSandbox for \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\"" Mar 19 11:49:20.083362 containerd[1759]: time="2025-03-19T11:49:20.083325807Z" level=info msg="TearDown network for sandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\" successfully" Mar 19 11:49:20.083362 containerd[1759]: time="2025-03-19T11:49:20.083342707Z" level=info msg="StopPodSandbox for \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\" returns successfully" Mar 19 11:49:20.083628 containerd[1759]: time="2025-03-19T11:49:20.083603411Z" level=info msg="RemovePodSandbox for \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\"" Mar 19 11:49:20.083700 containerd[1759]: time="2025-03-19T11:49:20.083630412Z" level=info msg="Forcibly stopping sandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\"" Mar 19 11:49:20.083746 containerd[1759]: time="2025-03-19T11:49:20.083703113Z" level=info msg="TearDown network for sandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\" successfully" Mar 19 11:49:20.090758 containerd[1759]: time="2025-03-19T11:49:20.090729930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
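[Editor's note] The repeated StopPodSandbox / TearDown / RemovePodSandbox cycles above, and the final one that follows, are containerd serving sandbox-cleanup requests over CRI; the "not found" warnings only mean the sandbox record was already gone when containerd tried to attach its status to the removal event. A minimal sketch, assuming containerd's CRI service on the default socket and reusing one of the sandbox IDs from the log, of issuing the same two CRI calls with the generated Go client:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI endpoint; the kubelet's own requests go to the same socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// One of the sandbox IDs being cleaned up in the log.
	id := "f0b5b5b756b8745b2a68a9f355fe73f6e96d21ca4ae0a9cd54ede1c4b367fed5"

	// Stop first (tears down the sandbox's network), then remove the sandbox record.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
}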
Mar 19 11:49:20.090878 containerd[1759]: time="2025-03-19T11:49:20.090778231Z" level=info msg="RemovePodSandbox \"1851f93218d352fd48433fa3df866d1a0c3918936c642046789e719269f27052\" returns successfully" Mar 19 11:49:20.980085 kubelet[2671]: E0319 11:49:20.980031 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:21.980572 kubelet[2671]: E0319 11:49:21.980519 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:22.981757 kubelet[2671]: E0319 11:49:22.981683 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:23.982042 kubelet[2671]: E0319 11:49:23.981979 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:24.842632 kubelet[2671]: I0319 11:49:24.842583 2671 topology_manager.go:215] "Topology Admit Handler" podUID="671d41f6-014f-449f-a44c-4cb8013ceb2b" podNamespace="default" podName="nfs-server-provisioner-0" Mar 19 11:49:24.849045 systemd[1]: Created slice kubepods-besteffort-pod671d41f6_014f_449f_a44c_4cb8013ceb2b.slice - libcontainer container kubepods-besteffort-pod671d41f6_014f_449f_a44c_4cb8013ceb2b.slice. Mar 19 11:49:24.946993 kubelet[2671]: I0319 11:49:24.946893 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/671d41f6-014f-449f-a44c-4cb8013ceb2b-data\") pod \"nfs-server-provisioner-0\" (UID: \"671d41f6-014f-449f-a44c-4cb8013ceb2b\") " pod="default/nfs-server-provisioner-0" Mar 19 11:49:24.946993 kubelet[2671]: I0319 11:49:24.946948 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s69r\" (UniqueName: \"kubernetes.io/projected/671d41f6-014f-449f-a44c-4cb8013ceb2b-kube-api-access-9s69r\") pod \"nfs-server-provisioner-0\" (UID: \"671d41f6-014f-449f-a44c-4cb8013ceb2b\") " pod="default/nfs-server-provisioner-0" Mar 19 11:49:24.982850 kubelet[2671]: E0319 11:49:24.982790 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:25.153232 containerd[1759]: time="2025-03-19T11:49:25.153105558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:671d41f6-014f-449f-a44c-4cb8013ceb2b,Namespace:default,Attempt:0,}" Mar 19 11:49:25.305309 systemd-networkd[1602]: cali60e51b789ff: Link UP Mar 19 11:49:25.306227 systemd-networkd[1602]: cali60e51b789ff: Gained carrier Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.237 [INFO][4206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.19-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 671d41f6-014f-449f-a44c-4cb8013ceb2b 1413 0 2025-03-19 11:49:24 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.8.19 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default 
ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.238 [INFO][4206] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.263 [INFO][4217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" HandleID="k8s-pod-network.50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Workload="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.274 [INFO][4217] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" HandleID="k8s-pod-network.50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Workload="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293500), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.19", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-19 11:49:25.26378337 +0000 UTC"}, Hostname:"10.200.8.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.274 [INFO][4217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.274 [INFO][4217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.274 [INFO][4217] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.19' Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.276 [INFO][4217] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" host="10.200.8.19" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.279 [INFO][4217] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.19" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.283 [INFO][4217] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="10.200.8.19" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.284 [INFO][4217] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="10.200.8.19" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.286 [INFO][4217] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="10.200.8.19" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.286 [INFO][4217] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" host="10.200.8.19" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.288 [INFO][4217] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366 Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.292 [INFO][4217] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" host="10.200.8.19" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.300 [INFO][4217] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.131/26] block=192.168.41.128/26 handle="k8s-pod-network.50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" host="10.200.8.19" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.300 [INFO][4217] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.131/26] handle="k8s-pod-network.50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" host="10.200.8.19" Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.300 [INFO][4217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
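[Editor's note] The IPAM trace above shows Calico taking the host-wide IPAM lock, confirming this node's affinity for the block 192.168.41.128/26, claiming 192.168.41.131 from it, and releasing the lock. A /26 block has 6 host bits, i.e. 64 addresses (.128 through .191 here), which a quick check with Go's net/netip confirms:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.41.128/26")
	assigned := netip.MustParseAddr("192.168.41.131")

	// 32 - 26 = 6 host bits -> 2^6 = 64 addresses in the block.
	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64
	fmt.Println("assigned in block: ", block.Contains(assigned)) // true
}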
Mar 19 11:49:25.319187 containerd[1759]: 2025-03-19 11:49:25.300 [INFO][4217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.131/26] IPv6=[] ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" HandleID="k8s-pod-network.50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Workload="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:49:25.320156 containerd[1759]: 2025-03-19 11:49:25.302 [INFO][4206] cni-plugin/k8s.go 386: Populated endpoint ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"671d41f6-014f-449f-a44c-4cb8013ceb2b", ResourceVersion:"1413", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 49, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.41.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:49:25.320156 containerd[1759]: 2025-03-19 11:49:25.302 [INFO][4206] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.131/32] ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:49:25.320156 containerd[1759]: 2025-03-19 11:49:25.302 [INFO][4206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:49:25.320156 containerd[1759]: 2025-03-19 11:49:25.306 [INFO][4206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:49:25.320432 containerd[1759]: 2025-03-19 11:49:25.307 [INFO][4206] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"671d41f6-014f-449f-a44c-4cb8013ceb2b", ResourceVersion:"1413", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 49, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.41.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"e6:c6:91:8e:86:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:49:25.320432 containerd[1759]: 2025-03-19 11:49:25.317 [INFO][4206] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:49:25.350072 containerd[1759]: time="2025-03-19T11:49:25.349957882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:49:25.350072 containerd[1759]: time="2025-03-19T11:49:25.350012383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:49:25.350072 containerd[1759]: time="2025-03-19T11:49:25.350026283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:49:25.350429 containerd[1759]: time="2025-03-19T11:49:25.350107884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:49:25.376937 systemd[1]: Started cri-containerd-50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366.scope - libcontainer container 50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366. 
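[Editor's note] The WorkloadEndpoint dump above encodes the NFS provisioner's ports as Go hex literals (Port:0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296), while the endpoint summary earlier lists the same ports in decimal ({nfs TCP 2049 ...}). Decoding them, using nothing beyond the values in the log, shows the two views agree:

package main

import "fmt"

func main() {
	ports := map[string]uint16{
		"nfs":      0x801,  // 2049
		"nlockmgr": 0x8023, // 32803
		"mountd":   0x4e50, // 20048
		"rquotad":  0x36b,  // 875
		"rpcbind":  0x6f,   // 111
		"statd":    0x296,  // 662
	}
	for name, p := range ports {
		fmt.Printf("%-8s %d\n", name, p)
	}
}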
Mar 19 11:49:25.416955 containerd[1759]: time="2025-03-19T11:49:25.416745376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:671d41f6-014f-449f-a44c-4cb8013ceb2b,Namespace:default,Attempt:0,} returns sandbox id \"50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366\"" Mar 19 11:49:25.419441 containerd[1759]: time="2025-03-19T11:49:25.418975112Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 19 11:49:25.983618 kubelet[2671]: E0319 11:49:25.983449 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:26.971981 systemd-networkd[1602]: cali60e51b789ff: Gained IPv6LL Mar 19 11:49:26.983827 kubelet[2671]: E0319 11:49:26.983787 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:27.984032 kubelet[2671]: E0319 11:49:27.983971 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:28.984218 kubelet[2671]: E0319 11:49:28.984156 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:29.984878 kubelet[2671]: E0319 11:49:29.984726 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:30.161983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598908592.mount: Deactivated successfully. Mar 19 11:49:30.985432 kubelet[2671]: E0319 11:49:30.985388 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:35.006030 kubelet[2671]: E0319 11:49:31.985947 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:35.006030 kubelet[2671]: E0319 11:49:32.986886 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:35.006030 kubelet[2671]: E0319 11:49:33.987950 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:35.006030 kubelet[2671]: E0319 11:49:34.989040 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:35.622514 systemd[1]: run-containerd-runc-k8s.io-2e6362300159bad7cfe6167e73b182198f0240d7d0993e6087ec0dd89fc23213-runc.YarYLy.mount: Deactivated successfully. 
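[Editor's note] Entries such as "var-lib-containerd-tmpmounts-containerd\x2dmount2598908592.mount: Deactivated successfully" above are systemd's transient mount units for containerd's temporary mounts: the mount path is encoded in the unit name, with "/" becoming "-" and a literal "-" inside a path component escaped as "\x2d". A small sketch that reverses just the escape seen here (systemd's full escaping rules cover more characters) and recovers the path:

package main

import (
	"fmt"
	"strings"
)

// unitToPath reverses the simple case of systemd's path escaping:
// strip the ".mount" suffix, split on "-" (the path separator in unit
// names), then turn the "\x2d" escape back into a literal "-".
func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	parts := strings.Split(name, "-")
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, `\x2d`, "-")
	}
	return "/" + strings.Join(parts, "/")
}

func main() {
	fmt.Println(unitToPath(`var-lib-containerd-tmpmounts-containerd\x2dmount2598908592.mount`))
	// Output: /var/lib/containerd/tmpmounts/containerd-mount2598908592
}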
Mar 19 11:49:35.989911 kubelet[2671]: E0319 11:49:35.989827 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:36.990704 kubelet[2671]: E0319 11:49:36.990607 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:37.991595 kubelet[2671]: E0319 11:49:37.991537 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:38.992134 kubelet[2671]: E0319 11:49:38.992070 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:39.938783 kubelet[2671]: E0319 11:49:39.938662 2671 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:39.992546 kubelet[2671]: E0319 11:49:39.992481 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:40.993118 kubelet[2671]: E0319 11:49:40.993057 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:41.012761 containerd[1759]: time="2025-03-19T11:49:41.012703061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:41.018242 containerd[1759]: time="2025-03-19T11:49:41.017978346Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Mar 19 11:49:41.055224 containerd[1759]: time="2025-03-19T11:49:41.054865843Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:41.105855 containerd[1759]: time="2025-03-19T11:49:41.105784768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:49:41.113469 containerd[1759]: time="2025-03-19T11:49:41.113416191Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 15.694408778s" Mar 19 11:49:41.113469 containerd[1759]: time="2025-03-19T11:49:41.113471392Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 19 11:49:41.122491 containerd[1759]: time="2025-03-19T11:49:41.121398920Z" level=info msg="CreateContainer within sandbox \"50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 19 11:49:41.993584 kubelet[2671]: E0319 11:49:41.993509 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:42.815604 containerd[1759]: time="2025-03-19T11:49:42.815425244Z" level=info 
msg="CreateContainer within sandbox \"50b6d8d6777d8bae5bf28ab1bfb61464cf7a8c0e41bf21d1ec041122bdc80366\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c9920320001a640e9bc121df5f371cd20c331d5de98ffc157209f6d473b3baf7\"" Mar 19 11:49:42.816519 containerd[1759]: time="2025-03-19T11:49:42.816468061Z" level=info msg="StartContainer for \"c9920320001a640e9bc121df5f371cd20c331d5de98ffc157209f6d473b3baf7\"" Mar 19 11:49:42.857417 systemd[1]: run-containerd-runc-k8s.io-c9920320001a640e9bc121df5f371cd20c331d5de98ffc157209f6d473b3baf7-runc.bk9YNE.mount: Deactivated successfully. Mar 19 11:49:42.868194 systemd[1]: Started cri-containerd-c9920320001a640e9bc121df5f371cd20c331d5de98ffc157209f6d473b3baf7.scope - libcontainer container c9920320001a640e9bc121df5f371cd20c331d5de98ffc157209f6d473b3baf7. Mar 19 11:49:42.962543 containerd[1759]: time="2025-03-19T11:49:42.962483325Z" level=info msg="StartContainer for \"c9920320001a640e9bc121df5f371cd20c331d5de98ffc157209f6d473b3baf7\" returns successfully" Mar 19 11:49:42.993972 kubelet[2671]: E0319 11:49:42.993923 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:44.955430 kubelet[2671]: I0319 11:49:43.244273 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=3.547445169 podStartE2EDuration="19.244259086s" podCreationTimestamp="2025-03-19 11:49:24 +0000 UTC" firstStartedPulling="2025-03-19 11:49:25.418706908 +0000 UTC m=+65.848386107" lastFinishedPulling="2025-03-19 11:49:41.115520825 +0000 UTC m=+81.545200024" observedRunningTime="2025-03-19 11:49:43.244124484 +0000 UTC m=+83.673803783" watchObservedRunningTime="2025-03-19 11:49:43.244259086 +0000 UTC m=+83.673938285" Mar 19 11:49:44.955430 kubelet[2671]: E0319 11:49:43.994729 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:44.995608 kubelet[2671]: E0319 11:49:44.995536 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:45.996090 kubelet[2671]: E0319 11:49:45.996022 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:46.996607 kubelet[2671]: E0319 11:49:46.996527 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:47.132215 waagent[2014]: 2025-03-19T11:49:47.132132Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 19 11:49:47.139605 waagent[2014]: 2025-03-19T11:49:47.139542Z INFO ExtHandler Mar 19 11:49:47.139750 waagent[2014]: 2025-03-19T11:49:47.139667Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c2970011-9864-4df8-a6ba-3e9b6bb341fd eTag: 11238497675333984701 source: Fabric] Mar 19 11:49:47.140132 waagent[2014]: 2025-03-19T11:49:47.140073Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 19 11:49:47.140760 waagent[2014]: 2025-03-19T11:49:47.140700Z INFO ExtHandler Mar 19 11:49:47.140881 waagent[2014]: 2025-03-19T11:49:47.140832Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 19 11:49:47.195471 waagent[2014]: 2025-03-19T11:49:47.195398Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 19 11:49:47.277333 waagent[2014]: 2025-03-19T11:49:47.277158Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C24B3FB0DED2F2FB639789A8A8D755B8B0424711', 'hasPrivateKey': True} Mar 19 11:49:47.278009 waagent[2014]: 2025-03-19T11:49:47.277727Z INFO ExtHandler Downloaded certificate {'thumbprint': '0DEE508D8B968A25925DCF1A52553B62266784E0', 'hasPrivateKey': False} Mar 19 11:49:47.278407 waagent[2014]: 2025-03-19T11:49:47.278347Z INFO ExtHandler Fetch goal state completed Mar 19 11:49:47.278828 waagent[2014]: 2025-03-19T11:49:47.278750Z INFO ExtHandler ExtHandler Mar 19 11:49:47.278933 waagent[2014]: 2025-03-19T11:49:47.278880Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 4fd53443-c0a9-4814-859b-c12880d41d80 correlation f1a48f3a-fbe7-44db-ab05-794714bb00fa created: 2025-03-19T11:49:38.083778Z] Mar 19 11:49:47.279292 waagent[2014]: 2025-03-19T11:49:47.279216Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 19 11:49:47.280158 waagent[2014]: 2025-03-19T11:49:47.280079Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Mar 19 11:49:47.997114 kubelet[2671]: E0319 11:49:47.997048 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:48.997970 kubelet[2671]: E0319 11:49:48.997908 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:49.998302 kubelet[2671]: E0319 11:49:49.998242 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:50.999299 kubelet[2671]: E0319 11:49:50.999237 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:52.000139 kubelet[2671]: E0319 11:49:52.000080 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:53.000332 kubelet[2671]: E0319 11:49:53.000266 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:54.000536 kubelet[2671]: E0319 11:49:54.000468 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:55.001249 kubelet[2671]: E0319 11:49:55.001192 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:56.002128 kubelet[2671]: E0319 11:49:56.002072 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:57.002925 kubelet[2671]: E0319 11:49:57.002865 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:58.003224 kubelet[2671]: E0319 11:49:58.003160 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:59.004030 
kubelet[2671]: E0319 11:49:59.003963 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:49:59.938739 kubelet[2671]: E0319 11:49:59.938677 2671 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:00.004743 kubelet[2671]: E0319 11:50:00.004675 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:01.004926 kubelet[2671]: E0319 11:50:01.004859 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:02.005924 kubelet[2671]: E0319 11:50:02.005859 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:03.006907 kubelet[2671]: E0319 11:50:03.006846 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:04.007667 kubelet[2671]: E0319 11:50:04.007624 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:05.007855 kubelet[2671]: E0319 11:50:05.007758 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:06.008415 kubelet[2671]: E0319 11:50:06.008353 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:07.008783 kubelet[2671]: E0319 11:50:07.008710 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:08.009282 kubelet[2671]: E0319 11:50:08.009217 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:09.010248 kubelet[2671]: E0319 11:50:09.010193 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:10.010799 kubelet[2671]: E0319 11:50:10.010727 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:11.011741 kubelet[2671]: E0319 11:50:11.011678 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:12.012455 kubelet[2671]: E0319 11:50:12.012393 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:12.677039 kubelet[2671]: I0319 11:50:12.676985 2671 topology_manager.go:215] "Topology Admit Handler" podUID="32cfbc13-59af-4d77-a6a6-c9e81eadff33" podNamespace="default" podName="test-pod-1" Mar 19 11:50:12.684130 systemd[1]: Created slice kubepods-besteffort-pod32cfbc13_59af_4d77_a6a6_c9e81eadff33.slice - libcontainer container kubepods-besteffort-pod32cfbc13_59af_4d77_a6a6_c9e81eadff33.slice. 
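The recurring kubelet error above (file_linux.go:61, path "/etc/kubernetes/manifests") comes from kubelet's file-based static-pod source: it re-checks the configured staticPodPath on each pass and, while the directory is absent, logs the message and skips it. A minimal, hypothetical Go sketch of that behavior follows — it is not kubelet's actual code; the function name and the one-second interval (chosen to mirror the cadence seen above) are assumptions for illustration.

```go
package main

import (
	"log"
	"os"
	"time"
)

// watchManifests loosely mimics kubelet's file-based static-pod source:
// re-check the configured manifests directory on every tick and ignore it
// while the path does not exist (the situation producing the log spam above).
func watchManifests(path string, interval time.Duration) {
	for range time.Tick(interval) {
		if _, err := os.Stat(path); os.IsNotExist(err) {
			log.Printf("Unable to read config path %q: path does not exist, ignoring", path)
			continue
		}
		// A real implementation would read and decode any pod manifests found here.
	}
}

func main() {
	// Path taken from the log; the interval is an illustrative assumption.
	watchManifests("/etc/kubernetes/manifests", time.Second)
}
```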
Mar 19 11:50:12.826441 kubelet[2671]: I0319 11:50:12.826297 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2043a9d0-0c63-4bb2-a3f6-2943ec5efa6b\" (UniqueName: \"kubernetes.io/nfs/32cfbc13-59af-4d77-a6a6-c9e81eadff33-pvc-2043a9d0-0c63-4bb2-a3f6-2943ec5efa6b\") pod \"test-pod-1\" (UID: \"32cfbc13-59af-4d77-a6a6-c9e81eadff33\") " pod="default/test-pod-1" Mar 19 11:50:12.826441 kubelet[2671]: I0319 11:50:12.826373 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d2rc\" (UniqueName: \"kubernetes.io/projected/32cfbc13-59af-4d77-a6a6-c9e81eadff33-kube-api-access-9d2rc\") pod \"test-pod-1\" (UID: \"32cfbc13-59af-4d77-a6a6-c9e81eadff33\") " pod="default/test-pod-1" Mar 19 11:50:12.992133 kernel: FS-Cache: Loaded Mar 19 11:50:13.013292 kubelet[2671]: E0319 11:50:13.013241 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:13.087577 kernel: RPC: Registered named UNIX socket transport module. Mar 19 11:50:13.087698 kernel: RPC: Registered udp transport module. Mar 19 11:50:13.087718 kernel: RPC: Registered tcp transport module. Mar 19 11:50:13.091300 kernel: RPC: Registered tcp-with-tls transport module. Mar 19 11:50:13.091380 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Mar 19 11:50:13.377419 kernel: NFS: Registering the id_resolver key type Mar 19 11:50:13.377551 kernel: Key type id_resolver registered Mar 19 11:50:13.377571 kernel: Key type id_legacy registered Mar 19 11:50:13.586992 nfsidmap[4473]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-a-c3eb9cf52f' Mar 19 11:50:13.608568 nfsidmap[4474]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-a-c3eb9cf52f' Mar 19 11:50:13.888602 containerd[1759]: time="2025-03-19T11:50:13.888284113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:32cfbc13-59af-4d77-a6a6-c9e81eadff33,Namespace:default,Attempt:0,}" Mar 19 11:50:14.015982 kubelet[2671]: E0319 11:50:14.015885 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:14.025691 systemd-networkd[1602]: cali5ec59c6bf6e: Link UP Mar 19 11:50:14.027362 systemd-networkd[1602]: cali5ec59c6bf6e: Gained carrier Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:13.955 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.19-k8s-test--pod--1-eth0 default 32cfbc13-59af-4d77-a6a6-c9e81eadff33 1546 0 2025-03-19 11:49:26 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.19 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:13.955 [INFO][4476] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:13.984 
[INFO][4487] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" HandleID="k8s-pod-network.be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Workload="10.200.8.19-k8s-test--pod--1-eth0" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:13.994 [INFO][4487] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" HandleID="k8s-pod-network.be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Workload="10.200.8.19-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315280), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.19", "pod":"test-pod-1", "timestamp":"2025-03-19 11:50:13.984366831 +0000 UTC"}, Hostname:"10.200.8.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:13.994 [INFO][4487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:13.995 [INFO][4487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:13.995 [INFO][4487] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.19' Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:13.996 [INFO][4487] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" host="10.200.8.19" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:13.999 [INFO][4487] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.19" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.004 [INFO][4487] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="10.200.8.19" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.005 [INFO][4487] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="10.200.8.19" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.007 [INFO][4487] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="10.200.8.19" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.007 [INFO][4487] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" host="10.200.8.19" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.008 [INFO][4487] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.012 [INFO][4487] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" host="10.200.8.19" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.019 [INFO][4487] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.132/26] block=192.168.41.128/26 handle="k8s-pod-network.be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" host="10.200.8.19" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.019 [INFO][4487] ipam/ipam.go 847: Auto-assigned 1 out of 1 
IPv4s: [192.168.41.132/26] handle="k8s-pod-network.be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" host="10.200.8.19" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.019 [INFO][4487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.019 [INFO][4487] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.132/26] IPv6=[] ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" HandleID="k8s-pod-network.be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Workload="10.200.8.19-k8s-test--pod--1-eth0" Mar 19 11:50:14.038228 containerd[1759]: 2025-03-19 11:50:14.021 [INFO][4476] cni-plugin/k8s.go 386: Populated endpoint ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"32cfbc13-59af-4d77-a6a6-c9e81eadff33", ResourceVersion:"1546", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 49, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:50:14.042232 containerd[1759]: 2025-03-19 11:50:14.021 [INFO][4476] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.132/32] ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0" Mar 19 11:50:14.042232 containerd[1759]: 2025-03-19 11:50:14.022 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0" Mar 19 11:50:14.042232 containerd[1759]: 2025-03-19 11:50:14.028 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0" Mar 19 11:50:14.042232 containerd[1759]: 2025-03-19 11:50:14.028 [INFO][4476] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"32cfbc13-59af-4d77-a6a6-c9e81eadff33", ResourceVersion:"1546", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 49, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"b6:77:62:b0:39:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:50:14.042232 containerd[1759]: 2025-03-19 11:50:14.037 [INFO][4476] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0" Mar 19 11:50:14.071450 containerd[1759]: time="2025-03-19T11:50:14.071303904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:50:14.071450 containerd[1759]: time="2025-03-19T11:50:14.071347305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:50:14.071450 containerd[1759]: time="2025-03-19T11:50:14.071362605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:50:14.071843 containerd[1759]: time="2025-03-19T11:50:14.071483807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:50:14.100950 systemd[1]: Started cri-containerd-be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc.scope - libcontainer container be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc. 
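The Calico IPAM trace above walks the assignment path for test-pod-1: host 10.200.8.19 already holds an affinity for block 192.168.41.128/26, so the plugin loads that block and claims 192.168.41.132/26 from it. A quick illustrative check (Go standard library only, not Calico code) that the claimed address really lies inside the affine block:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and address copied from the Calico IPAM log entries above.
	_, block, err := net.ParseCIDR("192.168.41.128/26")
	if err != nil {
		panic(err)
	}
	ip := net.ParseIP("192.168.41.132")
	fmt.Printf("%v contains %v: %v\n", block, ip, block.Contains(ip)) // prints: true
}
```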
Mar 19 11:50:14.141635 containerd[1759]: time="2025-03-19T11:50:14.141397411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:32cfbc13-59af-4d77-a6a6-c9e81eadff33,Namespace:default,Attempt:0,} returns sandbox id \"be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc\"" Mar 19 11:50:14.143616 containerd[1759]: time="2025-03-19T11:50:14.143377743Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 19 11:50:14.496293 containerd[1759]: time="2025-03-19T11:50:14.496236116Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:50:14.499106 containerd[1759]: time="2025-03-19T11:50:14.499050260Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 19 11:50:14.501687 containerd[1759]: time="2025-03-19T11:50:14.501652802Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 358.236658ms" Mar 19 11:50:14.501687 containerd[1759]: time="2025-03-19T11:50:14.501689002Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 19 11:50:14.503825 containerd[1759]: time="2025-03-19T11:50:14.503795335Z" level=info msg="CreateContainer within sandbox \"be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 19 11:50:14.537553 containerd[1759]: time="2025-03-19T11:50:14.537510768Z" level=info msg="CreateContainer within sandbox \"be6ac40645917aeea104d7e90c0224e576a16cd5711444acbfee29d4b0f105dc\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"97db2384e2dcd6821839672ffa221c64fa49b105ab99c31d9b96903c2e98e078\"" Mar 19 11:50:14.538450 containerd[1759]: time="2025-03-19T11:50:14.538076177Z" level=info msg="StartContainer for \"97db2384e2dcd6821839672ffa221c64fa49b105ab99c31d9b96903c2e98e078\"" Mar 19 11:50:14.568958 systemd[1]: Started cri-containerd-97db2384e2dcd6821839672ffa221c64fa49b105ab99c31d9b96903c2e98e078.scope - libcontainer container 97db2384e2dcd6821839672ffa221c64fa49b105ab99c31d9b96903c2e98e078. 
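Containerd reports pulling ghcr.io/flatcar/nginx:latest in 358.236658ms. That figure is containerd's own internal measurement, and it roughly matches the spread between the PullImage request and the "returns image reference" log timestamps above. An illustrative snippet using those two timestamps (copied from the log; this is not an official containerd API):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the containerd entries above.
	start, err := time.Parse(time.RFC3339Nano, "2025-03-19T11:50:14.143377743Z")
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2025-03-19T11:50:14.501689002Z")
	if err != nil {
		panic(err)
	}
	// ~358.3ms, close to the 358.236658ms containerd reports internally.
	fmt.Println(done.Sub(start))
}
```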
Mar 19 11:50:14.598876 containerd[1759]: time="2025-03-19T11:50:14.598799136Z" level=info msg="StartContainer for \"97db2384e2dcd6821839672ffa221c64fa49b105ab99c31d9b96903c2e98e078\" returns successfully" Mar 19 11:50:15.016930 kubelet[2671]: E0319 11:50:15.016862 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:15.312123 kubelet[2671]: I0319 11:50:15.311957 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=48.952572124 podStartE2EDuration="49.311937s" podCreationTimestamp="2025-03-19 11:49:26 +0000 UTC" firstStartedPulling="2025-03-19 11:50:14.143021237 +0000 UTC m=+114.572700436" lastFinishedPulling="2025-03-19 11:50:14.502386113 +0000 UTC m=+114.932065312" observedRunningTime="2025-03-19 11:50:15.311788998 +0000 UTC m=+115.741468197" watchObservedRunningTime="2025-03-19 11:50:15.311937 +0000 UTC m=+115.741616199" Mar 19 11:50:15.739987 systemd-networkd[1602]: cali5ec59c6bf6e: Gained IPv6LL Mar 19 11:50:16.017656 kubelet[2671]: E0319 11:50:16.017487 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:17.018142 kubelet[2671]: E0319 11:50:17.018086 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:18.018627 kubelet[2671]: E0319 11:50:18.018558 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:19.019154 kubelet[2671]: E0319 11:50:19.019089 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:19.938654 kubelet[2671]: E0319 11:50:19.938586 2671 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:20.019933 kubelet[2671]: E0319 11:50:20.019892 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:21.020263 kubelet[2671]: E0319 11:50:21.020195 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:22.021405 kubelet[2671]: E0319 11:50:22.021339 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:50:23.021844 kubelet[2671]: E0319 11:50:23.021756 2671 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
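The pod_startup_latency_tracker entry above reports podStartE2EDuration="49.311937s" and podStartSLOduration=48.952572124 for default/test-pod-1. The SLO figure is the end-to-end duration with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. An illustrative check using only the values logged above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Values copied from the kubelet pod_startup_latency_tracker entry above.
	firstPull, _ := time.Parse(layout, "2025-03-19 11:50:14.143021237 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2025-03-19 11:50:14.502386113 +0000 UTC")
	e2e, _ := time.ParseDuration("49.311937s") // podStartE2EDuration

	slo := e2e - lastPull.Sub(firstPull) // subtract the image-pull window
	fmt.Println(slo.Seconds())           // 48.952572124, the logged podStartSLOduration
}
```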