Mar 17 17:58:48.051359 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 17:58:48.051397 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:58:48.051416 kernel: BIOS-provided physical RAM map:
Mar 17 17:58:48.051428 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 17:58:48.051440 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Mar 17 17:58:48.051453 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Mar 17 17:58:48.051470 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Mar 17 17:58:48.051485 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Mar 17 17:58:48.051501 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Mar 17 17:58:48.051513 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Mar 17 17:58:48.051527 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Mar 17 17:58:48.051541 kernel: printk: bootconsole [earlyser0] enabled
Mar 17 17:58:48.051554 kernel: NX (Execute Disable) protection: active
Mar 17 17:58:48.051569 kernel: APIC: Static calls initialized
Mar 17 17:58:48.051591 kernel: efi: EFI v2.7 by Microsoft
Mar 17 17:58:48.051607 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018
Mar 17 17:58:48.051622 kernel: random: crng init done
Mar 17 17:58:48.051637 kernel: secureboot: Secure boot disabled
Mar 17 17:58:48.051652 kernel: SMBIOS 3.1.0 present.
Mar 17 17:58:48.051665 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Mar 17 17:58:48.051681 kernel: Hypervisor detected: Microsoft Hyper-V
Mar 17 17:58:48.051694 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Mar 17 17:58:48.051705 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0
Mar 17 17:58:48.054743 kernel: Hyper-V: Nested features: 0x1e0101
Mar 17 17:58:48.054762 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Mar 17 17:58:48.054770 kernel: Hyper-V: Using hypercall for remote TLB flush
Mar 17 17:58:48.054778 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 17 17:58:48.054789 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 17 17:58:48.054797 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Mar 17 17:58:48.054806 kernel: tsc: Detected 2593.907 MHz processor
Mar 17 17:58:48.054815 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:58:48.054823 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:58:48.054833 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Mar 17 17:58:48.054844 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 17 17:58:48.054853 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:58:48.054860 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Mar 17 17:58:48.054871 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Mar 17 17:58:48.054879 kernel: Using GB pages for direct mapping
Mar 17 17:58:48.054888 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:58:48.054897 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Mar 17 17:58:48.054911 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:58:48.054921 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:58:48.054932 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Mar 17 17:58:48.054939 kernel: ACPI: FACS 0x000000003FFFE000 000040
Mar 17 17:58:48.054950 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:58:48.054959 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:58:48.054971 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:58:48.054981 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:58:48.054991 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:58:48.054999 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:58:48.055010 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:58:48.055018 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Mar 17 17:58:48.055028 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Mar 17 17:58:48.055037 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Mar 17 17:58:48.055047 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Mar 17 17:58:48.055058 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Mar 17 17:58:48.055071 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Mar 17 17:58:48.055080 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Mar 17 17:58:48.055090 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Mar 17 17:58:48.055099 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Mar 17 17:58:48.055107 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Mar 17 17:58:48.055117 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 17:58:48.055125 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 17:58:48.055136 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 17 17:58:48.055143 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Mar 17 17:58:48.055156 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Mar 17 17:58:48.055164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 17 17:58:48.055175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 17 17:58:48.055183 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 17 17:58:48.055194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 17 17:58:48.055202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 17 17:58:48.055213 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 17 17:58:48.055221 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 17 17:58:48.055234 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Mar 17 17:58:48.055242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Mar 17 17:58:48.055251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Mar 17 17:58:48.055260 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Mar 17 17:58:48.055268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Mar 17 17:58:48.055278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Mar 17 17:58:48.055286 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Mar 17 17:58:48.055297 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Mar 17 17:58:48.055305 kernel: Zone ranges:
Mar 17 17:58:48.055318 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:58:48.055325 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 17 17:58:48.055336 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Mar 17 17:58:48.055344 kernel: Movable zone start for each node
Mar 17 17:58:48.055355 kernel: Early memory node ranges
Mar 17 17:58:48.055362 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 17 17:58:48.055372 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Mar 17 17:58:48.055381 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Mar 17 17:58:48.055389 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Mar 17 17:58:48.055401 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Mar 17 17:58:48.055412 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:58:48.055422 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 17 17:58:48.055431 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Mar 17 17:58:48.055441 kernel: ACPI: PM-Timer IO Port: 0x408
Mar 17 17:58:48.055451 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Mar 17 17:58:48.055461 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:58:48.055469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:58:48.055480 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:58:48.055490 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Mar 17 17:58:48.055501 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 17:58:48.055509 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Mar 17 17:58:48.055520 kernel: Booting paravirtualized kernel on Hyper-V
Mar 17 17:58:48.055528 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:58:48.055538 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 17:58:48.055547 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 17:58:48.055556 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 17:58:48.055565 kernel: pcpu-alloc: [0] 0 1
Mar 17 17:58:48.055576 kernel: Hyper-V: PV spinlocks enabled
Mar 17 17:58:48.055585 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:58:48.055595 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:58:48.055605 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:58:48.055613 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 17 17:58:48.055623 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:58:48.055630 kernel: Fallback order for Node 0: 0
Mar 17 17:58:48.055641 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Mar 17 17:58:48.055651 kernel: Policy zone: Normal
Mar 17 17:58:48.055669 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:58:48.055680 kernel: software IO TLB: area num 2.
Mar 17 17:58:48.055691 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 312164K reserved, 0K cma-reserved)
Mar 17 17:58:48.055702 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:58:48.055710 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 17:58:48.055735 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:58:48.055746 kernel: Dynamic Preempt: voluntary
Mar 17 17:58:48.055755 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:58:48.055769 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:58:48.055781 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:58:48.055794 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:58:48.055806 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:58:48.055816 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:58:48.055828 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:58:48.055836 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:58:48.055848 kernel: Using NULL legacy PIC
Mar 17 17:58:48.055858 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Mar 17 17:58:48.055870 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:58:48.055878 kernel: Console: colour dummy device 80x25
Mar 17 17:58:48.055889 kernel: printk: console [tty1] enabled
Mar 17 17:58:48.055897 kernel: printk: console [ttyS0] enabled
Mar 17 17:58:48.055908 kernel: printk: bootconsole [earlyser0] disabled
Mar 17 17:58:48.055916 kernel: ACPI: Core revision 20230628
Mar 17 17:58:48.055928 kernel: Failed to register legacy timer interrupt
Mar 17 17:58:48.055936 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:58:48.055946 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 17 17:58:48.055957 kernel: Hyper-V: Using IPI hypercalls
Mar 17 17:58:48.055965 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Mar 17 17:58:48.055977 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Mar 17 17:58:48.055985 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Mar 17 17:58:48.055996 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Mar 17 17:58:48.056006 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Mar 17 17:58:48.056016 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Mar 17 17:58:48.056029 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Mar 17 17:58:48.056042 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 17 17:58:48.056059 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 17 17:58:48.056071 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:58:48.056081 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:58:48.056090 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:58:48.056101 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:58:48.056112 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 17 17:58:48.056122 kernel: RETBleed: Vulnerable
Mar 17 17:58:48.056133 kernel: Speculative Store Bypass: Vulnerable
Mar 17 17:58:48.056142 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:58:48.056155 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:58:48.056165 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:58:48.056174 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:58:48.056185 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:58:48.056196 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 17 17:58:48.056206 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 17 17:58:48.056218 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 17 17:58:48.056231 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:58:48.056243 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 17 17:58:48.056257 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 17 17:58:48.056270 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 17 17:58:48.056287 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Mar 17 17:58:48.056301 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:58:48.056316 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:58:48.056330 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:58:48.056345 kernel: landlock: Up and running.
Mar 17 17:58:48.056359 kernel: SELinux: Initializing.
Mar 17 17:58:48.056374 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:58:48.056389 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:58:48.056404 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 17 17:58:48.056419 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:58:48.056433 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:58:48.056451 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:58:48.056465 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 17 17:58:48.056479 kernel: signal: max sigframe size: 3632
Mar 17 17:58:48.056494 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:58:48.056509 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:58:48.056523 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 17:58:48.056537 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:58:48.056551 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:58:48.056566 kernel: .... node #0, CPUs: #1
Mar 17 17:58:48.056584 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Mar 17 17:58:48.056599 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 17 17:58:48.056614 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:58:48.056628 kernel: smpboot: Max logical packages: 1
Mar 17 17:58:48.056643 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Mar 17 17:58:48.056657 kernel: devtmpfs: initialized
Mar 17 17:58:48.056672 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:58:48.056687 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Mar 17 17:58:48.056704 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:58:48.057427 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:58:48.057447 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:58:48.057462 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:58:48.057476 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:58:48.057491 kernel: audit: type=2000 audit(1742234326.028:1): state=initialized audit_enabled=0 res=1
Mar 17 17:58:48.057505 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:58:48.057519 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:58:48.057534 kernel: cpuidle: using governor menu
Mar 17 17:58:48.057552 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:58:48.057568 kernel: dca service started, version 1.12.1
Mar 17 17:58:48.057583 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Mar 17 17:58:48.057596 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:58:48.057610 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:58:48.057624 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:58:48.057638 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:58:48.057652 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:58:48.057666 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:58:48.057685 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:58:48.057699 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:58:48.057725 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:58:48.058481 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:58:48.058498 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:58:48.058514 kernel: ACPI: Interpreter enabled
Mar 17 17:58:48.058528 kernel: ACPI: PM: (supports S0 S5)
Mar 17 17:58:48.058543 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:58:48.058559 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:58:48.058578 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 17 17:58:48.058593 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Mar 17 17:58:48.058608 kernel: iommu: Default domain type: Translated
Mar 17 17:58:48.058622 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:58:48.058637 kernel: efivars: Registered efivars operations
Mar 17 17:58:48.058652 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:58:48.058666 kernel: PCI: System does not support PCI
Mar 17 17:58:48.058680 kernel: vgaarb: loaded
Mar 17 17:58:48.058695 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Mar 17 17:58:48.058735 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:58:48.058748 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:58:48.058760 kernel: pnp: PnP ACPI init
Mar 17 17:58:48.058773 kernel: pnp: PnP ACPI: found 3 devices
Mar 17 17:58:48.058786 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:58:48.058799 kernel: NET: Registered PF_INET protocol family
Mar 17 17:58:48.058812 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 17:58:48.058826 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 17 17:58:48.058839 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:58:48.058857 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:58:48.058871 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 17 17:58:48.058886 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 17 17:58:48.058902 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 17 17:58:48.058917 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 17 17:58:48.058931 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:58:48.058945 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:58:48.058961 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:58:48.058976 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 17 17:58:48.058993 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Mar 17 17:58:48.059006 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 17:58:48.059019 kernel: Initialise system trusted keyrings
Mar 17 17:58:48.059031 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 17 17:58:48.059044 kernel: Key type asymmetric registered
Mar 17 17:58:48.059059 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:58:48.059072 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:58:48.059085 kernel: io scheduler mq-deadline registered
Mar 17 17:58:48.059098 kernel: io scheduler kyber registered
Mar 17 17:58:48.059115 kernel: io scheduler bfq registered
Mar 17 17:58:48.059130 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:58:48.059144 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:58:48.059159 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:58:48.059173 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 17 17:58:48.059186 kernel: i8042: PNP: No PS/2 controller found.
Mar 17 17:58:48.059367 kernel: rtc_cmos 00:02: registered as rtc0
Mar 17 17:58:48.059491 kernel: rtc_cmos 00:02: setting system clock to 2025-03-17T17:58:47 UTC (1742234327)
Mar 17 17:58:48.059612 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Mar 17 17:58:48.059630 kernel: intel_pstate: CPU model not supported
Mar 17 17:58:48.059644 kernel: efifb: probing for efifb
Mar 17 17:58:48.059657 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 17 17:58:48.059672 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 17 17:58:48.059686 kernel: efifb: scrolling: redraw
Mar 17 17:58:48.059699 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 17:58:48.060737 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 17:58:48.060754 kernel: fb0: EFI VGA frame buffer device
Mar 17 17:58:48.060770 kernel: pstore: Using crash dump compression: deflate
Mar 17 17:58:48.060779 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 17 17:58:48.060790 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:58:48.060801 kernel: Segment Routing with IPv6
Mar 17 17:58:48.060810 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:58:48.060821 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:58:48.060830 kernel: Key type dns_resolver registered
Mar 17 17:58:48.060841 kernel: IPI shorthand broadcast: enabled
Mar 17 17:58:48.060850 kernel: sched_clock: Marking stable (784002900, 42653800)->(1021588900, -194932200)
Mar 17 17:58:48.060863 kernel: registered taskstats version 1
Mar 17 17:58:48.060872 kernel: Loading compiled-in X.509 certificates
Mar 17 17:58:48.060883 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 17:58:48.060891 kernel: Key type .fscrypt registered
Mar 17 17:58:48.060902 kernel: Key type fscrypt-provisioning registered
Mar 17 17:58:48.060911 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:58:48.060922 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:58:48.060931 kernel: ima: No architecture policies found
Mar 17 17:58:48.060944 kernel: clk: Disabling unused clocks
Mar 17 17:58:48.060952 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 17:58:48.060963 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 17:58:48.060972 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 17:58:48.060983 kernel: Run /init as init process
Mar 17 17:58:48.060992 kernel: with arguments:
Mar 17 17:58:48.061003 kernel: /init
Mar 17 17:58:48.061011 kernel: with environment:
Mar 17 17:58:48.061022 kernel: HOME=/
Mar 17 17:58:48.061030 kernel: TERM=linux
Mar 17 17:58:48.061043 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:58:48.061053 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:58:48.061067 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:58:48.061078 systemd[1]: Detected virtualization microsoft.
Mar 17 17:58:48.061088 systemd[1]: Detected architecture x86-64.
Mar 17 17:58:48.061098 systemd[1]: Running in initrd.
Mar 17 17:58:48.061108 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:58:48.061123 systemd[1]: Hostname set to <localhost>.
Mar 17 17:58:48.061133 systemd[1]: Initializing machine ID from random generator.
Mar 17 17:58:48.061143 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:58:48.061154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:58:48.061164 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:58:48.061177 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:58:48.061186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:58:48.061198 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:58:48.061210 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:58:48.061222 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:58:48.061233 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:58:48.061243 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:58:48.061254 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:58:48.061264 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:58:48.061275 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:58:48.061286 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:58:48.061298 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:58:48.061307 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:58:48.061319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:58:48.061328 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:58:48.061340 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:58:48.061349 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:58:48.061361 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:58:48.061370 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:58:48.061383 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:58:48.061393 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:58:48.061404 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:58:48.061414 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:58:48.061424 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:58:48.061435 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:58:48.061445 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:58:48.061478 systemd-journald[177]: Collecting audit messages is disabled.
Mar 17 17:58:48.061506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:58:48.061517 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:58:48.061528 systemd-journald[177]: Journal started
Mar 17 17:58:48.061556 systemd-journald[177]: Runtime Journal (/run/log/journal/9724ef1785d643f3b3e5928a95c9fbee) is 8M, max 158.8M, 150.8M free.
Mar 17 17:58:48.072738 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:58:48.075075 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:58:48.078212 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:58:48.080320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:58:48.095867 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:58:48.098726 systemd-modules-load[179]: Inserted module 'overlay'
Mar 17 17:58:48.108930 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:58:48.117668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:58:48.146495 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:58:48.153866 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:58:48.153892 kernel: Bridge firewalling registered
Mar 17 17:58:48.155756 systemd-modules-load[179]: Inserted module 'br_netfilter'
Mar 17 17:58:48.161050 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:58:48.166745 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:58:48.166984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:58:48.180961 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:58:48.188876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:58:48.193903 dracut-cmdline[206]: dracut-dracut-053 Mar 17 17:58:48.195662 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c Mar 17 17:58:48.214907 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:58:48.233225 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:58:48.238634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:58:48.252835 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:58:48.293731 kernel: SCSI subsystem initialized Mar 17 17:58:48.298781 systemd-resolved[272]: Positive Trust Anchors: Mar 17 17:58:48.298797 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:58:48.298853 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:58:48.325976 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:58:48.302328 systemd-resolved[272]: Defaulting to hostname 'linux'. 
Mar 17 17:58:48.303298 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:58:48.323552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:58:48.344738 kernel: iscsi: registered transport (tcp) Mar 17 17:58:48.368447 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:58:48.368539 kernel: QLogic iSCSI HBA Driver Mar 17 17:58:48.404484 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:58:48.413957 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:58:48.442602 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:58:48.442689 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:58:48.445684 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:58:48.485747 kernel: raid6: avx512x4 gen() 18386 MB/s Mar 17 17:58:48.504725 kernel: raid6: avx512x2 gen() 18391 MB/s Mar 17 17:58:48.523726 kernel: raid6: avx512x1 gen() 18355 MB/s Mar 17 17:58:48.542727 kernel: raid6: avx2x4 gen() 18319 MB/s Mar 17 17:58:48.560726 kernel: raid6: avx2x2 gen() 18380 MB/s Mar 17 17:58:48.582066 kernel: raid6: avx2x1 gen() 13819 MB/s Mar 17 17:58:48.582110 kernel: raid6: using algorithm avx512x2 gen() 18391 MB/s Mar 17 17:58:48.602882 kernel: raid6: .... xor() 30253 MB/s, rmw enabled Mar 17 17:58:48.602911 kernel: raid6: using avx512x2 recovery algorithm Mar 17 17:58:48.624735 kernel: xor: automatically using best checksumming function avx Mar 17 17:58:48.764738 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:58:48.774207 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:58:48.783950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:58:48.801270 systemd-udevd[397]: Using default interface naming scheme 'v255'. 
Mar 17 17:58:48.806311 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:58:48.819352 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:58:48.831710 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Mar 17 17:58:48.857868 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:58:48.864927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:58:48.906101 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:58:48.920889 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:58:48.951252 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:58:48.958342 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:58:48.961404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:58:48.970107 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:58:48.981937 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:58:49.005048 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:58:49.011223 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 17:58:49.022620 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:58:49.025992 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:58:49.036737 kernel: AVX2 version of gcm_enc/dec engaged. Mar 17 17:58:49.036783 kernel: AES CTR mode by8 optimization enabled Mar 17 17:58:49.036775 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:58:49.040080 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 17 17:58:49.042323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:58:49.054563 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:58:49.063778 kernel: hv_vmbus: Vmbus version:5.2 Mar 17 17:58:49.071080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:58:49.086389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:58:49.089879 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:58:49.098385 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:58:49.107432 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 17 17:58:49.107461 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Mar 17 17:58:49.109956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:58:49.120118 kernel: PTP clock support registered Mar 17 17:58:49.129739 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 17 17:58:49.139139 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 17 17:58:49.139459 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:58:49.144331 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:58:49.154740 kernel: hv_vmbus: registering driver hv_netvsc Mar 17 17:58:49.156031 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 17 17:58:49.164808 kernel: hv_vmbus: registering driver hv_storvsc Mar 17 17:58:49.170735 kernel: hv_vmbus: registering driver hid_hyperv Mar 17 17:58:49.176518 kernel: hv_utils: Registering HyperV Utility Driver Mar 17 17:58:49.176560 kernel: hv_vmbus: registering driver hv_utils Mar 17 17:58:49.185538 kernel: scsi host0: storvsc_host_t Mar 17 17:58:49.185825 kernel: scsi host1: storvsc_host_t Mar 17 17:58:49.185993 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 17 17:58:49.189730 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 17 17:58:49.189773 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 17 17:58:49.198692 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Mar 17 17:58:49.198747 kernel: hv_utils: Heartbeat IC version 3.0 Mar 17 17:58:49.198765 kernel: hv_utils: Shutdown IC version 3.2 Mar 17 17:58:49.196250 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:58:50.054414 kernel: hv_utils: TimeSync IC version 4.0 Mar 17 17:58:50.045224 systemd-resolved[272]: Clock change detected. Flushing caches. 
Mar 17 17:58:50.072062 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 17 17:58:50.074185 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:58:50.074207 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 17 17:58:50.088862 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 17 17:58:50.102048 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 17 17:58:50.102243 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 17 17:58:50.102403 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 17 17:58:50.104927 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 17 17:58:50.105126 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:58:50.105149 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 17 17:58:50.196749 kernel: hv_netvsc 6045bdf1-2ce2-6045-bdf1-2ce26045bdf1 eth0: VF slot 1 added Mar 17 17:58:50.203740 kernel: hv_vmbus: registering driver hv_pci Mar 17 17:58:50.207740 kernel: hv_pci 74d567af-4241-435f-b823-3da8b6170c22: PCI VMBus probing: Using version 0x10004 Mar 17 17:58:50.248577 kernel: hv_pci 74d567af-4241-435f-b823-3da8b6170c22: PCI host bridge to bus 4241:00 Mar 17 17:58:50.248788 kernel: pci_bus 4241:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Mar 17 17:58:50.248984 kernel: pci_bus 4241:00: No busn resource found for root bus, will use [bus 00-ff] Mar 17 17:58:50.249135 kernel: pci 4241:00:02.0: [15b3:1016] type 00 class 0x020000 Mar 17 17:58:50.249339 kernel: pci 4241:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 17 17:58:50.249517 kernel: pci 4241:00:02.0: enabling Extended Tags Mar 17 17:58:50.249676 kernel: pci 4241:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4241:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Mar 17 17:58:50.249887 kernel: pci_bus 4241:00: busn_res: [bus 00-ff] end is updated to 00 Mar 17 17:58:50.250044 kernel: pci 4241:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 17 17:58:50.409877 kernel: mlx5_core 4241:00:02.0: enabling device (0000 -> 0002) Mar 17 17:58:50.634258 kernel: mlx5_core 4241:00:02.0: firmware version: 14.30.5000 Mar 17 17:58:50.634483 kernel: hv_netvsc 6045bdf1-2ce2-6045-bdf1-2ce26045bdf1 eth0: VF registering: eth1 Mar 17 17:58:50.634638 kernel: mlx5_core 4241:00:02.0 eth1: joined to eth0 Mar 17 17:58:50.634849 kernel: mlx5_core 4241:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 17 17:58:50.641740 kernel: mlx5_core 4241:00:02.0 enP16961s1: renamed from eth1 Mar 17 17:58:50.696642 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 17 17:58:50.801959 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (458) Mar 17 17:58:50.809751 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (449) Mar 17 17:58:50.815334 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 17 17:58:50.856149 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 17 17:58:50.862097 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 17 17:58:50.878612 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 17 17:58:50.887924 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:58:50.901745 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:58:50.909737 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:58:51.918838 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:58:51.920214 disk-uuid[609]: The operation has completed successfully. Mar 17 17:58:51.997140 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:58:51.997264 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Mar 17 17:58:52.054879 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:58:52.063973 sh[695]: Success Mar 17 17:58:52.098753 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 17:58:52.350293 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:58:52.360846 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:58:52.366176 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 17:58:52.384742 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc Mar 17 17:58:52.384798 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:58:52.389671 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:58:52.392246 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:58:52.394522 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:58:52.778431 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:58:52.783427 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:58:52.797902 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:58:52.803908 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 17 17:58:52.818934 kernel: BTRFS info (device sda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:58:52.818989 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:58:52.819004 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:58:52.843751 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:58:52.859403 kernel: BTRFS info (device sda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:58:52.858791 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:58:52.867674 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:58:52.876928 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:58:52.910887 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:58:52.927907 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:58:52.951844 systemd-networkd[880]: lo: Link UP Mar 17 17:58:52.951853 systemd-networkd[880]: lo: Gained carrier Mar 17 17:58:52.954105 systemd-networkd[880]: Enumeration completed Mar 17 17:58:52.954321 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:58:52.956410 systemd-networkd[880]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:58:52.956415 systemd-networkd[880]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:58:52.958487 systemd[1]: Reached target network.target - Network. 
Mar 17 17:58:53.023751 kernel: mlx5_core 4241:00:02.0 enP16961s1: Link up Mar 17 17:58:53.057782 kernel: hv_netvsc 6045bdf1-2ce2-6045-bdf1-2ce26045bdf1 eth0: Data path switched to VF: enP16961s1 Mar 17 17:58:53.058140 systemd-networkd[880]: enP16961s1: Link UP Mar 17 17:58:53.058264 systemd-networkd[880]: eth0: Link UP Mar 17 17:58:53.058411 systemd-networkd[880]: eth0: Gained carrier Mar 17 17:58:53.058422 systemd-networkd[880]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:58:53.062930 systemd-networkd[880]: enP16961s1: Gained carrier Mar 17 17:58:53.088768 systemd-networkd[880]: eth0: DHCPv4 address 10.200.4.30/24, gateway 10.200.4.1 acquired from 168.63.129.16 Mar 17 17:58:53.967121 ignition[825]: Ignition 2.20.0 Mar 17 17:58:53.967134 ignition[825]: Stage: fetch-offline Mar 17 17:58:53.969358 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:58:53.967180 ignition[825]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:53.967190 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:58:53.967295 ignition[825]: parsed url from cmdline: "" Mar 17 17:58:53.980855 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 17 17:58:53.967300 ignition[825]: no config URL provided Mar 17 17:58:53.967307 ignition[825]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:58:53.967316 ignition[825]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:58:53.967323 ignition[825]: failed to fetch config: resource requires networking Mar 17 17:58:53.967601 ignition[825]: Ignition finished successfully Mar 17 17:58:53.998423 ignition[890]: Ignition 2.20.0 Mar 17 17:58:53.998434 ignition[890]: Stage: fetch Mar 17 17:58:53.998657 ignition[890]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:53.998667 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:58:54.000424 ignition[890]: parsed url from cmdline: "" Mar 17 17:58:54.000430 ignition[890]: no config URL provided Mar 17 17:58:54.000438 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:58:54.000450 ignition[890]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:58:54.000482 ignition[890]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 17 17:58:54.100153 ignition[890]: GET result: OK Mar 17 17:58:54.100258 ignition[890]: config has been read from IMDS userdata Mar 17 17:58:54.100276 ignition[890]: parsing config with SHA512: e9b6ea454b0ac7aea2bb12ab6241d78d8fe183ec5530e61621cf2625209bc27880f81395834b4b44f3027dbc6f90c4eac3b2d634595f11c3988bf58c0f6a7391 Mar 17 17:58:54.104923 unknown[890]: fetched base config from "system" Mar 17 17:58:54.104937 unknown[890]: fetched base config from "system" Mar 17 17:58:54.104946 unknown[890]: fetched user config from "azure" Mar 17 17:58:54.110729 ignition[890]: fetch: fetch complete Mar 17 17:58:54.110744 ignition[890]: fetch: fetch passed Mar 17 17:58:54.112236 ignition[890]: Ignition finished successfully Mar 17 17:58:54.117168 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Mar 17 17:58:54.125915 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:58:54.144310 ignition[897]: Ignition 2.20.0 Mar 17 17:58:54.144322 ignition[897]: Stage: kargs Mar 17 17:58:54.144535 ignition[897]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:54.144548 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:58:54.148281 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:58:54.145249 ignition[897]: kargs: kargs passed Mar 17 17:58:54.145292 ignition[897]: Ignition finished successfully Mar 17 17:58:54.161906 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:58:54.173817 ignition[903]: Ignition 2.20.0 Mar 17 17:58:54.173829 ignition[903]: Stage: disks Mar 17 17:58:54.174057 ignition[903]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:54.174071 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:58:54.182029 ignition[903]: disks: disks passed Mar 17 17:58:54.183486 ignition[903]: Ignition finished successfully Mar 17 17:58:54.185950 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:58:54.188810 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:58:54.193517 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:58:54.195173 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:58:54.201301 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:58:54.212885 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:58:54.225884 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:58:54.299138 systemd-fsck[911]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 17 17:58:54.303271 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Mar 17 17:58:54.317169 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:58:54.405086 kernel: EXT4-fs (sda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none. Mar 17 17:58:54.405834 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:58:54.408369 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:58:54.451814 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:58:54.457790 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:58:54.463907 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (922) Mar 17 17:58:54.472975 kernel: BTRFS info (device sda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:58:54.473017 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:58:54.473036 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:58:54.473282 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 17 17:58:54.483746 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:58:54.482901 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:58:54.482947 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:58:54.494543 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:58:54.496871 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:58:54.507874 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 17 17:58:54.807511 systemd-networkd[880]: enP16961s1: Gained IPv6LL Mar 17 17:58:55.061016 systemd-networkd[880]: eth0: Gained IPv6LL Mar 17 17:58:55.137210 coreos-metadata[924]: Mar 17 17:58:55.137 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 17:58:55.141070 coreos-metadata[924]: Mar 17 17:58:55.140 INFO Fetch successful Mar 17 17:58:55.141070 coreos-metadata[924]: Mar 17 17:58:55.140 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 17 17:58:55.148775 coreos-metadata[924]: Mar 17 17:58:55.148 INFO Fetch successful Mar 17 17:58:55.166289 coreos-metadata[924]: Mar 17 17:58:55.166 INFO wrote hostname ci-4230.1.0-a-2af36eae3a to /sysroot/etc/hostname Mar 17 17:58:55.168019 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:58:55.250352 initrd-setup-root[952]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:58:55.290856 initrd-setup-root[959]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:58:55.301267 initrd-setup-root[966]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:58:55.306254 initrd-setup-root[973]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:58:56.326258 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:58:56.333821 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:58:56.340898 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:58:56.351307 kernel: BTRFS info (device sda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:58:56.352062 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 17 17:58:56.378127 ignition[1040]: INFO : Ignition 2.20.0 Mar 17 17:58:56.378127 ignition[1040]: INFO : Stage: mount Mar 17 17:58:56.389025 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:56.389025 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:58:56.389025 ignition[1040]: INFO : mount: mount passed Mar 17 17:58:56.389025 ignition[1040]: INFO : Ignition finished successfully Mar 17 17:58:56.381840 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:58:56.391190 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:58:56.407888 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:58:56.416709 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:58:56.434738 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1052) Mar 17 17:58:56.434779 kernel: BTRFS info (device sda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:58:56.438737 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:58:56.442374 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:58:56.448738 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:58:56.449851 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:58:56.473384 ignition[1069]: INFO : Ignition 2.20.0 Mar 17 17:58:56.473384 ignition[1069]: INFO : Stage: files Mar 17 17:58:56.477516 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:56.477516 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:58:56.477516 ignition[1069]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:58:56.490087 ignition[1069]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:58:56.490087 ignition[1069]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:58:56.584569 ignition[1069]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:58:56.589109 ignition[1069]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:58:56.589109 ignition[1069]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:58:56.585086 unknown[1069]: wrote ssh authorized keys file for user: core Mar 17 17:58:56.621161 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:58:56.625567 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:58:56.625567 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:58:56.625567 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:58:56.625567 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 17:58:56.625567 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 17:58:56.625567 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 17:58:56.625567 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Mar 17 17:58:57.132193 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 17 17:58:57.412523 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 17:58:57.417784 ignition[1069]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:58:57.417784 ignition[1069]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:58:57.425748 ignition[1069]: INFO : files: files passed Mar 17 17:58:57.425748 ignition[1069]: INFO : Ignition finished successfully Mar 17 17:58:57.430658 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:58:57.439934 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:58:57.445849 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:58:57.448704 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:58:57.448803 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 17 17:58:57.472948 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:58:57.472948 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:58:57.477342 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:58:57.476868 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:58:57.489785 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:58:57.498885 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:58:57.522618 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:58:57.522743 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:58:57.531128 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:58:57.533570 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:58:57.540295 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:58:57.545907 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:58:57.559549 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:58:57.570872 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:58:57.585689 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:58:57.586003 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:58:57.586533 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:58:57.595896 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Mar 17 17:58:57.596034 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:58:57.603580 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:58:57.608147 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:58:57.612834 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:58:57.617585 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:58:57.624897 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:58:57.632329 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:58:57.636988 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:58:57.637162 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:58:57.637501 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:58:57.638213 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:58:57.638557 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:58:57.638681 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:58:57.639817 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:58:57.640187 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:58:57.640522 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:58:57.655636 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:58:57.658418 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:58:57.658558 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:58:57.663441 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Mar 17 17:58:57.663560 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:58:57.680598 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:58:57.680770 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:58:57.695163 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:58:57.695323 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:58:57.715059 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:58:57.717382 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:58:57.719839 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:58:57.727906 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:58:57.729929 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:58:57.744062 ignition[1121]: INFO : Ignition 2.20.0
Mar 17 17:58:57.744062 ignition[1121]: INFO : Stage: umount
Mar 17 17:58:57.744062 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:58:57.744062 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:58:57.730100 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:58:57.753290 ignition[1121]: INFO : umount: umount passed
Mar 17 17:58:57.753290 ignition[1121]: INFO : Ignition finished successfully
Mar 17 17:58:57.733032 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:58:57.733179 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:58:57.750211 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:58:57.750313 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:58:57.770145 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:58:57.770282 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:58:57.779632 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:58:57.779704 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:58:57.786557 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:58:57.786621 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:58:57.794699 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:58:57.794773 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:58:57.801615 systemd[1]: Stopped target network.target - Network.
Mar 17 17:58:57.803676 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:58:57.803748 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:58:57.803855 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:58:57.816401 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:58:57.818973 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:58:57.822184 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:58:57.827575 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:58:57.833829 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:58:57.833890 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:58:57.837989 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:58:57.838035 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:58:57.842520 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:58:57.842585 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:58:57.846504 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:58:57.846560 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:58:57.849146 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:58:57.853415 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:58:57.856829 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:58:57.873224 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:58:57.873359 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:58:57.879456 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:58:57.879674 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:58:57.879786 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:58:57.887474 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:58:57.888093 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:58:57.888172 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:58:57.899687 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:58:57.906024 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:58:57.906088 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:58:57.908925 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:58:57.908981 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:58:57.913492 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:58:57.913545 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:58:57.916503 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:58:57.916558 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:58:57.921842 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:58:57.930616 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:58:57.930677 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:58:57.943027 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:58:57.943171 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:58:57.950838 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:58:57.950926 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:58:57.951819 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:58:57.951854 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:58:57.952129 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:58:57.952171 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:58:57.955901 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:58:57.955944 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:58:57.956585 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:58:57.956622 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:58:57.959871 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:58:57.984202 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:58:57.984268 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:58:58.007587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:58:58.017505 kernel: hv_netvsc 6045bdf1-2ce2-6045-bdf1-2ce26045bdf1 eth0: Data path switched from VF: enP16961s1
Mar 17 17:58:58.007666 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:58:58.012817 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:58:58.012883 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:58:58.022549 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:58:58.022637 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:58:58.043386 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:58:58.043504 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:58:58.172364 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:58:58.172497 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:58:58.177305 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:58:58.181732 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:58:58.181802 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:58:58.198905 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:58:58.227709 systemd[1]: Switching root.
Mar 17 17:58:58.296708 systemd-journald[177]: Journal stopped
Mar 17 17:59:03.096381 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:59:03.096419 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:59:03.096434 kernel: SELinux: policy capability open_perms=1
Mar 17 17:59:03.096443 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:59:03.096453 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:59:03.096461 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:59:03.096471 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:59:03.096484 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:59:03.096493 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:59:03.096504 kernel: audit: type=1403 audit(1742234339.345:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:59:03.096514 systemd[1]: Successfully loaded SELinux policy in 160.633ms.
Mar 17 17:59:03.096527 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.783ms.
Mar 17 17:59:03.096539 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:59:03.096550 systemd[1]: Detected virtualization microsoft.
Mar 17 17:59:03.096565 systemd[1]: Detected architecture x86-64.
Mar 17 17:59:03.096574 systemd[1]: Detected first boot.
Mar 17 17:59:03.096584 systemd[1]: Hostname set to <ci-4230.1.0-a-2af36eae3a>.
Mar 17 17:59:03.096593 systemd[1]: Initializing machine ID from random generator.
Mar 17 17:59:03.096603 zram_generator::config[1165]: No configuration found.
Mar 17 17:59:03.096618 kernel: Guest personality initialized and is inactive
Mar 17 17:59:03.096629 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Mar 17 17:59:03.096639 kernel: Initialized host personality
Mar 17 17:59:03.096651 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:59:03.096661 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:59:03.096674 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:59:03.096684 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:59:03.096696 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:59:03.096711 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:59:03.096735 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:59:03.096749 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:59:03.096763 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:59:03.096777 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:59:03.096792 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:59:03.096807 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:59:03.096826 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:59:03.096840 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:59:03.096854 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:59:03.096870 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:59:03.096886 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:59:03.096903 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:59:03.096926 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:59:03.096942 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:59:03.096958 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:59:03.096979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:59:03.096994 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:59:03.097010 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:59:03.097025 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:59:03.097041 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:59:03.097058 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:59:03.097073 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:59:03.097093 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:59:03.097108 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:59:03.097123 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:59:03.097139 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:59:03.097155 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:59:03.097173 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:59:03.097192 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:59:03.097207 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:59:03.097224 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:59:03.097241 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:59:03.097260 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:59:03.097276 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:59:03.097292 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:03.097313 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:59:03.097332 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:59:03.097349 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:59:03.097367 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:59:03.097383 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:59:03.097400 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:59:03.097418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:59:03.097434 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:59:03.097455 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:59:03.097472 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:59:03.097491 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:59:03.097509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:59:03.097526 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:59:03.097543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:59:03.097562 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:59:03.097579 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:59:03.097600 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:59:03.097618 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:59:03.097634 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:59:03.097652 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:59:03.097669 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:59:03.097685 kernel: loop: module loaded
Mar 17 17:59:03.097703 kernel: fuse: init (API version 7.39)
Mar 17 17:59:03.098239 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:59:03.098277 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:59:03.098295 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:59:03.098312 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:59:03.098330 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:59:03.098347 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:59:03.098363 systemd[1]: Stopped verity-setup.service.
Mar 17 17:59:03.098382 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:03.098400 kernel: ACPI: bus type drm_connector registered
Mar 17 17:59:03.098421 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:59:03.098440 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:59:03.098491 systemd-journald[1251]: Collecting audit messages is disabled.
Mar 17 17:59:03.098527 systemd-journald[1251]: Journal started
Mar 17 17:59:03.098565 systemd-journald[1251]: Runtime Journal (/run/log/journal/4fc2aa880c58413b8108821f64bde770) is 8M, max 158.8M, 150.8M free.
Mar 17 17:59:02.476265 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:59:02.484597 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 17 17:59:02.485011 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:59:03.106474 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:59:03.110079 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:59:03.112852 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:59:03.115745 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:59:03.118487 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:59:03.121005 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:59:03.124615 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:59:03.128054 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:59:03.128239 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:59:03.131171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:59:03.131349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:59:03.134385 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:59:03.134567 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:59:03.137264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:59:03.137446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:59:03.140402 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:59:03.140579 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:59:03.143225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:59:03.143402 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:59:03.146089 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:59:03.148883 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:59:03.152027 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:59:03.164093 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:59:03.176844 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:59:03.187027 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:59:03.190096 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:59:03.190147 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:59:03.194860 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:59:03.203952 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:59:03.217061 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:59:03.221778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:59:03.224850 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:59:03.234820 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:59:03.237792 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:59:03.239302 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:59:03.242056 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:59:03.247887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:59:03.256212 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:59:03.261122 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:59:03.270493 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:59:03.273946 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:59:03.277675 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:59:03.281171 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:59:03.291823 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:59:03.295423 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:59:03.301232 systemd-journald[1251]: Time spent on flushing to /var/log/journal/4fc2aa880c58413b8108821f64bde770 is 32.253ms for 959 entries.
Mar 17 17:59:03.301232 systemd-journald[1251]: System Journal (/var/log/journal/4fc2aa880c58413b8108821f64bde770) is 8M, max 2.6G, 2.6G free.
Mar 17 17:59:03.364886 systemd-journald[1251]: Received client request to flush runtime journal.
Mar 17 17:59:03.364965 kernel: loop0: detected capacity change from 0 to 138176
Mar 17 17:59:03.305932 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:59:03.318884 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:59:03.325873 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:59:03.351306 udevadm[1317]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:59:03.367921 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:59:03.378662 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:59:03.404873 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:59:03.486519 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:59:03.622028 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:59:03.633885 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:59:03.763909 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Mar 17 17:59:03.763940 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Mar 17 17:59:03.769305 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:59:03.902749 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:59:03.966743 kernel: loop1: detected capacity change from 0 to 147912
Mar 17 17:59:04.530743 kernel: loop2: detected capacity change from 0 to 28272
Mar 17 17:59:04.907158 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:59:04.915046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:59:04.941668 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Mar 17 17:59:05.042744 kernel: loop3: detected capacity change from 0 to 205544
Mar 17 17:59:05.084795 kernel: loop4: detected capacity change from 0 to 138176
Mar 17 17:59:05.100746 kernel: loop5: detected capacity change from 0 to 147912
Mar 17 17:59:05.116747 kernel: loop6: detected capacity change from 0 to 28272
Mar 17 17:59:05.125745 kernel: loop7: detected capacity change from 0 to 205544
Mar 17 17:59:05.135192 (sd-merge)[1335]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 17 17:59:05.135795 (sd-merge)[1335]: Merged extensions into '/usr'.
Mar 17 17:59:05.139659 systemd[1]: Reload requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:59:05.139678 systemd[1]: Reloading...
Mar 17 17:59:05.237415 zram_generator::config[1374]: No configuration found.
Mar 17 17:59:05.415746 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:59:05.433836 kernel: hv_vmbus: registering driver hyperv_fb
Mar 17 17:59:05.436381 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 17 17:59:05.442766 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 17 17:59:05.447432 kernel: Console: switching to colour dummy device 80x25
Mar 17 17:59:05.455944 kernel: hv_vmbus: registering driver hv_balloon
Mar 17 17:59:05.456013 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 17:59:05.456045 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 17 17:59:05.700879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:59:05.773753 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1356)
Mar 17 17:59:05.923178 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:59:05.923960 systemd[1]: Reloading finished in 783 ms.
Mar 17 17:59:05.937044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:59:05.952219 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:59:05.971746 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Mar 17 17:59:06.026455 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 17 17:59:06.041849 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:59:06.046893 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:59:06.054895 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:59:06.058890 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:59:06.071815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:59:06.086523 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:59:06.097298 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:59:06.097686 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:59:06.100175 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:59:06.103355 systemd[1]: Reload requested from client PID 1519 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:59:06.103367 systemd[1]: Reloading...
Mar 17 17:59:06.105228 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:59:06.105809 systemd-tmpfiles[1522]: ACLs are not supported, ignoring.
Mar 17 17:59:06.106000 systemd-tmpfiles[1522]: ACLs are not supported, ignoring.
Mar 17 17:59:06.115029 systemd-tmpfiles[1522]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:59:06.115043 systemd-tmpfiles[1522]: Skipping /boot
Mar 17 17:59:06.131133 systemd-tmpfiles[1522]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:59:06.131280 systemd-tmpfiles[1522]: Skipping /boot
Mar 17 17:59:06.189769 zram_generator::config[1555]: No configuration found.
Mar 17 17:59:06.212655 lvm[1527]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:59:06.351389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:59:06.465735 systemd[1]: Reloading finished in 361 ms.
Mar 17 17:59:06.494578 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:59:06.498069 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:59:06.501353 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:59:06.511304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:59:06.518152 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:59:06.544055 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:59:06.548709 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:59:06.554611 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:59:06.561980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:59:06.565201 lvm[1624]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:59:06.574391 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:59:06.586033 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:59:06.595109 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:59:06.595359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:59:06.602696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:59:06.610013 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:59:06.619880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:59:06.625184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:59:06.625351 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:59:06.625490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:59:06.629017 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:59:06.644635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 17 17:59:06.644873 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:59:06.658382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:59:06.663790 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:59:06.677022 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:59:06.685596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:59:06.686643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:59:06.690595 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:59:06.691966 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:59:06.706870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:59:06.718259 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:59:06.721231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:59:06.727002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:59:06.730080 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:59:06.730241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:59:06.730425 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Mar 17 17:59:06.730549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:59:06.731884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:59:06.732120 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:59:06.748835 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:59:06.749193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:59:06.754806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:59:06.772494 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:59:06.776483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:59:06.782839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:59:06.785607 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:59:06.785821 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:59:06.786070 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:59:06.790169 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:59:06.792958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:59:06.793201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Mar 17 17:59:06.802021 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:59:06.802254 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:59:06.808336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:59:06.808565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:59:06.812309 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:59:06.812455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:59:06.817914 systemd[1]: Finished ensure-sysext.service. Mar 17 17:59:06.823441 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:59:06.823482 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:59:06.851618 augenrules[1676]: No rules Mar 17 17:59:06.853017 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:59:06.853288 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:59:06.888992 systemd-resolved[1626]: Positive Trust Anchors: Mar 17 17:59:06.889006 systemd-resolved[1626]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:59:06.889053 systemd-resolved[1626]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:59:06.893166 systemd-resolved[1626]: Using system hostname 'ci-4230.1.0-a-2af36eae3a'. Mar 17 17:59:06.894585 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:59:06.897941 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:59:06.923859 systemd-networkd[1521]: lo: Link UP Mar 17 17:59:06.923870 systemd-networkd[1521]: lo: Gained carrier Mar 17 17:59:06.926551 systemd-networkd[1521]: Enumeration completed Mar 17 17:59:06.926680 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:59:06.927032 systemd-networkd[1521]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:59:06.927038 systemd-networkd[1521]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:59:06.930615 systemd[1]: Reached target network.target - Network. Mar 17 17:59:06.938062 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:59:06.942641 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:59:06.970980 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Mar 17 17:59:06.986757 kernel: mlx5_core 4241:00:02.0 enP16961s1: Link up Mar 17 17:59:07.006212 kernel: hv_netvsc 6045bdf1-2ce2-6045-bdf1-2ce26045bdf1 eth0: Data path switched to VF: enP16961s1 Mar 17 17:59:07.007470 systemd-networkd[1521]: enP16961s1: Link UP Mar 17 17:59:07.007619 systemd-networkd[1521]: eth0: Link UP Mar 17 17:59:07.007626 systemd-networkd[1521]: eth0: Gained carrier Mar 17 17:59:07.007650 systemd-networkd[1521]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:59:07.010110 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:59:07.014118 systemd-networkd[1521]: enP16961s1: Gained carrier Mar 17 17:59:07.046784 systemd-networkd[1521]: eth0: DHCPv4 address 10.200.4.30/24, gateway 10.200.4.1 acquired from 168.63.129.16 Mar 17 17:59:08.244881 systemd-networkd[1521]: eth0: Gained IPv6LL Mar 17 17:59:08.248000 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:59:08.251577 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:59:08.431735 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:59:08.435002 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:59:08.820931 systemd-networkd[1521]: enP16961s1: Gained IPv6LL Mar 17 17:59:10.174920 ldconfig[1301]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:59:10.185111 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:59:10.192952 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 17 17:59:10.203803 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:59:10.206773 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:59:10.209437 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:59:10.212299 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:59:10.215249 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:59:10.218045 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:59:10.220929 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:59:10.223795 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:59:10.223840 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:59:10.225925 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:59:10.229179 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:59:10.232999 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:59:10.238065 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:59:10.241183 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:59:10.244231 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:59:10.255334 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:59:10.258421 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:59:10.261790 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:59:10.264469 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 17 17:59:10.266684 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:59:10.268838 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:59:10.268871 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:59:10.276822 systemd[1]: Starting chronyd.service - NTP client/server... Mar 17 17:59:10.281858 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:59:10.290912 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:59:10.294891 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:59:10.298875 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:59:10.306922 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:59:10.309418 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:59:10.309498 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 17 17:59:10.310927 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 17 17:59:10.326355 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 17 17:59:10.331828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:10.335694 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:59:10.340952 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:59:10.345352 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 17 17:59:10.356893 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:59:10.369103 KVP[1703]: KVP starting; pid is:1703 Mar 17 17:59:10.370212 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:59:10.376435 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:59:10.377132 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:59:10.380742 kernel: hv_utils: KVP IC version 4.0 Mar 17 17:59:10.380959 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:59:10.381071 KVP[1703]: KVP LIC Version: 3.1 Mar 17 17:59:10.383544 jq[1699]: false Mar 17 17:59:10.391839 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:59:10.400250 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:59:10.401014 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:59:10.403534 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:59:10.404333 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 17 17:59:10.406598 jq[1714]: true Mar 17 17:59:10.436390 jq[1720]: true Mar 17 17:59:10.448529 extend-filesystems[1702]: Found loop4 Mar 17 17:59:10.451917 extend-filesystems[1702]: Found loop5 Mar 17 17:59:10.451917 extend-filesystems[1702]: Found loop6 Mar 17 17:59:10.451917 extend-filesystems[1702]: Found loop7 Mar 17 17:59:10.451917 extend-filesystems[1702]: Found sda Mar 17 17:59:10.451917 extend-filesystems[1702]: Found sda1 Mar 17 17:59:10.451917 extend-filesystems[1702]: Found sda2 Mar 17 17:59:10.451917 extend-filesystems[1702]: Found sda3 Mar 17 17:59:10.451917 extend-filesystems[1702]: Found usr Mar 17 17:59:10.451917 extend-filesystems[1702]: Found sda4 Mar 17 17:59:10.451917 extend-filesystems[1702]: Found sda6 Mar 17 17:59:10.451917 extend-filesystems[1702]: Found sda7 Mar 17 17:59:10.451917 extend-filesystems[1702]: Found sda9 Mar 17 17:59:10.451917 extend-filesystems[1702]: Checking size of /dev/sda9 Mar 17 17:59:10.504232 chronyd[1743]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 17 17:59:10.455559 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:59:10.458980 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:59:10.484405 (ntainerd)[1736]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:59:10.488443 (chronyd)[1694]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 17 17:59:10.514182 chronyd[1743]: Timezone right/UTC failed leap second check, ignoring Mar 17 17:59:10.514380 chronyd[1743]: Loaded seccomp filter (level 2) Mar 17 17:59:10.517515 systemd[1]: Started chronyd.service - NTP client/server. 
Mar 17 17:59:10.523710 update_engine[1712]: I20250317 17:59:10.523612 1712 main.cc:92] Flatcar Update Engine starting Mar 17 17:59:10.531586 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:59:10.568699 extend-filesystems[1702]: Old size kept for /dev/sda9 Mar 17 17:59:10.571067 extend-filesystems[1702]: Found sr0 Mar 17 17:59:10.576667 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:59:10.576962 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:59:10.600192 systemd-logind[1710]: New seat seat0. Mar 17 17:59:10.603131 systemd-logind[1710]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:59:10.603339 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:59:10.679326 dbus-daemon[1697]: [system] SELinux support is enabled Mar 17 17:59:10.682410 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:59:10.693291 update_engine[1712]: I20250317 17:59:10.692147 1712 update_check_scheduler.cc:74] Next update check in 7m23s Mar 17 17:59:10.693921 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:59:10.693957 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:59:10.699435 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:59:10.703656 dbus-daemon[1697]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 17:59:10.704084 bash[1762]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:59:10.699462 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 17 17:59:10.707642 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:59:10.712620 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:59:10.718497 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:59:10.726430 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:59:10.769100 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1765) Mar 17 17:59:10.796990 coreos-metadata[1696]: Mar 17 17:59:10.796 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 17:59:10.800591 coreos-metadata[1696]: Mar 17 17:59:10.800 INFO Fetch successful Mar 17 17:59:10.800805 coreos-metadata[1696]: Mar 17 17:59:10.800 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 17 17:59:10.807574 coreos-metadata[1696]: Mar 17 17:59:10.807 INFO Fetch successful Mar 17 17:59:10.807574 coreos-metadata[1696]: Mar 17 17:59:10.807 INFO Fetching http://168.63.129.16/machine/63cb3da4-3a37-4aa7-8ded-cf0ce9b8e187/868da744%2D6863%2D4646%2D9717%2D69c4db3ac267.%5Fci%2D4230.1.0%2Da%2D2af36eae3a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 17 17:59:10.810804 coreos-metadata[1696]: Mar 17 17:59:10.809 INFO Fetch successful Mar 17 17:59:10.811005 coreos-metadata[1696]: Mar 17 17:59:10.810 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 17 17:59:10.822884 coreos-metadata[1696]: Mar 17 17:59:10.821 INFO Fetch successful Mar 17 17:59:10.921889 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:59:10.927224 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 17 17:59:11.107167 locksmithd[1787]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:59:11.137243 sshd_keygen[1732]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:59:11.166222 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:59:11.176047 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:59:11.185961 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 17 17:59:11.195848 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:59:11.196279 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:59:11.211084 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:59:11.223292 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 17 17:59:11.270092 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:59:11.280129 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:59:11.291022 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:59:11.294225 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:59:11.779620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:59:11.784387 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:59:11.873445 containerd[1736]: time="2025-03-17T17:59:11.872763300Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:59:11.907853 containerd[1736]: time="2025-03-17T17:59:11.907700500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:11.910442 containerd[1736]: time="2025-03-17T17:59:11.910401600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:59:11.910556 containerd[1736]: time="2025-03-17T17:59:11.910541000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:59:11.910626 containerd[1736]: time="2025-03-17T17:59:11.910612300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:59:11.910905 containerd[1736]: time="2025-03-17T17:59:11.910881400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:59:11.910997 containerd[1736]: time="2025-03-17T17:59:11.910982300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:11.911144 containerd[1736]: time="2025-03-17T17:59:11.911125300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:59:11.911205 containerd[1736]: time="2025-03-17T17:59:11.911192800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:11.911545 containerd[1736]: time="2025-03-17T17:59:11.911508000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:59:11.911545 containerd[1736]: time="2025-03-17T17:59:11.911531100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:59:11.911545 containerd[1736]: time="2025-03-17T17:59:11.911550000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:59:11.911699 containerd[1736]: time="2025-03-17T17:59:11.911563500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:11.911699 containerd[1736]: time="2025-03-17T17:59:11.911670700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:11.912335 containerd[1736]: time="2025-03-17T17:59:11.911933900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:11.912335 containerd[1736]: time="2025-03-17T17:59:11.912143400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:59:11.912335 containerd[1736]: time="2025-03-17T17:59:11.912163900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:59:11.912335 containerd[1736]: time="2025-03-17T17:59:11.912267900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:59:11.912335 containerd[1736]: time="2025-03-17T17:59:11.912319700Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:59:11.927668 containerd[1736]: time="2025-03-17T17:59:11.927627000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:59:11.927806 containerd[1736]: time="2025-03-17T17:59:11.927692100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 17 17:59:11.927806 containerd[1736]: time="2025-03-17T17:59:11.927713500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:59:11.927806 containerd[1736]: time="2025-03-17T17:59:11.927743300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:59:11.927806 containerd[1736]: time="2025-03-17T17:59:11.927761900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:59:11.927951 containerd[1736]: time="2025-03-17T17:59:11.927937300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:59:11.930005 containerd[1736]: time="2025-03-17T17:59:11.929810900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:59:11.930005 containerd[1736]: time="2025-03-17T17:59:11.929969200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:59:11.930005 containerd[1736]: time="2025-03-17T17:59:11.929993700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:59:11.930210 containerd[1736]: time="2025-03-17T17:59:11.930016400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:59:11.930210 containerd[1736]: time="2025-03-17T17:59:11.930036800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:59:11.930210 containerd[1736]: time="2025-03-17T17:59:11.930063400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 17 17:59:11.930210 containerd[1736]: time="2025-03-17T17:59:11.930079900Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:59:11.930210 containerd[1736]: time="2025-03-17T17:59:11.930099000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:59:11.930210 containerd[1736]: time="2025-03-17T17:59:11.930119300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:59:11.930210 containerd[1736]: time="2025-03-17T17:59:11.930139700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:59:11.930210 containerd[1736]: time="2025-03-17T17:59:11.930157400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:59:11.930210 containerd[1736]: time="2025-03-17T17:59:11.930174300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:59:11.930210 containerd[1736]: time="2025-03-17T17:59:11.930201300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930219300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930237500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930255600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930273100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930291300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930307500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930325500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930343800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930364000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930380300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930397000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930414300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930435200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930464900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 17 17:59:11.930542 containerd[1736]: time="2025-03-17T17:59:11.930489000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.931499 containerd[1736]: time="2025-03-17T17:59:11.930505800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:59:11.931499 containerd[1736]: time="2025-03-17T17:59:11.930582800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:59:11.931499 containerd[1736]: time="2025-03-17T17:59:11.930608600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:59:11.931499 containerd[1736]: time="2025-03-17T17:59:11.930624500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:59:11.931499 containerd[1736]: time="2025-03-17T17:59:11.930642600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:59:11.931499 containerd[1736]: time="2025-03-17T17:59:11.930655600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:59:11.931499 containerd[1736]: time="2025-03-17T17:59:11.930671800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:59:11.931499 containerd[1736]: time="2025-03-17T17:59:11.930684800Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:59:11.931499 containerd[1736]: time="2025-03-17T17:59:11.930699000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:59:11.932905 containerd[1736]: time="2025-03-17T17:59:11.932846600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:59:11.933139 containerd[1736]: time="2025-03-17T17:59:11.932901700Z" level=info msg="Connect containerd service" Mar 17 17:59:11.933139 containerd[1736]: time="2025-03-17T17:59:11.932953200Z" level=info msg="using legacy CRI server" Mar 17 17:59:11.933139 containerd[1736]: time="2025-03-17T17:59:11.932964700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:59:11.933242 containerd[1736]: time="2025-03-17T17:59:11.933208100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:59:11.933828 containerd[1736]: time="2025-03-17T17:59:11.933801400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:59:11.934313 containerd[1736]: time="2025-03-17T17:59:11.934007900Z" level=info msg="Start subscribing containerd event" Mar 17 17:59:11.934313 containerd[1736]: time="2025-03-17T17:59:11.934058600Z" level=info msg="Start recovering state" Mar 17 17:59:11.934313 containerd[1736]: time="2025-03-17T17:59:11.934127300Z" level=info msg="Start event monitor" Mar 17 17:59:11.934313 containerd[1736]: time="2025-03-17T17:59:11.934142100Z" level=info msg="Start 
snapshots syncer" Mar 17 17:59:11.934313 containerd[1736]: time="2025-03-17T17:59:11.934154400Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:59:11.934313 containerd[1736]: time="2025-03-17T17:59:11.934164500Z" level=info msg="Start streaming server" Mar 17 17:59:11.935188 containerd[1736]: time="2025-03-17T17:59:11.935163000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:59:11.935274 containerd[1736]: time="2025-03-17T17:59:11.935253400Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:59:11.935415 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:59:11.938593 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:59:11.941245 systemd[1]: Startup finished in 903ms (firmware) + 33.517s (loader) + 921ms (kernel) + 10.651s (initrd) + 12.754s (userspace) = 58.748s. Mar 17 17:59:11.946097 containerd[1736]: time="2025-03-17T17:59:11.946071400Z" level=info msg="containerd successfully booted in 0.075190s" Mar 17 17:59:12.338032 login[1862]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 17:59:12.345468 login[1863]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 17:59:12.348595 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:59:12.356410 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:59:12.367482 systemd-logind[1710]: New session 1 of user core. Mar 17 17:59:12.375152 systemd-logind[1710]: New session 2 of user core. Mar 17 17:59:12.383232 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:59:12.392683 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 17 17:59:12.396163 (systemd)[1889]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:59:12.399947 systemd-logind[1710]: New session c1 of user core. Mar 17 17:59:12.602814 kubelet[1873]: E0317 17:59:12.601747 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:59:12.605999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:59:12.606198 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:59:12.607228 systemd[1]: kubelet.service: Consumed 894ms CPU time, 238.7M memory peak. Mar 17 17:59:12.683837 systemd[1889]: Queued start job for default target default.target. Mar 17 17:59:12.689962 systemd[1889]: Created slice app.slice - User Application Slice. Mar 17 17:59:12.690622 systemd[1889]: Reached target paths.target - Paths. Mar 17 17:59:12.690710 systemd[1889]: Reached target timers.target - Timers. Mar 17 17:59:12.692269 systemd[1889]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:59:12.707349 systemd[1889]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:59:12.707503 systemd[1889]: Reached target sockets.target - Sockets. Mar 17 17:59:12.707555 systemd[1889]: Reached target basic.target - Basic System. Mar 17 17:59:12.707605 systemd[1889]: Reached target default.target - Main User Target. Mar 17 17:59:12.707640 systemd[1889]: Startup finished in 299ms. Mar 17 17:59:12.707882 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:59:12.717895 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:59:12.719423 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 17 17:59:13.377989 waagent[1859]: 2025-03-17T17:59:13.377882Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.378349Z INFO Daemon Daemon OS: flatcar 4230.1.0 Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.379172Z INFO Daemon Daemon Python: 3.11.11 Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.380098Z INFO Daemon Daemon Run daemon Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.380788Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.0' Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.381426Z INFO Daemon Daemon Using waagent for provisioning Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.382302Z INFO Daemon Daemon Activate resource disk Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.382925Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.388469Z INFO Daemon Daemon Found device: None Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.389192Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.390106Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.391257Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 17:59:13.411313 waagent[1859]: 2025-03-17T17:59:13.392130Z INFO Daemon Daemon Running default provisioning handler Mar 17 17:59:13.414646 waagent[1859]: 2025-03-17T17:59:13.414553Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 17 17:59:13.420967 waagent[1859]: 2025-03-17T17:59:13.420904Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 17:59:13.425123 waagent[1859]: 2025-03-17T17:59:13.425016Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 17:59:13.428803 waagent[1859]: 2025-03-17T17:59:13.425191Z INFO Daemon Daemon Copying ovf-env.xml Mar 17 17:59:13.535973 waagent[1859]: 2025-03-17T17:59:13.533177Z INFO Daemon Daemon Successfully mounted dvd Mar 17 17:59:13.565910 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 17 17:59:13.580910 waagent[1859]: 2025-03-17T17:59:13.567612Z INFO Daemon Daemon Detect protocol endpoint Mar 17 17:59:13.580910 waagent[1859]: 2025-03-17T17:59:13.567992Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 17:59:13.580910 waagent[1859]: 2025-03-17T17:59:13.568833Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 17 17:59:13.580910 waagent[1859]: 2025-03-17T17:59:13.569197Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 17 17:59:13.580910 waagent[1859]: 2025-03-17T17:59:13.569783Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 17 17:59:13.580910 waagent[1859]: 2025-03-17T17:59:13.570042Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 17 17:59:13.615184 waagent[1859]: 2025-03-17T17:59:13.615114Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 17 17:59:13.622335 waagent[1859]: 2025-03-17T17:59:13.615745Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 17 17:59:13.622335 waagent[1859]: 2025-03-17T17:59:13.616360Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 17 17:59:13.812006 waagent[1859]: 2025-03-17T17:59:13.811905Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 17 17:59:13.815662 waagent[1859]: 2025-03-17T17:59:13.815593Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 17 17:59:13.822315 waagent[1859]: 2025-03-17T17:59:13.822255Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 17:59:13.838068 waagent[1859]: 2025-03-17T17:59:13.838008Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.166 Mar 17 17:59:13.856761 waagent[1859]: 2025-03-17T17:59:13.838703Z INFO Daemon Mar 17 17:59:13.856761 waagent[1859]: 2025-03-17T17:59:13.839438Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: bc1f4ddc-7ab3-49f1-9a3f-d37e05c05abc eTag: 5353848745919268482 source: Fabric] Mar 17 17:59:13.856761 waagent[1859]: 2025-03-17T17:59:13.840630Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 17 17:59:13.856761 waagent[1859]: 2025-03-17T17:59:13.841597Z INFO Daemon Mar 17 17:59:13.856761 waagent[1859]: 2025-03-17T17:59:13.842243Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 17 17:59:13.856761 waagent[1859]: 2025-03-17T17:59:13.846915Z INFO Daemon Daemon Downloading artifacts profile blob Mar 17 17:59:13.933602 waagent[1859]: 2025-03-17T17:59:13.933495Z INFO Daemon Downloaded certificate {'thumbprint': 'EEAD80345617B3827A146A3C8D0A04E78DE1B06F', 'hasPrivateKey': True} Mar 17 17:59:13.938664 waagent[1859]: 2025-03-17T17:59:13.938595Z INFO Daemon Fetch goal state completed Mar 17 17:59:13.949001 waagent[1859]: 2025-03-17T17:59:13.948948Z INFO Daemon Daemon Starting provisioning Mar 17 17:59:13.955567 waagent[1859]: 2025-03-17T17:59:13.949216Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 17 17:59:13.955567 waagent[1859]: 2025-03-17T17:59:13.950026Z INFO Daemon Daemon Set hostname [ci-4230.1.0-a-2af36eae3a] Mar 17 17:59:13.986401 waagent[1859]: 2025-03-17T17:59:13.986302Z INFO Daemon Daemon Publish hostname [ci-4230.1.0-a-2af36eae3a] Mar 17 17:59:13.993743 waagent[1859]: 2025-03-17T17:59:13.986872Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 17 17:59:13.993743 waagent[1859]: 2025-03-17T17:59:13.987638Z INFO Daemon Daemon Primary interface is [eth0] Mar 17 17:59:13.998124 systemd-networkd[1521]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:59:13.998134 systemd-networkd[1521]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:59:13.998182 systemd-networkd[1521]: eth0: DHCP lease lost Mar 17 17:59:13.999378 waagent[1859]: 2025-03-17T17:59:13.999279Z INFO Daemon Daemon Create user account if not exists Mar 17 17:59:14.013706 waagent[1859]: 2025-03-17T17:59:13.999684Z INFO Daemon Daemon User core already exists, skip useradd Mar 17 17:59:14.013706 waagent[1859]: 2025-03-17T17:59:14.000405Z INFO Daemon Daemon Configure sudoer Mar 17 17:59:14.013706 waagent[1859]: 2025-03-17T17:59:14.001389Z INFO Daemon Daemon Configure sshd Mar 17 17:59:14.013706 waagent[1859]: 2025-03-17T17:59:14.002080Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 17 17:59:14.013706 waagent[1859]: 2025-03-17T17:59:14.002668Z INFO Daemon Daemon Deploy ssh public key. 
Mar 17 17:59:14.049779 systemd-networkd[1521]: eth0: DHCPv4 address 10.200.4.30/24, gateway 10.200.4.1 acquired from 168.63.129.16 Mar 17 17:59:15.126746 waagent[1859]: 2025-03-17T17:59:15.126656Z INFO Daemon Daemon Provisioning complete Mar 17 17:59:15.137962 waagent[1859]: 2025-03-17T17:59:15.137899Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 17 17:59:15.144651 waagent[1859]: 2025-03-17T17:59:15.138216Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 17 17:59:15.144651 waagent[1859]: 2025-03-17T17:59:15.139287Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 17 17:59:15.266255 waagent[1941]: 2025-03-17T17:59:15.266147Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 17 17:59:15.266698 waagent[1941]: 2025-03-17T17:59:15.266321Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.0 Mar 17 17:59:15.266698 waagent[1941]: 2025-03-17T17:59:15.266402Z INFO ExtHandler ExtHandler Python: 3.11.11 Mar 17 17:59:15.274776 waagent[1941]: 2025-03-17T17:59:15.274677Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 17:59:15.274984 waagent[1941]: 2025-03-17T17:59:15.274934Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:59:15.275062 waagent[1941]: 2025-03-17T17:59:15.275030Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:59:15.282793 waagent[1941]: 2025-03-17T17:59:15.282707Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 17:59:15.288052 waagent[1941]: 2025-03-17T17:59:15.287996Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.166 Mar 17 17:59:15.288513 waagent[1941]: 2025-03-17T17:59:15.288455Z INFO ExtHandler Mar 17 17:59:15.288593 waagent[1941]: 
2025-03-17T17:59:15.288549Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4efdb5b1-c054-42f8-a88b-a13840c88383 eTag: 5353848745919268482 source: Fabric] Mar 17 17:59:15.288922 waagent[1941]: 2025-03-17T17:59:15.288869Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 17 17:59:15.289482 waagent[1941]: 2025-03-17T17:59:15.289423Z INFO ExtHandler Mar 17 17:59:15.289546 waagent[1941]: 2025-03-17T17:59:15.289508Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 17 17:59:15.292930 waagent[1941]: 2025-03-17T17:59:15.292892Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 17:59:15.355796 waagent[1941]: 2025-03-17T17:59:15.355670Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EEAD80345617B3827A146A3C8D0A04E78DE1B06F', 'hasPrivateKey': True} Mar 17 17:59:15.356317 waagent[1941]: 2025-03-17T17:59:15.356258Z INFO ExtHandler Fetch goal state completed Mar 17 17:59:15.368735 waagent[1941]: 2025-03-17T17:59:15.368671Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1941 Mar 17 17:59:15.368907 waagent[1941]: 2025-03-17T17:59:15.368855Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 17 17:59:15.370426 waagent[1941]: 2025-03-17T17:59:15.370365Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.0', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 17:59:15.370797 waagent[1941]: 2025-03-17T17:59:15.370746Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 17:59:15.416416 waagent[1941]: 2025-03-17T17:59:15.416286Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 17:59:15.416617 waagent[1941]: 2025-03-17T17:59:15.416552Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 
17:59:15.424032 waagent[1941]: 2025-03-17T17:59:15.423777Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 17 17:59:15.430708 systemd[1]: Reload requested from client PID 1954 ('systemctl') (unit waagent.service)... Mar 17 17:59:15.430764 systemd[1]: Reloading... Mar 17 17:59:15.517783 zram_generator::config[1990]: No configuration found. Mar 17 17:59:15.653358 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:59:15.770270 systemd[1]: Reloading finished in 339 ms. Mar 17 17:59:15.788472 waagent[1941]: 2025-03-17T17:59:15.787955Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 17 17:59:15.797142 systemd[1]: Reload requested from client PID 2050 ('systemctl') (unit waagent.service)... Mar 17 17:59:15.797158 systemd[1]: Reloading... Mar 17 17:59:15.866758 zram_generator::config[2085]: No configuration found. Mar 17 17:59:16.012040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:59:16.128922 systemd[1]: Reloading finished in 331 ms. Mar 17 17:59:16.148087 waagent[1941]: 2025-03-17T17:59:16.147004Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 17 17:59:16.148087 waagent[1941]: 2025-03-17T17:59:16.147215Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 17 17:59:16.617364 waagent[1941]: 2025-03-17T17:59:16.617262Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Mar 17 17:59:16.618173 waagent[1941]: 2025-03-17T17:59:16.618090Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 17 17:59:16.619076 waagent[1941]: 2025-03-17T17:59:16.619010Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 17 17:59:16.620173 waagent[1941]: 2025-03-17T17:59:16.620097Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:59:16.620251 waagent[1941]: 2025-03-17T17:59:16.620191Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 17 17:59:16.620672 waagent[1941]: 2025-03-17T17:59:16.620509Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:59:16.620672 waagent[1941]: 2025-03-17T17:59:16.620600Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 17:59:16.620830 waagent[1941]: 2025-03-17T17:59:16.620706Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 17:59:16.621279 waagent[1941]: 2025-03-17T17:59:16.621226Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:59:16.621446 waagent[1941]: 2025-03-17T17:59:16.621375Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:59:16.621928 waagent[1941]: 2025-03-17T17:59:16.621859Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 17 17:59:16.622404 waagent[1941]: 2025-03-17T17:59:16.622316Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 17:59:16.622513 waagent[1941]: 2025-03-17T17:59:16.622435Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 17 17:59:16.622746 waagent[1941]: 2025-03-17T17:59:16.622670Z INFO EnvHandler ExtHandler Configure routes Mar 17 17:59:16.623342 waagent[1941]: 2025-03-17T17:59:16.623265Z INFO EnvHandler ExtHandler Gateway:None Mar 17 17:59:16.623709 waagent[1941]: 2025-03-17T17:59:16.623638Z INFO EnvHandler ExtHandler Routes:None Mar 17 17:59:16.624188 waagent[1941]: 2025-03-17T17:59:16.624130Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 17:59:16.624188 waagent[1941]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 17:59:16.624188 waagent[1941]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 17:59:16.624188 waagent[1941]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 17:59:16.624188 waagent[1941]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:59:16.624188 waagent[1941]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:59:16.624188 waagent[1941]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:59:16.624840 waagent[1941]: 2025-03-17T17:59:16.624743Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 17:59:16.631536 waagent[1941]: 2025-03-17T17:59:16.631492Z INFO ExtHandler ExtHandler Mar 17 17:59:16.631620 waagent[1941]: 2025-03-17T17:59:16.631583Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 12a99787-789d-4289-97bc-45d6ee71f297 correlation b71533df-eeed-45a7-9b02-fa41d17ed4fd created: 2025-03-17T17:58:00.953629Z] Mar 17 17:59:16.632007 waagent[1941]: 2025-03-17T17:59:16.631957Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 17 17:59:16.632507 waagent[1941]: 2025-03-17T17:59:16.632461Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Mar 17 17:59:16.670342 waagent[1941]: 2025-03-17T17:59:16.670274Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6B79C559-84DF-4964-97CE-49AD98DBBD7B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 17 17:59:16.744177 waagent[1941]: 2025-03-17T17:59:16.743592Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 17:59:16.744177 waagent[1941]: Executing ['ip', '-a', '-o', 'link']: Mar 17 17:59:16.744177 waagent[1941]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 17:59:16.744177 waagent[1941]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:f1:2c:e2 brd ff:ff:ff:ff:ff:ff Mar 17 17:59:16.744177 waagent[1941]: 3: enP16961s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:f1:2c:e2 brd ff:ff:ff:ff:ff:ff\ altname enP16961p0s2 Mar 17 17:59:16.744177 waagent[1941]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 17:59:16.744177 waagent[1941]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 17:59:16.744177 waagent[1941]: 2: eth0 inet 10.200.4.30/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 17 17:59:16.744177 waagent[1941]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 17:59:16.744177 waagent[1941]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 17 17:59:16.744177 waagent[1941]: 2: eth0 inet6 fe80::6245:bdff:fef1:2ce2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 17 
17:59:16.744177 waagent[1941]: 3: enP16961s1 inet6 fe80::6245:bdff:fef1:2ce2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 17 17:59:16.764055 waagent[1941]: 2025-03-17T17:59:16.763979Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Mar 17 17:59:16.764055 waagent[1941]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:59:16.764055 waagent[1941]: pkts bytes target prot opt in out source destination Mar 17 17:59:16.764055 waagent[1941]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:59:16.764055 waagent[1941]: pkts bytes target prot opt in out source destination Mar 17 17:59:16.764055 waagent[1941]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:59:16.764055 waagent[1941]: pkts bytes target prot opt in out source destination Mar 17 17:59:16.764055 waagent[1941]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 17:59:16.764055 waagent[1941]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 17:59:16.764055 waagent[1941]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 17:59:16.768813 waagent[1941]: 2025-03-17T17:59:16.768760Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 17 17:59:16.768813 waagent[1941]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:59:16.768813 waagent[1941]: pkts bytes target prot opt in out source destination Mar 17 17:59:16.768813 waagent[1941]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:59:16.768813 waagent[1941]: pkts bytes target prot opt in out source destination Mar 17 17:59:16.768813 waagent[1941]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:59:16.768813 waagent[1941]: pkts bytes target prot opt in out source destination Mar 17 17:59:16.768813 waagent[1941]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 17:59:16.768813 waagent[1941]: 3 363 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 
17:59:16.768813 waagent[1941]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 17:59:16.769261 waagent[1941]: 2025-03-17T17:59:16.769066Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 17 17:59:22.745156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:59:22.756983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:22.857672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:59:22.862050 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:59:23.433507 kubelet[2185]: E0317 17:59:23.433447 2185 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:59:23.437202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:59:23.437416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:59:23.437861 systemd[1]: kubelet.service: Consumed 131ms CPU time, 97.5M memory peak. Mar 17 17:59:33.495305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:59:33.506945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:33.597333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:59:33.608046 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:59:34.234865 kubelet[2200]: E0317 17:59:34.234804 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:59:34.237463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:59:34.237667 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:59:34.238112 systemd[1]: kubelet.service: Consumed 130ms CPU time, 95.7M memory peak. Mar 17 17:59:34.306369 chronyd[1743]: Selected source PHC0 Mar 17 17:59:44.245263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:59:44.257958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:44.358031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:59:44.371086 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:59:44.405492 kubelet[2215]: E0317 17:59:44.405438 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:59:44.407732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:59:44.407927 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 17:59:44.408332 systemd[1]: kubelet.service: Consumed 125ms CPU time, 95.5M memory peak.
Mar 17 17:59:53.603541 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Mar 17 17:59:54.495266 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 17:59:54.507976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:59:54.596112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:59:54.600424 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:59:55.215831 kubelet[2231]: E0317 17:59:55.215764 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:59:55.218497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:59:55.218747 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:59:55.219245 systemd[1]: kubelet.service: Consumed 130ms CPU time, 95.1M memory peak.
Mar 17 17:59:55.626955 update_engine[1712]: I20250317 17:59:55.626837 1712 update_attempter.cc:509] Updating boot flags...
Mar 17 17:59:55.678046 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2253)
Mar 17 18:00:05.245131 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 17 18:00:05.253282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:05.367823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:05.372225 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:00:05.407311 kubelet[2309]: E0317 18:00:05.407258 2309 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:00:05.409773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:00:05.409983 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:00:05.410375 systemd[1]: kubelet.service: Consumed 128ms CPU time, 99.5M memory peak.
Mar 17 18:00:07.932131 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 18:00:07.937010 systemd[1]: Started sshd@0-10.200.4.30:22-10.200.16.10:43126.service - OpenSSH per-connection server daemon (10.200.16.10:43126).
Mar 17 18:00:08.692401 sshd[2318]: Accepted publickey for core from 10.200.16.10 port 43126 ssh2: RSA SHA256:sqIb/6ECvJ+NcGlXWH+RGh9zoFex64JsYEL5FCi862w
Mar 17 18:00:08.694176 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:00:08.700372 systemd-logind[1710]: New session 3 of user core.
Mar 17 18:00:08.709881 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 17 18:00:09.218052 systemd[1]: Started sshd@1-10.200.4.30:22-10.200.16.10:58764.service - OpenSSH per-connection server daemon (10.200.16.10:58764).
Mar 17 18:00:09.808086 sshd[2323]: Accepted publickey for core from 10.200.16.10 port 58764 ssh2: RSA SHA256:sqIb/6ECvJ+NcGlXWH+RGh9zoFex64JsYEL5FCi862w
Mar 17 18:00:09.809812 sshd-session[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:00:09.815868 systemd-logind[1710]: New session 4 of user core.
Mar 17 18:00:09.822899 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 17 18:00:10.245799 sshd[2325]: Connection closed by 10.200.16.10 port 58764
Mar 17 18:00:10.246599 sshd-session[2323]: pam_unix(sshd:session): session closed for user core
Mar 17 18:00:10.250963 systemd[1]: sshd@1-10.200.4.30:22-10.200.16.10:58764.service: Deactivated successfully.
Mar 17 18:00:10.253140 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:00:10.254214 systemd-logind[1710]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:00:10.255345 systemd-logind[1710]: Removed session 4.
Mar 17 18:00:10.354032 systemd[1]: Started sshd@2-10.200.4.30:22-10.200.16.10:58780.service - OpenSSH per-connection server daemon (10.200.16.10:58780).
Mar 17 18:00:10.942089 sshd[2331]: Accepted publickey for core from 10.200.16.10 port 58780 ssh2: RSA SHA256:sqIb/6ECvJ+NcGlXWH+RGh9zoFex64JsYEL5FCi862w
Mar 17 18:00:10.943702 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:00:10.949553 systemd-logind[1710]: New session 5 of user core.
Mar 17 18:00:10.959901 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 18:00:11.356038 sshd[2333]: Connection closed by 10.200.16.10 port 58780
Mar 17 18:00:11.356859 sshd-session[2331]: pam_unix(sshd:session): session closed for user core
Mar 17 18:00:11.361034 systemd[1]: sshd@2-10.200.4.30:22-10.200.16.10:58780.service: Deactivated successfully.
Mar 17 18:00:11.362849 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:00:11.363571 systemd-logind[1710]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:00:11.364601 systemd-logind[1710]: Removed session 5.
Mar 17 18:00:11.465026 systemd[1]: Started sshd@3-10.200.4.30:22-10.200.16.10:58784.service - OpenSSH per-connection server daemon (10.200.16.10:58784).
Mar 17 18:00:12.051080 sshd[2339]: Accepted publickey for core from 10.200.16.10 port 58784 ssh2: RSA SHA256:sqIb/6ECvJ+NcGlXWH+RGh9zoFex64JsYEL5FCi862w
Mar 17 18:00:12.052689 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:00:12.057852 systemd-logind[1710]: New session 6 of user core.
Mar 17 18:00:12.062902 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 18:00:12.468571 sshd[2341]: Connection closed by 10.200.16.10 port 58784
Mar 17 18:00:12.469642 sshd-session[2339]: pam_unix(sshd:session): session closed for user core
Mar 17 18:00:12.473740 systemd[1]: sshd@3-10.200.4.30:22-10.200.16.10:58784.service: Deactivated successfully.
Mar 17 18:00:12.475516 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:00:12.476233 systemd-logind[1710]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:00:12.477102 systemd-logind[1710]: Removed session 6.
Mar 17 18:00:12.577063 systemd[1]: Started sshd@4-10.200.4.30:22-10.200.16.10:58788.service - OpenSSH per-connection server daemon (10.200.16.10:58788).
Mar 17 18:00:13.164269 sshd[2347]: Accepted publickey for core from 10.200.16.10 port 58788 ssh2: RSA SHA256:sqIb/6ECvJ+NcGlXWH+RGh9zoFex64JsYEL5FCi862w
Mar 17 18:00:13.165966 sshd-session[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:00:13.170377 systemd-logind[1710]: New session 7 of user core.
Mar 17 18:00:13.180872 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 18:00:13.661673 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 18:00:13.662056 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 18:00:13.673056 sudo[2350]: pam_unix(sudo:session): session closed for user root
Mar 17 18:00:13.766360 sshd[2349]: Connection closed by 10.200.16.10 port 58788
Mar 17 18:00:13.767470 sshd-session[2347]: pam_unix(sshd:session): session closed for user core
Mar 17 18:00:13.770799 systemd[1]: sshd@4-10.200.4.30:22-10.200.16.10:58788.service: Deactivated successfully.
Mar 17 18:00:13.772802 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:00:13.774344 systemd-logind[1710]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:00:13.775292 systemd-logind[1710]: Removed session 7.
Mar 17 18:00:13.882042 systemd[1]: Started sshd@5-10.200.4.30:22-10.200.16.10:58804.service - OpenSSH per-connection server daemon (10.200.16.10:58804).
Mar 17 18:00:14.469580 sshd[2356]: Accepted publickey for core from 10.200.16.10 port 58804 ssh2: RSA SHA256:sqIb/6ECvJ+NcGlXWH+RGh9zoFex64JsYEL5FCi862w
Mar 17 18:00:14.471330 sshd-session[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:00:14.475824 systemd-logind[1710]: New session 8 of user core.
Mar 17 18:00:14.486869 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 18:00:14.794592 sudo[2360]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 18:00:14.794962 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 18:00:14.798207 sudo[2360]: pam_unix(sudo:session): session closed for user root
Mar 17 18:00:14.803046 sudo[2359]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 18:00:14.803381 sudo[2359]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 18:00:14.816120 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 18:00:14.842488 augenrules[2382]: No rules
Mar 17 18:00:14.843910 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 18:00:14.844172 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 18:00:14.845496 sudo[2359]: pam_unix(sudo:session): session closed for user root
Mar 17 18:00:14.941107 sshd[2358]: Connection closed by 10.200.16.10 port 58804
Mar 17 18:00:14.941936 sshd-session[2356]: pam_unix(sshd:session): session closed for user core
Mar 17 18:00:14.945811 systemd[1]: sshd@5-10.200.4.30:22-10.200.16.10:58804.service: Deactivated successfully.
Mar 17 18:00:14.947619 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:00:14.948368 systemd-logind[1710]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:00:14.949221 systemd-logind[1710]: Removed session 8.
Mar 17 18:00:15.056058 systemd[1]: Started sshd@6-10.200.4.30:22-10.200.16.10:58810.service - OpenSSH per-connection server daemon (10.200.16.10:58810).
Mar 17 18:00:15.495166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 17 18:00:15.502005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:15.651844 sshd[2391]: Accepted publickey for core from 10.200.16.10 port 58810 ssh2: RSA SHA256:sqIb/6ECvJ+NcGlXWH+RGh9zoFex64JsYEL5FCi862w
Mar 17 18:00:15.653500 sshd-session[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:00:15.659198 systemd-logind[1710]: New session 9 of user core.
Mar 17 18:00:15.664918 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 18:00:15.845651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:15.850100 (kubelet)[2402]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:00:15.886347 kubelet[2402]: E0317 18:00:15.886241 2402 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:00:15.888648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:00:15.888860 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:00:15.889249 systemd[1]: kubelet.service: Consumed 135ms CPU time, 95.5M memory peak.
Mar 17 18:00:15.979218 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:00:15.979572 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 18:00:16.658146 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:16.658376 systemd[1]: kubelet.service: Consumed 135ms CPU time, 95.5M memory peak.
Mar 17 18:00:16.665012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:16.711963 systemd[1]: Reload requested from client PID 2441 ('systemctl') (unit session-9.scope)...
Mar 17 18:00:16.711983 systemd[1]: Reloading...
Mar 17 18:00:16.831787 zram_generator::config[2487]: No configuration found.
Mar 17 18:00:16.971600 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:00:17.087460 systemd[1]: Reloading finished in 374 ms.
Mar 17 18:00:17.134289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:17.141895 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:17.144154 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:00:17.144443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:17.144486 systemd[1]: kubelet.service: Consumed 108ms CPU time, 83.5M memory peak.
Mar 17 18:00:17.154048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:17.328946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:17.340375 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 18:00:17.987358 kubelet[2559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:00:17.987358 kubelet[2559]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:00:17.987358 kubelet[2559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:00:17.987913 kubelet[2559]: I0317 18:00:17.987451 2559 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:00:18.319992 kubelet[2559]: I0317 18:00:18.319948 2559 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 17 18:00:18.319992 kubelet[2559]: I0317 18:00:18.319979 2559 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:00:18.320299 kubelet[2559]: I0317 18:00:18.320277 2559 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 17 18:00:18.344612 kubelet[2559]: I0317 18:00:18.344568 2559 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:00:18.351601 kubelet[2559]: E0317 18:00:18.351559 2559 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 18:00:18.351601 kubelet[2559]: I0317 18:00:18.351599 2559 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 18:00:18.356595 kubelet[2559]: I0317 18:00:18.356194 2559 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:00:18.357684 kubelet[2559]: I0317 18:00:18.357644 2559 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 17 18:00:18.357922 kubelet[2559]: I0317 18:00:18.357865 2559 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:00:18.358098 kubelet[2559]: I0317 18:00:18.357895 2559 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.4.30","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 18:00:18.358252 kubelet[2559]: I0317 18:00:18.358109 2559 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:00:18.358252 kubelet[2559]: I0317 18:00:18.358122 2559 container_manager_linux.go:300] "Creating device plugin manager"
Mar 17 18:00:18.358334 kubelet[2559]: I0317 18:00:18.358268 2559 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:00:18.359833 kubelet[2559]: I0317 18:00:18.359811 2559 kubelet.go:408] "Attempting to sync node with API server"
Mar 17 18:00:18.359833 kubelet[2559]: I0317 18:00:18.359837 2559 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:00:18.359947 kubelet[2559]: I0317 18:00:18.359876 2559 kubelet.go:314] "Adding apiserver pod source"
Mar 17 18:00:18.359947 kubelet[2559]: I0317 18:00:18.359894 2559 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:00:18.360165 kubelet[2559]: E0317 18:00:18.360075 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:18.360165 kubelet[2559]: E0317 18:00:18.360105 2559 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:18.364000 kubelet[2559]: I0317 18:00:18.363980 2559 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 18:00:18.365640 kubelet[2559]: I0317 18:00:18.365619 2559 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:00:18.366975 kubelet[2559]: W0317 18:00:18.365901 2559 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.4.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 17 18:00:18.366975 kubelet[2559]: E0317 18:00:18.365937 2559 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.200.4.30\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 17 18:00:18.366975 kubelet[2559]: W0317 18:00:18.366154 2559 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 17 18:00:18.366975 kubelet[2559]: E0317 18:00:18.366180 2559 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 17 18:00:18.366975 kubelet[2559]: W0317 18:00:18.366160 2559 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 18:00:18.366975 kubelet[2559]: I0317 18:00:18.366876 2559 server.go:1269] "Started kubelet"
Mar 17 18:00:18.368978 kubelet[2559]: I0317 18:00:18.368743 2559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:00:18.373986 kubelet[2559]: I0317 18:00:18.373959 2559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 18:00:18.374215 kubelet[2559]: I0317 18:00:18.374177 2559 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:00:18.375569 kubelet[2559]: I0317 18:00:18.375548 2559 server.go:460] "Adding debug handlers to kubelet server"
Mar 17 18:00:18.376330 kubelet[2559]: I0317 18:00:18.376309 2559 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 17 18:00:18.376716 kubelet[2559]: E0317 18:00:18.376691 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:18.379574 kubelet[2559]: I0317 18:00:18.379548 2559 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 17 18:00:18.379648 kubelet[2559]: I0317 18:00:18.379625 2559 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:00:18.381233 kubelet[2559]: I0317 18:00:18.381174 2559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:00:18.381545 kubelet[2559]: I0317 18:00:18.381529 2559 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:00:18.388860 kubelet[2559]: I0317 18:00:18.388837 2559 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:00:18.389010 kubelet[2559]: I0317 18:00:18.388997 2559 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:00:18.389186 kubelet[2559]: I0317 18:00:18.389164 2559 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:00:18.399660 kubelet[2559]: E0317 18:00:18.399610 2559 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.4.30\" not found" node="10.200.4.30"
Mar 17 18:00:18.415775 kubelet[2559]: E0317 18:00:18.415736 2559 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:00:18.418469 kubelet[2559]: I0317 18:00:18.418433 2559 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:00:18.418469 kubelet[2559]: I0317 18:00:18.418451 2559 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:00:18.418647 kubelet[2559]: I0317 18:00:18.418483 2559 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:00:18.424003 kubelet[2559]: I0317 18:00:18.423975 2559 policy_none.go:49] "None policy: Start"
Mar 17 18:00:18.424644 kubelet[2559]: I0317 18:00:18.424621 2559 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:00:18.424818 kubelet[2559]: I0317 18:00:18.424650 2559 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:00:18.432712 kubelet[2559]: I0317 18:00:18.431711 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:00:18.434162 kubelet[2559]: I0317 18:00:18.434140 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:00:18.434298 kubelet[2559]: I0317 18:00:18.434286 2559 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:00:18.434379 kubelet[2559]: I0317 18:00:18.434369 2559 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 17 18:00:18.434541 kubelet[2559]: E0317 18:00:18.434521 2559 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:00:18.443461 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 18:00:18.456655 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 18:00:18.459604 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 18:00:18.467761 kubelet[2559]: I0317 18:00:18.467376 2559 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:00:18.467761 kubelet[2559]: I0317 18:00:18.467544 2559 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 18:00:18.467761 kubelet[2559]: I0317 18:00:18.467555 2559 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:00:18.468780 kubelet[2559]: I0317 18:00:18.468763 2559 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:00:18.469594 kubelet[2559]: E0317 18:00:18.469524 2559 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.4.30\" not found"
Mar 17 18:00:18.568960 kubelet[2559]: I0317 18:00:18.568633 2559 kubelet_node_status.go:72] "Attempting to register node" node="10.200.4.30"
Mar 17 18:00:18.575177 kubelet[2559]: I0317 18:00:18.575042 2559 kubelet_node_status.go:75] "Successfully registered node" node="10.200.4.30"
Mar 17 18:00:18.575177 kubelet[2559]: E0317 18:00:18.575078 2559 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.200.4.30\": node \"10.200.4.30\" not found"
Mar 17 18:00:18.590563 kubelet[2559]: E0317 18:00:18.590530 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:18.691624 kubelet[2559]: E0317 18:00:18.691566 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:18.792383 kubelet[2559]: E0317 18:00:18.792323 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:18.862409 sudo[2409]: pam_unix(sudo:session): session closed for user root
Mar 17 18:00:18.893095 kubelet[2559]: E0317 18:00:18.893044 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:18.974516 sshd[2396]: Connection closed by 10.200.16.10 port 58810
Mar 17 18:00:18.975351 sshd-session[2391]: pam_unix(sshd:session): session closed for user core
Mar 17 18:00:18.978806 systemd[1]: sshd@6-10.200.4.30:22-10.200.16.10:58810.service: Deactivated successfully.
Mar 17 18:00:18.981271 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:00:18.981501 systemd[1]: session-9.scope: Consumed 430ms CPU time, 77.3M memory peak.
Mar 17 18:00:18.983853 systemd-logind[1710]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:00:18.984947 systemd-logind[1710]: Removed session 9.
Mar 17 18:00:18.993581 kubelet[2559]: E0317 18:00:18.993550 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:19.094376 kubelet[2559]: E0317 18:00:19.094314 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:19.195270 kubelet[2559]: E0317 18:00:19.195109 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:19.295932 kubelet[2559]: E0317 18:00:19.295876 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:19.322228 kubelet[2559]: I0317 18:00:19.322169 2559 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 17 18:00:19.322481 kubelet[2559]: W0317 18:00:19.322424 2559 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Mar 17 18:00:19.322599 kubelet[2559]: W0317 18:00:19.322584 2559 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Mar 17 18:00:19.360843 kubelet[2559]: E0317 18:00:19.360783 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:19.396086 kubelet[2559]: E0317 18:00:19.396028 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:19.496560 kubelet[2559]: E0317 18:00:19.496521 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:19.597303 kubelet[2559]: E0317 18:00:19.597240 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:19.698094 kubelet[2559]: E0317 18:00:19.698034 2559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.4.30\" not found"
Mar 17 18:00:19.800000 kubelet[2559]: I0317 18:00:19.799858 2559 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Mar 17 18:00:19.800545 containerd[1736]: time="2025-03-17T18:00:19.800452863Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:00:19.801307 kubelet[2559]: I0317 18:00:19.800795 2559 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Mar 17 18:00:20.361642 kubelet[2559]: I0317 18:00:20.361562 2559 apiserver.go:52] "Watching apiserver"
Mar 17 18:00:20.362193 kubelet[2559]: E0317 18:00:20.361556 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:20.367100 kubelet[2559]: E0317 18:00:20.366260 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298"
Mar 17 18:00:20.378685 systemd[1]: Created slice kubepods-besteffort-podb770eb4a_1d6c_4ba6_bfd9_e270f95c5dca.slice - libcontainer container kubepods-besteffort-podb770eb4a_1d6c_4ba6_bfd9_e270f95c5dca.slice.
Mar 17 18:00:20.381942 kubelet[2559]: I0317 18:00:20.381919 2559 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 18:00:20.390973 kubelet[2559]: I0317 18:00:20.390523 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a9dadf48-23d5-4e7c-a2b0-a9c1e1747298-socket-dir\") pod \"csi-node-driver-fk9nq\" (UID: \"a9dadf48-23d5-4e7c-a2b0-a9c1e1747298\") " pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:20.390973 kubelet[2559]: I0317 18:00:20.390571 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/09032f3f-cbec-4fc9-be73-876191c0a07b-policysync\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f"
Mar 17 18:00:20.390973 kubelet[2559]: I0317 18:00:20.390599 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/09032f3f-cbec-4fc9-be73-876191c0a07b-var-lib-calico\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f"
Mar 17 18:00:20.390973 kubelet[2559]: I0317 18:00:20.390624 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/09032f3f-cbec-4fc9-be73-876191c0a07b-cni-log-dir\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f"
Mar 17 18:00:20.390973 kubelet[2559]: I0317 18:00:20.390667 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5swr\" (UniqueName: \"kubernetes.io/projected/09032f3f-cbec-4fc9-be73-876191c0a07b-kube-api-access-t5swr\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f"
Mar 17 18:00:20.391254 kubelet[2559]: I0317 18:00:20.390692 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a9dadf48-23d5-4e7c-a2b0-a9c1e1747298-varrun\") pod \"csi-node-driver-fk9nq\" (UID: \"a9dadf48-23d5-4e7c-a2b0-a9c1e1747298\") " pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:20.391254 kubelet[2559]: I0317 18:00:20.390747 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a9dadf48-23d5-4e7c-a2b0-a9c1e1747298-registration-dir\") pod \"csi-node-driver-fk9nq\" (UID: \"a9dadf48-23d5-4e7c-a2b0-a9c1e1747298\") " pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:20.391254 kubelet[2559]: I0317 18:00:20.390773 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx5ls\" (UniqueName: \"kubernetes.io/projected/a9dadf48-23d5-4e7c-a2b0-a9c1e1747298-kube-api-access-hx5ls\") pod \"csi-node-driver-fk9nq\" (UID: \"a9dadf48-23d5-4e7c-a2b0-a9c1e1747298\") " pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:20.391254 kubelet[2559]: I0317 18:00:20.390797 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b770eb4a-1d6c-4ba6-bfd9-e270f95c5dca-kube-proxy\") pod \"kube-proxy-pp4zt\" (UID: \"b770eb4a-1d6c-4ba6-bfd9-e270f95c5dca\") " pod="kube-system/kube-proxy-pp4zt"
Mar 17 18:00:20.391254 kubelet[2559]: I0317 18:00:20.390836 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09032f3f-cbec-4fc9-be73-876191c0a07b-lib-modules\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f"
Mar 17 18:00:20.391443 kubelet[2559]: I0317 18:00:20.390858 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/09032f3f-cbec-4fc9-be73-876191c0a07b-node-certs\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f"
Mar 17 18:00:20.391443 kubelet[2559]: I0317 18:00:20.390883 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/09032f3f-cbec-4fc9-be73-876191c0a07b-cni-net-dir\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f"
Mar 17 18:00:20.391443 kubelet[2559]: I0317 18:00:20.390919 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b770eb4a-1d6c-4ba6-bfd9-e270f95c5dca-xtables-lock\") pod \"kube-proxy-pp4zt\" (UID: \"b770eb4a-1d6c-4ba6-bfd9-e270f95c5dca\") " pod="kube-system/kube-proxy-pp4zt"
Mar 17 18:00:20.391443 kubelet[2559]: I0317 18:00:20.390940 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b770eb4a-1d6c-4ba6-bfd9-e270f95c5dca-lib-modules\") pod \"kube-proxy-pp4zt\" (UID: \"b770eb4a-1d6c-4ba6-bfd9-e270f95c5dca\") " pod="kube-system/kube-proxy-pp4zt"
Mar 17 18:00:20.391443 kubelet[2559]: I0317 18:00:20.390995 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9dadf48-23d5-4e7c-a2b0-a9c1e1747298-kubelet-dir\") pod \"csi-node-driver-fk9nq\" (UID: \"a9dadf48-23d5-4e7c-a2b0-a9c1e1747298\") " pod="calico-system/csi-node-driver-fk9nq"
Mar 17
18:00:20.391635 kubelet[2559]: I0317 18:00:20.391019 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dsf2\" (UniqueName: \"kubernetes.io/projected/b770eb4a-1d6c-4ba6-bfd9-e270f95c5dca-kube-api-access-6dsf2\") pod \"kube-proxy-pp4zt\" (UID: \"b770eb4a-1d6c-4ba6-bfd9-e270f95c5dca\") " pod="kube-system/kube-proxy-pp4zt" Mar 17 18:00:20.391635 kubelet[2559]: I0317 18:00:20.391038 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09032f3f-cbec-4fc9-be73-876191c0a07b-tigera-ca-bundle\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f" Mar 17 18:00:20.391635 kubelet[2559]: I0317 18:00:20.391071 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/09032f3f-cbec-4fc9-be73-876191c0a07b-cni-bin-dir\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f" Mar 17 18:00:20.391635 kubelet[2559]: I0317 18:00:20.391092 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/09032f3f-cbec-4fc9-be73-876191c0a07b-flexvol-driver-host\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f" Mar 17 18:00:20.391635 kubelet[2559]: I0317 18:00:20.391115 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09032f3f-cbec-4fc9-be73-876191c0a07b-xtables-lock\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f" Mar 17 18:00:20.391859 kubelet[2559]: I0317 
18:00:20.391148 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/09032f3f-cbec-4fc9-be73-876191c0a07b-var-run-calico\") pod \"calico-node-97t4f\" (UID: \"09032f3f-cbec-4fc9-be73-876191c0a07b\") " pod="calico-system/calico-node-97t4f" Mar 17 18:00:20.394030 systemd[1]: Created slice kubepods-besteffort-pod09032f3f_cbec_4fc9_be73_876191c0a07b.slice - libcontainer container kubepods-besteffort-pod09032f3f_cbec_4fc9_be73_876191c0a07b.slice. Mar 17 18:00:20.501442 kubelet[2559]: E0317 18:00:20.501406 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:20.505741 kubelet[2559]: W0317 18:00:20.501787 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:20.505741 kubelet[2559]: E0317 18:00:20.501833 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:20.519550 kubelet[2559]: E0317 18:00:20.519523 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:20.520202 kubelet[2559]: W0317 18:00:20.519694 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:20.520202 kubelet[2559]: E0317 18:00:20.519747 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:20.522748 kubelet[2559]: E0317 18:00:20.520825 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:20.522865 kubelet[2559]: W0317 18:00:20.522848 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:20.522946 kubelet[2559]: E0317 18:00:20.522933 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:20.523302 kubelet[2559]: E0317 18:00:20.523287 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:20.523408 kubelet[2559]: W0317 18:00:20.523396 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:20.525756 kubelet[2559]: E0317 18:00:20.524762 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:20.693684 containerd[1736]: time="2025-03-17T18:00:20.693540915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pp4zt,Uid:b770eb4a-1d6c-4ba6-bfd9-e270f95c5dca,Namespace:kube-system,Attempt:0,}" Mar 17 18:00:20.698360 containerd[1736]: time="2025-03-17T18:00:20.698286822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-97t4f,Uid:09032f3f-cbec-4fc9-be73-876191c0a07b,Namespace:calico-system,Attempt:0,}" Mar 17 18:00:21.362784 kubelet[2559]: E0317 18:00:21.362742 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:21.435845 kubelet[2559]: E0317 18:00:21.435782 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298" Mar 17 18:00:21.649211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385015155.mount: Deactivated successfully. 
Mar 17 18:00:21.673573 containerd[1736]: time="2025-03-17T18:00:21.673511618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:00:21.680589 containerd[1736]: time="2025-03-17T18:00:21.680547276Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 17 18:00:21.682673 containerd[1736]: time="2025-03-17T18:00:21.682629523Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:00:21.686519 containerd[1736]: time="2025-03-17T18:00:21.686475309Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:00:21.688399 containerd[1736]: time="2025-03-17T18:00:21.688361052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 18:00:21.692767 containerd[1736]: time="2025-03-17T18:00:21.692703349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:00:21.694105 containerd[1736]: time="2025-03-17T18:00:21.693537768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 999.85015ms" Mar 17 18:00:21.698632 containerd[1736]: 
time="2025-03-17T18:00:21.698597481Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.000178056s" Mar 17 18:00:22.331683 containerd[1736]: time="2025-03-17T18:00:22.331482291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:00:22.331683 containerd[1736]: time="2025-03-17T18:00:22.331536093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:00:22.331683 containerd[1736]: time="2025-03-17T18:00:22.331551393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:22.331683 containerd[1736]: time="2025-03-17T18:00:22.331633295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:22.336235 containerd[1736]: time="2025-03-17T18:00:22.336055094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:00:22.336235 containerd[1736]: time="2025-03-17T18:00:22.336170797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:00:22.336580 containerd[1736]: time="2025-03-17T18:00:22.336216398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:22.338137 containerd[1736]: time="2025-03-17T18:00:22.336780410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:22.363271 kubelet[2559]: E0317 18:00:22.363232 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:22.755907 systemd[1]: Started cri-containerd-453c29476e3341dec52d27ed58dc754cb7ffa4faf9a503effaa2ab7a6d806a40.scope - libcontainer container 453c29476e3341dec52d27ed58dc754cb7ffa4faf9a503effaa2ab7a6d806a40. Mar 17 18:00:22.757844 systemd[1]: Started cri-containerd-7fb9664f15aba3dd7da08c2b3c854829fd215fd663412c0ac53fb68458cc8e31.scope - libcontainer container 7fb9664f15aba3dd7da08c2b3c854829fd215fd663412c0ac53fb68458cc8e31. Mar 17 18:00:22.792214 containerd[1736]: time="2025-03-17T18:00:22.792162251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pp4zt,Uid:b770eb4a-1d6c-4ba6-bfd9-e270f95c5dca,Namespace:kube-system,Attempt:0,} returns sandbox id \"453c29476e3341dec52d27ed58dc754cb7ffa4faf9a503effaa2ab7a6d806a40\"" Mar 17 18:00:22.795887 containerd[1736]: time="2025-03-17T18:00:22.795752330Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:00:22.798843 containerd[1736]: time="2025-03-17T18:00:22.798802298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-97t4f,Uid:09032f3f-cbec-4fc9-be73-876191c0a07b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7fb9664f15aba3dd7da08c2b3c854829fd215fd663412c0ac53fb68458cc8e31\"" Mar 17 18:00:23.363910 kubelet[2559]: E0317 18:00:23.363861 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:23.435741 kubelet[2559]: E0317 18:00:23.434874 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fk9nq" 
podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298" Mar 17 18:00:24.082577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858295410.mount: Deactivated successfully. Mar 17 18:00:24.365070 kubelet[2559]: E0317 18:00:24.364941 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:24.589808 containerd[1736]: time="2025-03-17T18:00:24.589752343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:24.591785 containerd[1736]: time="2025-03-17T18:00:24.591734987Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354638" Mar 17 18:00:24.598240 containerd[1736]: time="2025-03-17T18:00:24.598178830Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:24.602263 containerd[1736]: time="2025-03-17T18:00:24.602207220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:24.603000 containerd[1736]: time="2025-03-17T18:00:24.602774532Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 1.806982001s" Mar 17 18:00:24.603000 containerd[1736]: time="2025-03-17T18:00:24.602813333Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 17 
18:00:24.604667 containerd[1736]: time="2025-03-17T18:00:24.604456170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 18:00:24.609411 containerd[1736]: time="2025-03-17T18:00:24.609256477Z" level=info msg="CreateContainer within sandbox \"453c29476e3341dec52d27ed58dc754cb7ffa4faf9a503effaa2ab7a6d806a40\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:00:24.659654 containerd[1736]: time="2025-03-17T18:00:24.659494294Z" level=info msg="CreateContainer within sandbox \"453c29476e3341dec52d27ed58dc754cb7ffa4faf9a503effaa2ab7a6d806a40\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"160f8d02c0619ae93007ec94d70ddc46aee86312fd8b57e2640a12f47431e1ce\"" Mar 17 18:00:24.660491 containerd[1736]: time="2025-03-17T18:00:24.660454816Z" level=info msg="StartContainer for \"160f8d02c0619ae93007ec94d70ddc46aee86312fd8b57e2640a12f47431e1ce\"" Mar 17 18:00:24.695900 systemd[1]: Started cri-containerd-160f8d02c0619ae93007ec94d70ddc46aee86312fd8b57e2640a12f47431e1ce.scope - libcontainer container 160f8d02c0619ae93007ec94d70ddc46aee86312fd8b57e2640a12f47431e1ce. Mar 17 18:00:24.728039 containerd[1736]: time="2025-03-17T18:00:24.727993618Z" level=info msg="StartContainer for \"160f8d02c0619ae93007ec94d70ddc46aee86312fd8b57e2640a12f47431e1ce\" returns successfully" Mar 17 18:00:24.736837 systemd[1]: run-containerd-runc-k8s.io-160f8d02c0619ae93007ec94d70ddc46aee86312fd8b57e2640a12f47431e1ce-runc.dTuLwb.mount: Deactivated successfully. 
Mar 17 18:00:25.366109 kubelet[2559]: E0317 18:00:25.366068 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:25.435083 kubelet[2559]: E0317 18:00:25.435009 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298" Mar 17 18:00:25.471820 kubelet[2559]: I0317 18:00:25.471761 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pp4zt" podStartSLOduration=5.662650417 podStartE2EDuration="7.471745165s" podCreationTimestamp="2025-03-17 18:00:18 +0000 UTC" firstStartedPulling="2025-03-17 18:00:22.794866811 +0000 UTC m=+5.451035807" lastFinishedPulling="2025-03-17 18:00:24.603961559 +0000 UTC m=+7.260130555" observedRunningTime="2025-03-17 18:00:25.471012149 +0000 UTC m=+8.127181245" watchObservedRunningTime="2025-03-17 18:00:25.471745165 +0000 UTC m=+8.127914261" Mar 17 18:00:25.518219 kubelet[2559]: E0317 18:00:25.518186 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.518219 kubelet[2559]: W0317 18:00:25.518216 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.518449 kubelet[2559]: E0317 18:00:25.518245 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:25.518496 kubelet[2559]: E0317 18:00:25.518480 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.518496 kubelet[2559]: W0317 18:00:25.518491 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.518586 kubelet[2559]: E0317 18:00:25.518510 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:25.518708 kubelet[2559]: E0317 18:00:25.518685 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.518708 kubelet[2559]: W0317 18:00:25.518701 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.518836 kubelet[2559]: E0317 18:00:25.518714 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:25.518994 kubelet[2559]: E0317 18:00:25.518970 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.518994 kubelet[2559]: W0317 18:00:25.518984 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.519192 kubelet[2559]: E0317 18:00:25.518997 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:25.519261 kubelet[2559]: E0317 18:00:25.519200 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.519261 kubelet[2559]: W0317 18:00:25.519213 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.519261 kubelet[2559]: E0317 18:00:25.519226 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:25.519504 kubelet[2559]: E0317 18:00:25.519408 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.519504 kubelet[2559]: W0317 18:00:25.519419 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.519504 kubelet[2559]: E0317 18:00:25.519431 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:25.519755 kubelet[2559]: E0317 18:00:25.519610 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.519755 kubelet[2559]: W0317 18:00:25.519620 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.519755 kubelet[2559]: E0317 18:00:25.519632 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:25.520029 kubelet[2559]: E0317 18:00:25.519823 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.520029 kubelet[2559]: W0317 18:00:25.519834 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.520029 kubelet[2559]: E0317 18:00:25.519865 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:25.520153 kubelet[2559]: E0317 18:00:25.520058 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.520153 kubelet[2559]: W0317 18:00:25.520068 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.520153 kubelet[2559]: E0317 18:00:25.520080 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:25.520283 kubelet[2559]: E0317 18:00:25.520249 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.520283 kubelet[2559]: W0317 18:00:25.520258 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.520283 kubelet[2559]: E0317 18:00:25.520269 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:25.520523 kubelet[2559]: E0317 18:00:25.520486 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.520523 kubelet[2559]: W0317 18:00:25.520522 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.520667 kubelet[2559]: E0317 18:00:25.520536 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:25.520854 kubelet[2559]: E0317 18:00:25.520835 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.520854 kubelet[2559]: W0317 18:00:25.520851 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.521449 kubelet[2559]: E0317 18:00:25.520865 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:25.521449 kubelet[2559]: E0317 18:00:25.521126 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.521449 kubelet[2559]: W0317 18:00:25.521138 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.521449 kubelet[2559]: E0317 18:00:25.521151 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:25.521449 kubelet[2559]: E0317 18:00:25.521324 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.521449 kubelet[2559]: W0317 18:00:25.521334 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.521449 kubelet[2559]: E0317 18:00:25.521347 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:25.521807 kubelet[2559]: E0317 18:00:25.521532 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.521807 kubelet[2559]: W0317 18:00:25.521542 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.521807 kubelet[2559]: E0317 18:00:25.521553 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:25.534910 kubelet[2559]: E0317 18:00:25.534890 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:25.534910 kubelet[2559]: W0317 18:00:25.534906 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:25.534981 kubelet[2559]: E0317 18:00:25.534920 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:26.367073 kubelet[2559]: E0317 18:00:26.367015 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:26.531698 kubelet[2559]: E0317 18:00:26.531507 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:26.531698 kubelet[2559]: W0317 18:00:26.531539 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:26.531698 kubelet[2559]: E0317 18:00:26.531569 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:00:26.543606 kubelet[2559]: E0317 18:00:26.543567 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:00:26.543606 kubelet[2559]: W0317 18:00:26.543579 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:00:26.543669 kubelet[2559]: E0317 18:00:26.543609 2559 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:00:27.177131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484999418.mount: Deactivated successfully. Mar 17 18:00:27.335964 containerd[1736]: time="2025-03-17T18:00:27.335904538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:27.338424 containerd[1736]: time="2025-03-17T18:00:27.338360693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6857253" Mar 17 18:00:27.342417 containerd[1736]: time="2025-03-17T18:00:27.342359682Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:27.346759 containerd[1736]: time="2025-03-17T18:00:27.346695178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:27.347465 containerd[1736]: time="2025-03-17T18:00:27.347317492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with 
image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 2.742828922s" Mar 17 18:00:27.347465 containerd[1736]: time="2025-03-17T18:00:27.347356993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 17 18:00:27.349474 containerd[1736]: time="2025-03-17T18:00:27.349445439Z" level=info msg="CreateContainer within sandbox \"7fb9664f15aba3dd7da08c2b3c854829fd215fd663412c0ac53fb68458cc8e31\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 18:00:27.367601 kubelet[2559]: E0317 18:00:27.367555 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:27.384742 containerd[1736]: time="2025-03-17T18:00:27.384695724Z" level=info msg="CreateContainer within sandbox \"7fb9664f15aba3dd7da08c2b3c854829fd215fd663412c0ac53fb68458cc8e31\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8ca128c945ce8dc09eb966b668e572f5acd2c0678e6b63e46fd7c5fb5fd8de94\"" Mar 17 18:00:27.385395 containerd[1736]: time="2025-03-17T18:00:27.385300537Z" level=info msg="StartContainer for \"8ca128c945ce8dc09eb966b668e572f5acd2c0678e6b63e46fd7c5fb5fd8de94\"" Mar 17 18:00:27.417860 systemd[1]: Started cri-containerd-8ca128c945ce8dc09eb966b668e572f5acd2c0678e6b63e46fd7c5fb5fd8de94.scope - libcontainer container 8ca128c945ce8dc09eb966b668e572f5acd2c0678e6b63e46fd7c5fb5fd8de94. 
Mar 17 18:00:27.436079 kubelet[2559]: E0317 18:00:27.435520 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298"
Mar 17 18:00:27.450261 containerd[1736]: time="2025-03-17T18:00:27.450200581Z" level=info msg="StartContainer for \"8ca128c945ce8dc09eb966b668e572f5acd2c0678e6b63e46fd7c5fb5fd8de94\" returns successfully"
Mar 17 18:00:27.458002 systemd[1]: cri-containerd-8ca128c945ce8dc09eb966b668e572f5acd2c0678e6b63e46fd7c5fb5fd8de94.scope: Deactivated successfully.
Mar 17 18:00:28.140368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ca128c945ce8dc09eb966b668e572f5acd2c0678e6b63e46fd7c5fb5fd8de94-rootfs.mount: Deactivated successfully.
Mar 17 18:00:28.368709 kubelet[2559]: E0317 18:00:28.368648 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:28.385559 containerd[1736]: time="2025-03-17T18:00:28.385487289Z" level=info msg="shim disconnected" id=8ca128c945ce8dc09eb966b668e572f5acd2c0678e6b63e46fd7c5fb5fd8de94 namespace=k8s.io
Mar 17 18:00:28.385559 containerd[1736]: time="2025-03-17T18:00:28.385552890Z" level=warning msg="cleaning up after shim disconnected" id=8ca128c945ce8dc09eb966b668e572f5acd2c0678e6b63e46fd7c5fb5fd8de94 namespace=k8s.io
Mar 17 18:00:28.385559 containerd[1736]: time="2025-03-17T18:00:28.385563091Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:00:28.469977 containerd[1736]: time="2025-03-17T18:00:28.469409456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\""
Mar 17 18:00:29.369269 kubelet[2559]: E0317 18:00:29.369213 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:29.435078 kubelet[2559]: E0317 18:00:29.434999 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298"
Mar 17 18:00:30.370197 kubelet[2559]: E0317 18:00:30.370159 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:31.370555 kubelet[2559]: E0317 18:00:31.370505 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:31.435476 kubelet[2559]: E0317 18:00:31.435258 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298"
Mar 17 18:00:32.370961 kubelet[2559]: E0317 18:00:32.370914 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:32.564184 containerd[1736]: time="2025-03-17T18:00:32.564127283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:00:32.567633 containerd[1736]: time="2025-03-17T18:00:32.567572060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477"
Mar 17 18:00:32.570815 containerd[1736]: time="2025-03-17T18:00:32.570762231Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:00:32.574849 containerd[1736]: time="2025-03-17T18:00:32.574665417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:00:32.575437 containerd[1736]: time="2025-03-17T18:00:32.575404234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 4.105944877s"
Mar 17 18:00:32.575437 containerd[1736]: time="2025-03-17T18:00:32.575434335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\""
Mar 17 18:00:32.577646 containerd[1736]: time="2025-03-17T18:00:32.577592383Z" level=info msg="CreateContainer within sandbox \"7fb9664f15aba3dd7da08c2b3c854829fd215fd663412c0ac53fb68458cc8e31\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 17 18:00:32.618408 containerd[1736]: time="2025-03-17T18:00:32.618371390Z" level=info msg="CreateContainer within sandbox \"7fb9664f15aba3dd7da08c2b3c854829fd215fd663412c0ac53fb68458cc8e31\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4840c4724883b8e5857702e74889d05c1009408ad7df6e4363b7f21b7d8d3141\""
Mar 17 18:00:32.618922 containerd[1736]: time="2025-03-17T18:00:32.618846201Z" level=info msg="StartContainer for \"4840c4724883b8e5857702e74889d05c1009408ad7df6e4363b7f21b7d8d3141\""
Mar 17 18:00:32.650863 systemd[1]: Started cri-containerd-4840c4724883b8e5857702e74889d05c1009408ad7df6e4363b7f21b7d8d3141.scope - libcontainer container 4840c4724883b8e5857702e74889d05c1009408ad7df6e4363b7f21b7d8d3141.
Mar 17 18:00:32.680565 containerd[1736]: time="2025-03-17T18:00:32.680502074Z" level=info msg="StartContainer for \"4840c4724883b8e5857702e74889d05c1009408ad7df6e4363b7f21b7d8d3141\" returns successfully"
Mar 17 18:00:33.371705 kubelet[2559]: E0317 18:00:33.371594 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:33.435481 kubelet[2559]: E0317 18:00:33.435439 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298"
Mar 17 18:00:34.044595 systemd[1]: cri-containerd-4840c4724883b8e5857702e74889d05c1009408ad7df6e4363b7f21b7d8d3141.scope: Deactivated successfully.
Mar 17 18:00:34.045208 systemd[1]: cri-containerd-4840c4724883b8e5857702e74889d05c1009408ad7df6e4363b7f21b7d8d3141.scope: Consumed 490ms CPU time, 175.6M memory peak, 154M written to disk.
Mar 17 18:00:34.067153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4840c4724883b8e5857702e74889d05c1009408ad7df6e4363b7f21b7d8d3141-rootfs.mount: Deactivated successfully.
Mar 17 18:00:34.142621 kubelet[2559]: I0317 18:00:34.142319 2559 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 17 18:00:34.614455 kubelet[2559]: E0317 18:00:34.372549 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:35.373629 kubelet[2559]: E0317 18:00:35.373548 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:35.720299 systemd[1]: Created slice kubepods-besteffort-poda9dadf48_23d5_4e7c_a2b0_a9c1e1747298.slice - libcontainer container kubepods-besteffort-poda9dadf48_23d5_4e7c_a2b0_a9c1e1747298.slice.
Mar 17 18:00:35.722811 containerd[1736]: time="2025-03-17T18:00:35.722771197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:0,}"
Mar 17 18:00:35.738346 containerd[1736]: time="2025-03-17T18:00:35.738267342Z" level=info msg="shim disconnected" id=4840c4724883b8e5857702e74889d05c1009408ad7df6e4363b7f21b7d8d3141 namespace=k8s.io
Mar 17 18:00:35.738346 containerd[1736]: time="2025-03-17T18:00:35.738322944Z" level=warning msg="cleaning up after shim disconnected" id=4840c4724883b8e5857702e74889d05c1009408ad7df6e4363b7f21b7d8d3141 namespace=k8s.io
Mar 17 18:00:35.738346 containerd[1736]: time="2025-03-17T18:00:35.738335444Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:00:35.822308 containerd[1736]: time="2025-03-17T18:00:35.822260012Z" level=error msg="Failed to destroy network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:35.825305 containerd[1736]: time="2025-03-17T18:00:35.822631920Z" level=error msg="encountered an error cleaning up failed sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:35.825305 containerd[1736]: time="2025-03-17T18:00:35.822712622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:35.825469 kubelet[2559]: E0317 18:00:35.824874 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:35.825469 kubelet[2559]: E0317 18:00:35.824964 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:35.825469 kubelet[2559]: E0317 18:00:35.824985 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:35.824496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6-shm.mount: Deactivated successfully.
Mar 17 18:00:35.825963 kubelet[2559]: E0317 18:00:35.825033 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298"
Mar 17 18:00:36.374652 kubelet[2559]: E0317 18:00:36.374585 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:36.484831 kubelet[2559]: I0317 18:00:36.484304 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6"
Mar 17 18:00:36.485621 containerd[1736]: time="2025-03-17T18:00:36.485217270Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\""
Mar 17 18:00:36.485621 containerd[1736]: time="2025-03-17T18:00:36.485463276Z" level=info msg="Ensure that sandbox 759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6 in task-service has been cleanup successfully"
Mar 17 18:00:36.485930 containerd[1736]: time="2025-03-17T18:00:36.485841984Z" level=info msg="TearDown network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" successfully"
Mar 17 18:00:36.485930 containerd[1736]: time="2025-03-17T18:00:36.485865885Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" returns successfully"
Mar 17 18:00:36.489680 containerd[1736]: time="2025-03-17T18:00:36.488539244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:1,}"
Mar 17 18:00:36.489680 containerd[1736]: time="2025-03-17T18:00:36.489483165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\""
Mar 17 18:00:36.489206 systemd[1]: run-netns-cni\x2da04497e2\x2dcea4\x2d1f54\x2def4d\x2dbb3f7218bafa.mount: Deactivated successfully.
Mar 17 18:00:36.575399 containerd[1736]: time="2025-03-17T18:00:36.575347577Z" level=error msg="Failed to destroy network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:36.575685 containerd[1736]: time="2025-03-17T18:00:36.575655084Z" level=error msg="encountered an error cleaning up failed sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:36.575779 containerd[1736]: time="2025-03-17T18:00:36.575736985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:36.576016 kubelet[2559]: E0317 18:00:36.575981 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:36.576118 kubelet[2559]: E0317 18:00:36.576047 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:36.576118 kubelet[2559]: E0317 18:00:36.576075 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:36.576210 kubelet[2559]: E0317 18:00:36.576128 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298"
Mar 17 18:00:36.766866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34-shm.mount: Deactivated successfully.
Mar 17 18:00:37.375368 kubelet[2559]: E0317 18:00:37.375301 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:37.491234 kubelet[2559]: I0317 18:00:37.491202 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34"
Mar 17 18:00:37.492115 containerd[1736]: time="2025-03-17T18:00:37.491795078Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\""
Mar 17 18:00:37.492115 containerd[1736]: time="2025-03-17T18:00:37.492002082Z" level=info msg="Ensure that sandbox 3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34 in task-service has been cleanup successfully"
Mar 17 18:00:37.493878 containerd[1736]: time="2025-03-17T18:00:37.493844723Z" level=info msg="TearDown network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" successfully"
Mar 17 18:00:37.493878 containerd[1736]: time="2025-03-17T18:00:37.493876424Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" returns successfully"
Mar 17 18:00:37.494873 containerd[1736]: time="2025-03-17T18:00:37.494844846Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\""
Mar 17 18:00:37.494964 containerd[1736]: time="2025-03-17T18:00:37.494943748Z" level=info msg="TearDown network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" successfully"
Mar 17 18:00:37.494964 containerd[1736]: time="2025-03-17T18:00:37.494958948Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" returns successfully"
Mar 17 18:00:37.495460 containerd[1736]: time="2025-03-17T18:00:37.495363857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:2,}"
Mar 17 18:00:37.495885 systemd[1]: run-netns-cni\x2d0d1b824b\x2d9e34\x2dcbf6\x2d6703\x2d609ee6b47a13.mount: Deactivated successfully.
Mar 17 18:00:37.598253 containerd[1736]: time="2025-03-17T18:00:37.598205247Z" level=error msg="Failed to destroy network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:37.598949 containerd[1736]: time="2025-03-17T18:00:37.598764259Z" level=error msg="encountered an error cleaning up failed sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:37.598949 containerd[1736]: time="2025-03-17T18:00:37.598877962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:37.599604 kubelet[2559]: E0317 18:00:37.599406 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:37.599604 kubelet[2559]: E0317 18:00:37.599496 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:37.599604 kubelet[2559]: E0317 18:00:37.599543 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:37.600183 kubelet[2559]: E0317 18:00:37.599866 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298"
Mar 17 18:00:37.601908 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a-shm.mount: Deactivated successfully.
Mar 17 18:00:38.360751 kubelet[2559]: E0317 18:00:38.360370 2559 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:38.376242 kubelet[2559]: E0317 18:00:38.376181 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:38.499642 kubelet[2559]: I0317 18:00:38.498866 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a"
Mar 17 18:00:38.500925 containerd[1736]: time="2025-03-17T18:00:38.500863711Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\""
Mar 17 18:00:38.501342 containerd[1736]: time="2025-03-17T18:00:38.501101517Z" level=info msg="Ensure that sandbox 0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a in task-service has been cleanup successfully"
Mar 17 18:00:38.503481 containerd[1736]: time="2025-03-17T18:00:38.503450773Z" level=info msg="TearDown network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" successfully"
Mar 17 18:00:38.503481 containerd[1736]: time="2025-03-17T18:00:38.503481674Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" returns successfully"
Mar 17 18:00:38.503874 systemd[1]: run-netns-cni\x2d96c428ed\x2d3a01\x2d4234\x2d5edd\x2dc98fdc7f2d88.mount: Deactivated successfully.
Mar 17 18:00:38.504530 containerd[1736]: time="2025-03-17T18:00:38.503936985Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\""
Mar 17 18:00:38.504530 containerd[1736]: time="2025-03-17T18:00:38.504022387Z" level=info msg="TearDown network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" successfully"
Mar 17 18:00:38.504530 containerd[1736]: time="2025-03-17T18:00:38.504068188Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" returns successfully"
Mar 17 18:00:38.505527 containerd[1736]: time="2025-03-17T18:00:38.504985410Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\""
Mar 17 18:00:38.505527 containerd[1736]: time="2025-03-17T18:00:38.505087412Z" level=info msg="TearDown network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" successfully"
Mar 17 18:00:38.505527 containerd[1736]: time="2025-03-17T18:00:38.505112413Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" returns successfully"
Mar 17 18:00:38.505942 containerd[1736]: time="2025-03-17T18:00:38.505911332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:3,}"
Mar 17 18:00:39.376974 kubelet[2559]: E0317 18:00:39.376921 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:00:39.617226 containerd[1736]: time="2025-03-17T18:00:39.617160310Z" level=error msg="Failed to destroy network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:39.618366 containerd[1736]: time="2025-03-17T18:00:39.618159934Z" level=error msg="encountered an error cleaning up failed sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:39.618366 containerd[1736]: time="2025-03-17T18:00:39.618253836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:39.618782 kubelet[2559]: E0317 18:00:39.618703 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:00:39.618883 kubelet[2559]: E0317 18:00:39.618795 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:39.618883 kubelet[2559]: E0317 18:00:39.618821 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq"
Mar 17 18:00:39.618960 kubelet[2559]: E0317 18:00:39.618878 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298"
Mar 17 18:00:39.621698 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036-shm.mount: Deactivated successfully.
Mar 17 18:00:39.929385 systemd[1]: Created slice kubepods-besteffort-pod695e75fe_b9ea_4ba5_8430_db659e813a92.slice - libcontainer container kubepods-besteffort-pod695e75fe_b9ea_4ba5_8430_db659e813a92.slice.
Mar 17 18:00:40.046395 kubelet[2559]: I0317 18:00:40.046304 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfcc4\" (UniqueName: \"kubernetes.io/projected/695e75fe-b9ea-4ba5-8430-db659e813a92-kube-api-access-zfcc4\") pod \"nginx-deployment-8587fbcb89-n8x4g\" (UID: \"695e75fe-b9ea-4ba5-8430-db659e813a92\") " pod="default/nginx-deployment-8587fbcb89-n8x4g" Mar 17 18:00:40.235391 containerd[1736]: time="2025-03-17T18:00:40.235270448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n8x4g,Uid:695e75fe-b9ea-4ba5-8430-db659e813a92,Namespace:default,Attempt:0,}" Mar 17 18:00:40.377858 kubelet[2559]: E0317 18:00:40.377803 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:40.379867 containerd[1736]: time="2025-03-17T18:00:40.379707616Z" level=error msg="Failed to destroy network for sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.380647 containerd[1736]: time="2025-03-17T18:00:40.380510535Z" level=error msg="encountered an error cleaning up failed sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.380647 containerd[1736]: time="2025-03-17T18:00:40.380589337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n8x4g,Uid:695e75fe-b9ea-4ba5-8430-db659e813a92,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.381594 kubelet[2559]: E0317 18:00:40.381126 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.381594 kubelet[2559]: E0317 18:00:40.381215 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-n8x4g" Mar 17 18:00:40.381594 kubelet[2559]: E0317 18:00:40.381259 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-n8x4g" Mar 17 18:00:40.381803 kubelet[2559]: E0317 18:00:40.381320 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-n8x4g_default(695e75fe-b9ea-4ba5-8430-db659e813a92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-8587fbcb89-n8x4g_default(695e75fe-b9ea-4ba5-8430-db659e813a92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-n8x4g" podUID="695e75fe-b9ea-4ba5-8430-db659e813a92" Mar 17 18:00:40.515602 kubelet[2559]: I0317 18:00:40.515536 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143" Mar 17 18:00:40.517541 containerd[1736]: time="2025-03-17T18:00:40.517500524Z" level=info msg="StopPodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\"" Mar 17 18:00:40.518092 containerd[1736]: time="2025-03-17T18:00:40.517940734Z" level=info msg="Ensure that sandbox 1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143 in task-service has been cleanup successfully" Mar 17 18:00:40.518751 containerd[1736]: time="2025-03-17T18:00:40.518655052Z" level=info msg="TearDown network for sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" successfully" Mar 17 18:00:40.518751 containerd[1736]: time="2025-03-17T18:00:40.518680552Z" level=info msg="StopPodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" returns successfully" Mar 17 18:00:40.519602 containerd[1736]: time="2025-03-17T18:00:40.519570474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n8x4g,Uid:695e75fe-b9ea-4ba5-8430-db659e813a92,Namespace:default,Attempt:1,}" Mar 17 18:00:40.520053 kubelet[2559]: I0317 18:00:40.520030 2559 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036" Mar 17 18:00:40.520985 containerd[1736]: time="2025-03-17T18:00:40.520959907Z" level=info msg="StopPodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\"" Mar 17 18:00:40.521182 containerd[1736]: time="2025-03-17T18:00:40.521158512Z" level=info msg="Ensure that sandbox 780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036 in task-service has been cleanup successfully" Mar 17 18:00:40.521334 containerd[1736]: time="2025-03-17T18:00:40.521312015Z" level=info msg="TearDown network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" successfully" Mar 17 18:00:40.521492 containerd[1736]: time="2025-03-17T18:00:40.521437218Z" level=info msg="StopPodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" returns successfully" Mar 17 18:00:40.524580 containerd[1736]: time="2025-03-17T18:00:40.523357164Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\"" Mar 17 18:00:40.524580 containerd[1736]: time="2025-03-17T18:00:40.523449167Z" level=info msg="TearDown network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" successfully" Mar 17 18:00:40.524580 containerd[1736]: time="2025-03-17T18:00:40.523462567Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" returns successfully" Mar 17 18:00:40.525557 containerd[1736]: time="2025-03-17T18:00:40.525429714Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\"" Mar 17 18:00:40.525557 containerd[1736]: time="2025-03-17T18:00:40.525522616Z" level=info msg="TearDown network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" successfully" Mar 17 18:00:40.525557 containerd[1736]: time="2025-03-17T18:00:40.525535017Z" level=info msg="StopPodSandbox 
for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" returns successfully" Mar 17 18:00:40.526792 containerd[1736]: time="2025-03-17T18:00:40.526381337Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\"" Mar 17 18:00:40.526792 containerd[1736]: time="2025-03-17T18:00:40.526468239Z" level=info msg="TearDown network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" successfully" Mar 17 18:00:40.526792 containerd[1736]: time="2025-03-17T18:00:40.526481139Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" returns successfully" Mar 17 18:00:40.528121 containerd[1736]: time="2025-03-17T18:00:40.527792671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:4,}" Mar 17 18:00:40.527945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143-shm.mount: Deactivated successfully. Mar 17 18:00:40.528079 systemd[1]: run-netns-cni\x2dbb0dd46b\x2d38c8\x2dee98\x2d04d8\x2d7475e7c36026.mount: Deactivated successfully. 
Mar 17 18:00:40.705934 containerd[1736]: time="2025-03-17T18:00:40.705877446Z" level=error msg="Failed to destroy network for sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.707319 containerd[1736]: time="2025-03-17T18:00:40.707021274Z" level=error msg="encountered an error cleaning up failed sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.707319 containerd[1736]: time="2025-03-17T18:00:40.707105476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.707500 kubelet[2559]: E0317 18:00:40.707376 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.707500 kubelet[2559]: E0317 18:00:40.707440 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq" Mar 17 18:00:40.707500 kubelet[2559]: E0317 18:00:40.707467 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq" Mar 17 18:00:40.708148 kubelet[2559]: E0317 18:00:40.707571 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298" Mar 17 18:00:40.721154 containerd[1736]: time="2025-03-17T18:00:40.721018110Z" level=error msg="Failed to destroy network for sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 
18:00:40.721910 containerd[1736]: time="2025-03-17T18:00:40.721635824Z" level=error msg="encountered an error cleaning up failed sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.721910 containerd[1736]: time="2025-03-17T18:00:40.721732527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n8x4g,Uid:695e75fe-b9ea-4ba5-8430-db659e813a92,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.722091 kubelet[2559]: E0317 18:00:40.721985 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:40.722091 kubelet[2559]: E0317 18:00:40.722042 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-n8x4g" Mar 17 18:00:40.722091 kubelet[2559]: E0317 18:00:40.722068 2559 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-n8x4g" Mar 17 18:00:40.722221 kubelet[2559]: E0317 18:00:40.722116 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-n8x4g_default(695e75fe-b9ea-4ba5-8430-db659e813a92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-n8x4g_default(695e75fe-b9ea-4ba5-8430-db659e813a92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-n8x4g" podUID="695e75fe-b9ea-4ba5-8430-db659e813a92" Mar 17 18:00:41.378828 kubelet[2559]: E0317 18:00:41.378781 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:41.528689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc-shm.mount: Deactivated successfully. Mar 17 18:00:41.529233 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf-shm.mount: Deactivated successfully. 
Mar 17 18:00:41.532001 kubelet[2559]: I0317 18:00:41.531511 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc" Mar 17 18:00:41.533398 containerd[1736]: time="2025-03-17T18:00:41.533167107Z" level=info msg="StopPodSandbox for \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\"" Mar 17 18:00:41.533910 containerd[1736]: time="2025-03-17T18:00:41.533800722Z" level=info msg="Ensure that sandbox c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc in task-service has been cleanup successfully" Mar 17 18:00:41.534591 containerd[1736]: time="2025-03-17T18:00:41.534436737Z" level=info msg="TearDown network for sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\" successfully" Mar 17 18:00:41.534591 containerd[1736]: time="2025-03-17T18:00:41.534468338Z" level=info msg="StopPodSandbox for \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\" returns successfully" Mar 17 18:00:41.537257 systemd[1]: run-netns-cni\x2d22ee9f55\x2da91c\x2d713e\x2d0033\x2d80db9f656a07.mount: Deactivated successfully. 
Mar 17 18:00:41.540164 containerd[1736]: time="2025-03-17T18:00:41.539268053Z" level=info msg="StopPodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\"" Mar 17 18:00:41.540164 containerd[1736]: time="2025-03-17T18:00:41.539360355Z" level=info msg="TearDown network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" successfully" Mar 17 18:00:41.540164 containerd[1736]: time="2025-03-17T18:00:41.539373556Z" level=info msg="StopPodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" returns successfully" Mar 17 18:00:41.540164 containerd[1736]: time="2025-03-17T18:00:41.540004471Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\"" Mar 17 18:00:41.540164 containerd[1736]: time="2025-03-17T18:00:41.540099073Z" level=info msg="TearDown network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" successfully" Mar 17 18:00:41.540164 containerd[1736]: time="2025-03-17T18:00:41.540115273Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" returns successfully" Mar 17 18:00:41.540428 kubelet[2559]: I0317 18:00:41.540307 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf" Mar 17 18:00:41.541496 containerd[1736]: time="2025-03-17T18:00:41.541472206Z" level=info msg="StopPodSandbox for \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\"" Mar 17 18:00:41.541939 containerd[1736]: time="2025-03-17T18:00:41.541915717Z" level=info msg="Ensure that sandbox c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf in task-service has been cleanup successfully" Mar 17 18:00:41.545005 containerd[1736]: time="2025-03-17T18:00:41.542241124Z" level=info msg="TearDown network for sandbox 
\"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\" successfully" Mar 17 18:00:41.545005 containerd[1736]: time="2025-03-17T18:00:41.542272425Z" level=info msg="StopPodSandbox for \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\" returns successfully" Mar 17 18:00:41.545005 containerd[1736]: time="2025-03-17T18:00:41.542368728Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\"" Mar 17 18:00:41.545005 containerd[1736]: time="2025-03-17T18:00:41.542449429Z" level=info msg="TearDown network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" successfully" Mar 17 18:00:41.545005 containerd[1736]: time="2025-03-17T18:00:41.542461530Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" returns successfully" Mar 17 18:00:41.544636 systemd[1]: run-netns-cni\x2d16edb3f9\x2d3214\x2de1b9\x2d6bc0\x2d75877b4da30c.mount: Deactivated successfully. 
Mar 17 18:00:41.548213 containerd[1736]: time="2025-03-17T18:00:41.547701156Z" level=info msg="StopPodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\"" Mar 17 18:00:41.548213 containerd[1736]: time="2025-03-17T18:00:41.547809858Z" level=info msg="TearDown network for sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" successfully" Mar 17 18:00:41.548213 containerd[1736]: time="2025-03-17T18:00:41.547832659Z" level=info msg="StopPodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" returns successfully" Mar 17 18:00:41.548213 containerd[1736]: time="2025-03-17T18:00:41.547910161Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\"" Mar 17 18:00:41.548213 containerd[1736]: time="2025-03-17T18:00:41.547981162Z" level=info msg="TearDown network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" successfully" Mar 17 18:00:41.548213 containerd[1736]: time="2025-03-17T18:00:41.547993863Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" returns successfully" Mar 17 18:00:41.549658 containerd[1736]: time="2025-03-17T18:00:41.549410697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n8x4g,Uid:695e75fe-b9ea-4ba5-8430-db659e813a92,Namespace:default,Attempt:2,}" Mar 17 18:00:41.554346 containerd[1736]: time="2025-03-17T18:00:41.549610001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:5,}" Mar 17 18:00:41.712791 containerd[1736]: time="2025-03-17T18:00:41.711317283Z" level=error msg="Failed to destroy network for sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:41.713746 containerd[1736]: time="2025-03-17T18:00:41.713610839Z" level=error msg="encountered an error cleaning up failed sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:41.713746 containerd[1736]: time="2025-03-17T18:00:41.713699641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n8x4g,Uid:695e75fe-b9ea-4ba5-8430-db659e813a92,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:41.715288 kubelet[2559]: E0317 18:00:41.714850 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:41.715288 kubelet[2559]: E0317 18:00:41.714930 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-n8x4g" Mar 
17 18:00:41.715288 kubelet[2559]: E0317 18:00:41.714962 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-n8x4g" Mar 17 18:00:41.715525 kubelet[2559]: E0317 18:00:41.715039 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-n8x4g_default(695e75fe-b9ea-4ba5-8430-db659e813a92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-n8x4g_default(695e75fe-b9ea-4ba5-8430-db659e813a92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-n8x4g" podUID="695e75fe-b9ea-4ba5-8430-db659e813a92" Mar 17 18:00:41.755018 containerd[1736]: time="2025-03-17T18:00:41.754903930Z" level=error msg="Failed to destroy network for sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:41.755316 containerd[1736]: time="2025-03-17T18:00:41.755229638Z" level=error msg="encountered an error cleaning up failed sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:41.755316 containerd[1736]: time="2025-03-17T18:00:41.755301739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:41.756404 kubelet[2559]: E0317 18:00:41.755936 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:41.756404 kubelet[2559]: E0317 18:00:41.756010 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq" Mar 17 18:00:41.756404 kubelet[2559]: E0317 18:00:41.756038 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq" Mar 17 18:00:41.756644 kubelet[2559]: E0317 18:00:41.756098 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298" Mar 17 18:00:42.380001 kubelet[2559]: E0317 18:00:42.379955 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:42.529362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c-shm.mount: Deactivated successfully. 
Mar 17 18:00:42.547751 kubelet[2559]: I0317 18:00:42.547340 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f" Mar 17 18:00:42.548890 containerd[1736]: time="2025-03-17T18:00:42.548853490Z" level=info msg="StopPodSandbox for \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\"" Mar 17 18:00:42.549097 containerd[1736]: time="2025-03-17T18:00:42.549065395Z" level=info msg="Ensure that sandbox 9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f in task-service has been cleanup successfully" Mar 17 18:00:42.551618 systemd[1]: run-netns-cni\x2d7baaa1f3\x2dc6fb\x2de3a4\x2dde21\x2d798a68516f2a.mount: Deactivated successfully. Mar 17 18:00:42.551844 containerd[1736]: time="2025-03-17T18:00:42.551774760Z" level=info msg="TearDown network for sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\" successfully" Mar 17 18:00:42.551844 containerd[1736]: time="2025-03-17T18:00:42.551798961Z" level=info msg="StopPodSandbox for \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\" returns successfully" Mar 17 18:00:42.552279 containerd[1736]: time="2025-03-17T18:00:42.552034666Z" level=info msg="StopPodSandbox for \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\"" Mar 17 18:00:42.552279 containerd[1736]: time="2025-03-17T18:00:42.552123168Z" level=info msg="TearDown network for sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\" successfully" Mar 17 18:00:42.552279 containerd[1736]: time="2025-03-17T18:00:42.552139169Z" level=info msg="StopPodSandbox for \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\" returns successfully" Mar 17 18:00:42.554441 containerd[1736]: time="2025-03-17T18:00:42.553360298Z" level=info msg="StopPodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\"" Mar 17 18:00:42.554441 containerd[1736]: 
time="2025-03-17T18:00:42.553480901Z" level=info msg="TearDown network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" successfully" Mar 17 18:00:42.554441 containerd[1736]: time="2025-03-17T18:00:42.553545403Z" level=info msg="StopPodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" returns successfully" Mar 17 18:00:42.554643 containerd[1736]: time="2025-03-17T18:00:42.554585528Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\"" Mar 17 18:00:42.554799 containerd[1736]: time="2025-03-17T18:00:42.554728631Z" level=info msg="TearDown network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" successfully" Mar 17 18:00:42.554799 containerd[1736]: time="2025-03-17T18:00:42.554746231Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" returns successfully" Mar 17 18:00:42.555482 containerd[1736]: time="2025-03-17T18:00:42.555152341Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\"" Mar 17 18:00:42.556013 containerd[1736]: time="2025-03-17T18:00:42.555348746Z" level=info msg="TearDown network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" successfully" Mar 17 18:00:42.556013 containerd[1736]: time="2025-03-17T18:00:42.555796757Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" returns successfully" Mar 17 18:00:42.557119 kubelet[2559]: I0317 18:00:42.556295 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c" Mar 17 18:00:42.557212 containerd[1736]: time="2025-03-17T18:00:42.556360570Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\"" Mar 17 18:00:42.557212 
containerd[1736]: time="2025-03-17T18:00:42.556440772Z" level=info msg="TearDown network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" successfully" Mar 17 18:00:42.557212 containerd[1736]: time="2025-03-17T18:00:42.556455072Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" returns successfully" Mar 17 18:00:42.557487 containerd[1736]: time="2025-03-17T18:00:42.557463997Z" level=info msg="StopPodSandbox for \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\"" Mar 17 18:00:42.557653 containerd[1736]: time="2025-03-17T18:00:42.557632601Z" level=info msg="Ensure that sandbox 970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c in task-service has been cleanup successfully" Mar 17 18:00:42.557881 containerd[1736]: time="2025-03-17T18:00:42.557801405Z" level=info msg="TearDown network for sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\" successfully" Mar 17 18:00:42.557881 containerd[1736]: time="2025-03-17T18:00:42.557823005Z" level=info msg="StopPodSandbox for \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\" returns successfully" Mar 17 18:00:42.559651 containerd[1736]: time="2025-03-17T18:00:42.558426620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:6,}" Mar 17 18:00:42.562409 containerd[1736]: time="2025-03-17T18:00:42.561052683Z" level=info msg="StopPodSandbox for \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\"" Mar 17 18:00:42.562409 containerd[1736]: time="2025-03-17T18:00:42.561185086Z" level=info msg="TearDown network for sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\" successfully" Mar 17 18:00:42.562409 containerd[1736]: time="2025-03-17T18:00:42.561201986Z" level=info msg="StopPodSandbox for 
\"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\" returns successfully" Mar 17 18:00:42.562409 containerd[1736]: time="2025-03-17T18:00:42.561562595Z" level=info msg="StopPodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\"" Mar 17 18:00:42.562409 containerd[1736]: time="2025-03-17T18:00:42.561639997Z" level=info msg="TearDown network for sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" successfully" Mar 17 18:00:42.562409 containerd[1736]: time="2025-03-17T18:00:42.561652997Z" level=info msg="StopPodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" returns successfully" Mar 17 18:00:42.561515 systemd[1]: run-netns-cni\x2d7a7c0bf8\x2deecb\x2d48a1\x2d7991\x2d825ad25b0479.mount: Deactivated successfully. Mar 17 18:00:42.563881 containerd[1736]: time="2025-03-17T18:00:42.563855650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n8x4g,Uid:695e75fe-b9ea-4ba5-8430-db659e813a92,Namespace:default,Attempt:3,}" Mar 17 18:00:42.587585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955581068.mount: Deactivated successfully. 
Mar 17 18:00:42.674178 containerd[1736]: time="2025-03-17T18:00:42.674051596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:42.688077 containerd[1736]: time="2025-03-17T18:00:42.687918128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 17 18:00:42.694083 containerd[1736]: time="2025-03-17T18:00:42.693889372Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:42.703745 containerd[1736]: time="2025-03-17T18:00:42.703640406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:42.704637 containerd[1736]: time="2025-03-17T18:00:42.704553328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 6.215041361s" Mar 17 18:00:42.704637 containerd[1736]: time="2025-03-17T18:00:42.704597529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 18:00:42.719595 containerd[1736]: time="2025-03-17T18:00:42.719543688Z" level=info msg="CreateContainer within sandbox \"7fb9664f15aba3dd7da08c2b3c854829fd215fd663412c0ac53fb68458cc8e31\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 18:00:42.740652 containerd[1736]: time="2025-03-17T18:00:42.740601593Z" level=error msg="Failed to 
destroy network for sandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:42.741457 containerd[1736]: time="2025-03-17T18:00:42.741411113Z" level=error msg="encountered an error cleaning up failed sandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:42.741683 containerd[1736]: time="2025-03-17T18:00:42.741637018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:42.742128 kubelet[2559]: E0317 18:00:42.742077 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:42.742236 kubelet[2559]: E0317 18:00:42.742152 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq" Mar 17 18:00:42.742236 kubelet[2559]: E0317 18:00:42.742184 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fk9nq" Mar 17 18:00:42.742328 kubelet[2559]: E0317 18:00:42.742238 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fk9nq_calico-system(a9dadf48-23d5-4e7c-a2b0-a9c1e1747298)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fk9nq" podUID="a9dadf48-23d5-4e7c-a2b0-a9c1e1747298" Mar 17 18:00:42.760791 containerd[1736]: time="2025-03-17T18:00:42.760733676Z" level=error msg="Failed to destroy network for sandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:42.761753 containerd[1736]: time="2025-03-17T18:00:42.761222188Z" level=error msg="encountered an 
error cleaning up failed sandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:42.761753 containerd[1736]: time="2025-03-17T18:00:42.761298990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n8x4g,Uid:695e75fe-b9ea-4ba5-8430-db659e813a92,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:42.761952 kubelet[2559]: E0317 18:00:42.761516 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:00:42.761952 kubelet[2559]: E0317 18:00:42.761578 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-n8x4g" Mar 17 18:00:42.761952 kubelet[2559]: E0317 18:00:42.761607 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-n8x4g" Mar 17 18:00:42.762096 kubelet[2559]: E0317 18:00:42.761658 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-n8x4g_default(695e75fe-b9ea-4ba5-8430-db659e813a92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-n8x4g_default(695e75fe-b9ea-4ba5-8430-db659e813a92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-n8x4g" podUID="695e75fe-b9ea-4ba5-8430-db659e813a92" Mar 17 18:00:42.778813 containerd[1736]: time="2025-03-17T18:00:42.778756709Z" level=info msg="CreateContainer within sandbox \"7fb9664f15aba3dd7da08c2b3c854829fd215fd663412c0ac53fb68458cc8e31\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"10996cf765f7c20d018ab758b7ff9593136c5cd04d40ff6e5e4b8846846f4fbf\"" Mar 17 18:00:42.779434 containerd[1736]: time="2025-03-17T18:00:42.779401825Z" level=info msg="StartContainer for \"10996cf765f7c20d018ab758b7ff9593136c5cd04d40ff6e5e4b8846846f4fbf\"" Mar 17 18:00:42.803892 systemd[1]: Started cri-containerd-10996cf765f7c20d018ab758b7ff9593136c5cd04d40ff6e5e4b8846846f4fbf.scope - libcontainer container 10996cf765f7c20d018ab758b7ff9593136c5cd04d40ff6e5e4b8846846f4fbf. 
Mar 17 18:00:42.838201 containerd[1736]: time="2025-03-17T18:00:42.838017532Z" level=info msg="StartContainer for \"10996cf765f7c20d018ab758b7ff9593136c5cd04d40ff6e5e4b8846846f4fbf\" returns successfully" Mar 17 18:00:43.139909 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 18:00:43.140037 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Mar 17 18:00:43.380931 kubelet[2559]: E0317 18:00:43.380853 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:43.533519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4-shm.mount: Deactivated successfully. Mar 17 18:00:43.560060 kubelet[2559]: I0317 18:00:43.559867 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad" Mar 17 18:00:43.561340 containerd[1736]: time="2025-03-17T18:00:43.561293895Z" level=info msg="StopPodSandbox for \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\"" Mar 17 18:00:43.562344 containerd[1736]: time="2025-03-17T18:00:43.562307320Z" level=info msg="Ensure that sandbox 0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad in task-service has been cleanup successfully" Mar 17 18:00:43.564903 containerd[1736]: time="2025-03-17T18:00:43.564798279Z" level=info msg="TearDown network for sandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\" successfully" Mar 17 18:00:43.564903 containerd[1736]: time="2025-03-17T18:00:43.564824880Z" level=info msg="StopPodSandbox for \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\" returns successfully" Mar 17 18:00:43.570201 containerd[1736]: time="2025-03-17T18:00:43.567247638Z" level=info msg="StopPodSandbox for 
\"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\"" Mar 17 18:00:43.570201 containerd[1736]: time="2025-03-17T18:00:43.567358841Z" level=info msg="TearDown network for sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\" successfully" Mar 17 18:00:43.570201 containerd[1736]: time="2025-03-17T18:00:43.567407142Z" level=info msg="StopPodSandbox for \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\" returns successfully" Mar 17 18:00:43.570201 containerd[1736]: time="2025-03-17T18:00:43.569160584Z" level=info msg="StopPodSandbox for \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\"" Mar 17 18:00:43.570201 containerd[1736]: time="2025-03-17T18:00:43.569260387Z" level=info msg="TearDown network for sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\" successfully" Mar 17 18:00:43.570201 containerd[1736]: time="2025-03-17T18:00:43.569282887Z" level=info msg="StopPodSandbox for \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\" returns successfully" Mar 17 18:00:43.569665 systemd[1]: run-netns-cni\x2d87a0ef6d\x2d7707\x2d5cf8\x2d5d89\x2d78cb841efd7b.mount: Deactivated successfully. 
Mar 17 18:00:43.570578 containerd[1736]: time="2025-03-17T18:00:43.570263811Z" level=info msg="StopPodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\"" Mar 17 18:00:43.570578 containerd[1736]: time="2025-03-17T18:00:43.570360913Z" level=info msg="TearDown network for sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" successfully" Mar 17 18:00:43.570578 containerd[1736]: time="2025-03-17T18:00:43.570384213Z" level=info msg="StopPodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" returns successfully" Mar 17 18:00:43.571430 containerd[1736]: time="2025-03-17T18:00:43.571178133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n8x4g,Uid:695e75fe-b9ea-4ba5-8430-db659e813a92,Namespace:default,Attempt:4,}" Mar 17 18:00:43.579416 kubelet[2559]: I0317 18:00:43.579391 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4" Mar 17 18:00:43.580403 containerd[1736]: time="2025-03-17T18:00:43.580377453Z" level=info msg="StopPodSandbox for \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\"" Mar 17 18:00:43.580606 containerd[1736]: time="2025-03-17T18:00:43.580582858Z" level=info msg="Ensure that sandbox d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4 in task-service has been cleanup successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.581259675Z" level=info msg="TearDown network for sandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\" successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.581281175Z" level=info msg="StopPodSandbox for \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\" returns successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.581560182Z" level=info msg="StopPodSandbox for 
\"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\"" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.581648384Z" level=info msg="TearDown network for sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\" successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.581661884Z" level=info msg="StopPodSandbox for \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\" returns successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.581978192Z" level=info msg="StopPodSandbox for \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\"" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.582057494Z" level=info msg="TearDown network for sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\" successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.582070494Z" level=info msg="StopPodSandbox for \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\" returns successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.582300900Z" level=info msg="StopPodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\"" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.582381902Z" level=info msg="TearDown network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.582394502Z" level=info msg="StopPodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" returns successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.582794411Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\"" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.582873713Z" level=info msg="TearDown network for sandbox 
\"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.582886814Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" returns successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.583192521Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\"" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.583281223Z" level=info msg="TearDown network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.583295523Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" returns successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.583524529Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\"" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.583605431Z" level=info msg="TearDown network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.583618331Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" returns successfully" Mar 17 18:00:43.584537 containerd[1736]: time="2025-03-17T18:00:43.584176045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:7,}" Mar 17 18:00:43.586468 systemd[1]: run-netns-cni\x2d4bacc6c3\x2dfe3e\x2d7178\x2d6d36\x2d817616604d62.mount: Deactivated successfully. 
Mar 17 18:00:43.592068 kubelet[2559]: I0317 18:00:43.591912 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-97t4f" podStartSLOduration=5.682649715 podStartE2EDuration="25.59189093s" podCreationTimestamp="2025-03-17 18:00:18 +0000 UTC" firstStartedPulling="2025-03-17 18:00:22.799992225 +0000 UTC m=+5.456161321" lastFinishedPulling="2025-03-17 18:00:42.70923344 +0000 UTC m=+25.365402536" observedRunningTime="2025-03-17 18:00:43.589860481 +0000 UTC m=+26.246029477" watchObservedRunningTime="2025-03-17 18:00:43.59189093 +0000 UTC m=+26.248060026" Mar 17 18:00:43.778335 systemd-networkd[1521]: cali0f092009356: Link UP Mar 17 18:00:43.779334 systemd-networkd[1521]: cali0f092009356: Gained carrier Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.684 [INFO][3501] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.697 [INFO][3501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0 nginx-deployment-8587fbcb89- default 695e75fe-b9ea-4ba5-8430-db659e813a92 1483 0 2025-03-17 18:00:39 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.30 nginx-deployment-8587fbcb89-n8x4g eth0 default [] [] [kns.default ksa.default.default] cali0f092009356 [] []}} ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" Namespace="default" Pod="nginx-deployment-8587fbcb89-n8x4g" WorkloadEndpoint="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.697 [INFO][3501] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" 
Namespace="default" Pod="nginx-deployment-8587fbcb89-n8x4g" WorkloadEndpoint="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.733 [INFO][3524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" HandleID="k8s-pod-network.ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" Workload="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.745 [INFO][3524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" HandleID="k8s-pod-network.ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" Workload="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031eae0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.30", "pod":"nginx-deployment-8587fbcb89-n8x4g", "timestamp":"2025-03-17 18:00:43.733150321 +0000 UTC"}, Hostname:"10.200.4.30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.745 [INFO][3524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.745 [INFO][3524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.746 [INFO][3524] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.30' Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.747 [INFO][3524] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" host="10.200.4.30" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.750 [INFO][3524] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.30" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.754 [INFO][3524] ipam/ipam.go 489: Trying affinity for 192.168.108.64/26 host="10.200.4.30" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.756 [INFO][3524] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.64/26 host="10.200.4.30" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.757 [INFO][3524] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="10.200.4.30" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.757 [INFO][3524] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" host="10.200.4.30" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.759 [INFO][3524] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584 Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.764 [INFO][3524] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" host="10.200.4.30" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.769 [INFO][3524] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.65/26] block=192.168.108.64/26 
handle="k8s-pod-network.ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" host="10.200.4.30" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.769 [INFO][3524] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.65/26] handle="k8s-pod-network.ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" host="10.200.4.30" Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.769 [INFO][3524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:00:43.790347 containerd[1736]: 2025-03-17 18:00:43.769 [INFO][3524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.65/26] IPv6=[] ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" HandleID="k8s-pod-network.ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" Workload="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0" Mar 17 18:00:43.792700 containerd[1736]: 2025-03-17 18:00:43.771 [INFO][3501] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" Namespace="default" Pod="nginx-deployment-8587fbcb89-n8x4g" WorkloadEndpoint="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"695e75fe-b9ea-4ba5-8430-db659e813a92", ResourceVersion:"1483", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 0, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.30", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-n8x4g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.108.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0f092009356", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:00:43.792700 containerd[1736]: 2025-03-17 18:00:43.771 [INFO][3501] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.65/32] ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" Namespace="default" Pod="nginx-deployment-8587fbcb89-n8x4g" WorkloadEndpoint="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0" Mar 17 18:00:43.792700 containerd[1736]: 2025-03-17 18:00:43.771 [INFO][3501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f092009356 ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" Namespace="default" Pod="nginx-deployment-8587fbcb89-n8x4g" WorkloadEndpoint="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0" Mar 17 18:00:43.792700 containerd[1736]: 2025-03-17 18:00:43.779 [INFO][3501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" Namespace="default" Pod="nginx-deployment-8587fbcb89-n8x4g" WorkloadEndpoint="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0" Mar 17 18:00:43.792700 containerd[1736]: 2025-03-17 18:00:43.779 [INFO][3501] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" Namespace="default" Pod="nginx-deployment-8587fbcb89-n8x4g" 
WorkloadEndpoint="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"695e75fe-b9ea-4ba5-8430-db659e813a92", ResourceVersion:"1483", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 0, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.30", ContainerID:"ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584", Pod:"nginx-deployment-8587fbcb89-n8x4g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.108.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0f092009356", MAC:"82:79:d1:ff:24:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:00:43.792700 containerd[1736]: 2025-03-17 18:00:43.788 [INFO][3501] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584" Namespace="default" Pod="nginx-deployment-8587fbcb89-n8x4g" WorkloadEndpoint="10.200.4.30-k8s-nginx--deployment--8587fbcb89--n8x4g-eth0" Mar 17 18:00:43.819392 containerd[1736]: time="2025-03-17T18:00:43.819022282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:00:43.819392 containerd[1736]: time="2025-03-17T18:00:43.819109685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:00:43.819392 containerd[1736]: time="2025-03-17T18:00:43.819144585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:43.819392 containerd[1736]: time="2025-03-17T18:00:43.819249488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:43.845882 systemd[1]: Started cri-containerd-ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584.scope - libcontainer container ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584. Mar 17 18:00:43.889009 systemd-networkd[1521]: caliabaa724b3fe: Link UP Mar 17 18:00:43.889280 systemd-networkd[1521]: caliabaa724b3fe: Gained carrier Mar 17 18:00:43.899076 containerd[1736]: time="2025-03-17T18:00:43.899030803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n8x4g,Uid:695e75fe-b9ea-4ba5-8430-db659e813a92,Namespace:default,Attempt:4,} returns sandbox id \"ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584\"" Mar 17 18:00:43.901313 containerd[1736]: time="2025-03-17T18:00:43.901059252Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.687 [INFO][3511] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.698 [INFO][3511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.30-k8s-csi--node--driver--fk9nq-eth0 csi-node-driver- calico-system a9dadf48-23d5-4e7c-a2b0-a9c1e1747298 1387 0 2025-03-17 18:00:18 +0000 UTC 
<nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.4.30 csi-node-driver-fk9nq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliabaa724b3fe [] []}} ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Namespace="calico-system" Pod="csi-node-driver-fk9nq" WorkloadEndpoint="10.200.4.30-k8s-csi--node--driver--fk9nq-" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.698 [INFO][3511] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Namespace="calico-system" Pod="csi-node-driver-fk9nq" WorkloadEndpoint="10.200.4.30-k8s-csi--node--driver--fk9nq-eth0" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.733 [INFO][3526] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" HandleID="k8s-pod-network.01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Workload="10.200.4.30-k8s-csi--node--driver--fk9nq-eth0" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.745 [INFO][3526] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" HandleID="k8s-pod-network.01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Workload="10.200.4.30-k8s-csi--node--driver--fk9nq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d700), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.4.30", "pod":"csi-node-driver-fk9nq", "timestamp":"2025-03-17 18:00:43.733916839 +0000 UTC"}, Hostname:"10.200.4.30", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.745 [INFO][3526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.769 [INFO][3526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.769 [INFO][3526] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.30' Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.849 [INFO][3526] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" host="10.200.4.30" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.854 [INFO][3526] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.30" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.860 [INFO][3526] ipam/ipam.go 489: Trying affinity for 192.168.108.64/26 host="10.200.4.30" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.861 [INFO][3526] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.64/26 host="10.200.4.30" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.864 [INFO][3526] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="10.200.4.30" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.864 [INFO][3526] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" host="10.200.4.30" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.866 [INFO][3526] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273 Mar 17 18:00:43.902869 
containerd[1736]: 2025-03-17 18:00:43.873 [INFO][3526] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" host="10.200.4.30" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.878 [INFO][3526] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.66/26] block=192.168.108.64/26 handle="k8s-pod-network.01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" host="10.200.4.30" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.878 [INFO][3526] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.66/26] handle="k8s-pod-network.01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" host="10.200.4.30" Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.878 [INFO][3526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:00:43.902869 containerd[1736]: 2025-03-17 18:00:43.878 [INFO][3526] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.66/26] IPv6=[] ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" HandleID="k8s-pod-network.01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Workload="10.200.4.30-k8s-csi--node--driver--fk9nq-eth0" Mar 17 18:00:43.904176 containerd[1736]: 2025-03-17 18:00:43.881 [INFO][3511] cni-plugin/k8s.go 386: Populated endpoint ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Namespace="calico-system" Pod="csi-node-driver-fk9nq" WorkloadEndpoint="10.200.4.30-k8s-csi--node--driver--fk9nq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.30-k8s-csi--node--driver--fk9nq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a9dadf48-23d5-4e7c-a2b0-a9c1e1747298", ResourceVersion:"1387", Generation:0, 
CreationTimestamp:time.Date(2025, time.March, 17, 18, 0, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.30", ContainerID:"", Pod:"csi-node-driver-fk9nq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabaa724b3fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:00:43.904176 containerd[1736]: 2025-03-17 18:00:43.881 [INFO][3511] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.66/32] ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Namespace="calico-system" Pod="csi-node-driver-fk9nq" WorkloadEndpoint="10.200.4.30-k8s-csi--node--driver--fk9nq-eth0" Mar 17 18:00:43.904176 containerd[1736]: 2025-03-17 18:00:43.881 [INFO][3511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabaa724b3fe ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Namespace="calico-system" Pod="csi-node-driver-fk9nq" WorkloadEndpoint="10.200.4.30-k8s-csi--node--driver--fk9nq-eth0" Mar 17 18:00:43.904176 containerd[1736]: 2025-03-17 18:00:43.886 [INFO][3511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Namespace="calico-system" Pod="csi-node-driver-fk9nq" WorkloadEndpoint="10.200.4.30-k8s-csi--node--driver--fk9nq-eth0" Mar 17 18:00:43.904176 containerd[1736]: 2025-03-17 18:00:43.886 [INFO][3511] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Namespace="calico-system" Pod="csi-node-driver-fk9nq" WorkloadEndpoint="10.200.4.30-k8s-csi--node--driver--fk9nq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.30-k8s-csi--node--driver--fk9nq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a9dadf48-23d5-4e7c-a2b0-a9c1e1747298", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 0, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.30", ContainerID:"01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273", Pod:"csi-node-driver-fk9nq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabaa724b3fe", 
MAC:"36:e5:59:c7:a2:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:00:43.904176 containerd[1736]: 2025-03-17 18:00:43.901 [INFO][3511] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273" Namespace="calico-system" Pod="csi-node-driver-fk9nq" WorkloadEndpoint="10.200.4.30-k8s-csi--node--driver--fk9nq-eth0" Mar 17 18:00:43.926084 containerd[1736]: time="2025-03-17T18:00:43.925852047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:00:43.926084 containerd[1736]: time="2025-03-17T18:00:43.925914649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:00:43.926084 containerd[1736]: time="2025-03-17T18:00:43.925936749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:43.926084 containerd[1736]: time="2025-03-17T18:00:43.926033451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:43.943906 systemd[1]: Started cri-containerd-01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273.scope - libcontainer container 01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273. 
Mar 17 18:00:43.964475 containerd[1736]: time="2025-03-17T18:00:43.964437173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fk9nq,Uid:a9dadf48-23d5-4e7c-a2b0-a9c1e1747298,Namespace:calico-system,Attempt:7,} returns sandbox id \"01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273\"" Mar 17 18:00:44.381796 kubelet[2559]: E0317 18:00:44.381749 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:44.671748 kernel: bpftool[3759]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 18:00:44.974452 systemd-networkd[1521]: vxlan.calico: Link UP Mar 17 18:00:44.974465 systemd-networkd[1521]: vxlan.calico: Gained carrier Mar 17 18:00:45.012810 systemd-networkd[1521]: cali0f092009356: Gained IPv6LL Mar 17 18:00:45.383119 kubelet[2559]: E0317 18:00:45.383014 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:45.396951 systemd-networkd[1521]: caliabaa724b3fe: Gained IPv6LL Mar 17 18:00:46.228944 systemd-networkd[1521]: vxlan.calico: Gained IPv6LL Mar 17 18:00:46.383221 kubelet[2559]: E0317 18:00:46.383174 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:46.799329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009034054.mount: Deactivated successfully. 
Mar 17 18:00:47.384010 kubelet[2559]: E0317 18:00:47.383879 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:48.078228 containerd[1736]: time="2025-03-17T18:00:48.078170337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:48.080591 containerd[1736]: time="2025-03-17T18:00:48.080527190Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131" Mar 17 18:00:48.085191 containerd[1736]: time="2025-03-17T18:00:48.085132993Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:48.090828 containerd[1736]: time="2025-03-17T18:00:48.090785619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:48.092380 containerd[1736]: time="2025-03-17T18:00:48.091664839Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 4.190570386s" Mar 17 18:00:48.092380 containerd[1736]: time="2025-03-17T18:00:48.091702540Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 18:00:48.093585 containerd[1736]: time="2025-03-17T18:00:48.093296076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 18:00:48.093984 containerd[1736]: time="2025-03-17T18:00:48.093958591Z" 
level=info msg="CreateContainer within sandbox \"ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 18:00:48.144340 containerd[1736]: time="2025-03-17T18:00:48.144294518Z" level=info msg="CreateContainer within sandbox \"ef9a0e31fdf9c932929392fca744f1460f248ea599f8b99d816d18822f2fc584\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a1bc351eb3097616797c9bf949e31185a39fd971d109980e3df6c9ea942e33c2\"" Mar 17 18:00:48.144973 containerd[1736]: time="2025-03-17T18:00:48.144939933Z" level=info msg="StartContainer for \"a1bc351eb3097616797c9bf949e31185a39fd971d109980e3df6c9ea942e33c2\"" Mar 17 18:00:48.182901 systemd[1]: Started cri-containerd-a1bc351eb3097616797c9bf949e31185a39fd971d109980e3df6c9ea942e33c2.scope - libcontainer container a1bc351eb3097616797c9bf949e31185a39fd971d109980e3df6c9ea942e33c2. Mar 17 18:00:48.211772 containerd[1736]: time="2025-03-17T18:00:48.211706529Z" level=info msg="StartContainer for \"a1bc351eb3097616797c9bf949e31185a39fd971d109980e3df6c9ea942e33c2\" returns successfully" Mar 17 18:00:48.384532 kubelet[2559]: E0317 18:00:48.384382 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:48.624904 kubelet[2559]: I0317 18:00:48.624841 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-n8x4g" podStartSLOduration=5.43251646 podStartE2EDuration="9.624823786s" podCreationTimestamp="2025-03-17 18:00:39 +0000 UTC" firstStartedPulling="2025-03-17 18:00:43.900428137 +0000 UTC m=+26.556597133" lastFinishedPulling="2025-03-17 18:00:48.092735363 +0000 UTC m=+30.748904459" observedRunningTime="2025-03-17 18:00:48.624755584 +0000 UTC m=+31.280924680" watchObservedRunningTime="2025-03-17 18:00:48.624823786 +0000 UTC m=+31.280992882" Mar 17 18:00:49.384877 kubelet[2559]: E0317 18:00:49.384812 2559 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:49.818232 containerd[1736]: time="2025-03-17T18:00:49.818175626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:49.828150 containerd[1736]: time="2025-03-17T18:00:49.828089448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 17 18:00:49.831028 containerd[1736]: time="2025-03-17T18:00:49.830960912Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:49.836696 containerd[1736]: time="2025-03-17T18:00:49.836644940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:49.837392 containerd[1736]: time="2025-03-17T18:00:49.837360956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.74403018s" Mar 17 18:00:49.837479 containerd[1736]: time="2025-03-17T18:00:49.837398057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 17 18:00:49.839765 containerd[1736]: time="2025-03-17T18:00:49.839715609Z" level=info msg="CreateContainer within sandbox \"01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 18:00:49.881250 
containerd[1736]: time="2025-03-17T18:00:49.881201738Z" level=info msg="CreateContainer within sandbox \"01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5ac96a4f42f841774efb0d6cea5a2c6abfdc7adce5bbb6a6cb29d4b3ed3d6531\"" Mar 17 18:00:49.881887 containerd[1736]: time="2025-03-17T18:00:49.881845353Z" level=info msg="StartContainer for \"5ac96a4f42f841774efb0d6cea5a2c6abfdc7adce5bbb6a6cb29d4b3ed3d6531\"" Mar 17 18:00:49.916876 systemd[1]: Started cri-containerd-5ac96a4f42f841774efb0d6cea5a2c6abfdc7adce5bbb6a6cb29d4b3ed3d6531.scope - libcontainer container 5ac96a4f42f841774efb0d6cea5a2c6abfdc7adce5bbb6a6cb29d4b3ed3d6531. Mar 17 18:00:49.948386 containerd[1736]: time="2025-03-17T18:00:49.948340243Z" level=info msg="StartContainer for \"5ac96a4f42f841774efb0d6cea5a2c6abfdc7adce5bbb6a6cb29d4b3ed3d6531\" returns successfully" Mar 17 18:00:49.949557 containerd[1736]: time="2025-03-17T18:00:49.949456268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 18:00:50.385908 kubelet[2559]: E0317 18:00:50.385872 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:51.387051 kubelet[2559]: E0317 18:00:51.386978 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:52.250514 containerd[1736]: time="2025-03-17T18:00:52.250458727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:52.255042 containerd[1736]: time="2025-03-17T18:00:52.254976329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 17 18:00:52.258540 containerd[1736]: time="2025-03-17T18:00:52.258483307Z" level=info msg="ImageCreate 
event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:52.263870 containerd[1736]: time="2025-03-17T18:00:52.263803026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:00:52.264542 containerd[1736]: time="2025-03-17T18:00:52.264376939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 2.314871571s" Mar 17 18:00:52.264542 containerd[1736]: time="2025-03-17T18:00:52.264416140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 17 18:00:52.266662 containerd[1736]: time="2025-03-17T18:00:52.266634490Z" level=info msg="CreateContainer within sandbox \"01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 18:00:52.313658 containerd[1736]: time="2025-03-17T18:00:52.313606642Z" level=info msg="CreateContainer within sandbox \"01fde33e28286490a9e956645e5fe14ae17741d17c914e511dc0a69a09b3d273\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"254476a43dde56f9834c08d8c9ad5a7845091455ada4c715701e36ddc85b50c6\"" Mar 17 18:00:52.314192 containerd[1736]: time="2025-03-17T18:00:52.314165655Z" level=info msg="StartContainer for 
\"254476a43dde56f9834c08d8c9ad5a7845091455ada4c715701e36ddc85b50c6\"" Mar 17 18:00:52.354884 systemd[1]: Started cri-containerd-254476a43dde56f9834c08d8c9ad5a7845091455ada4c715701e36ddc85b50c6.scope - libcontainer container 254476a43dde56f9834c08d8c9ad5a7845091455ada4c715701e36ddc85b50c6. Mar 17 18:00:52.385759 containerd[1736]: time="2025-03-17T18:00:52.385530354Z" level=info msg="StartContainer for \"254476a43dde56f9834c08d8c9ad5a7845091455ada4c715701e36ddc85b50c6\" returns successfully" Mar 17 18:00:52.387435 kubelet[2559]: E0317 18:00:52.387383 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:52.494037 kubelet[2559]: I0317 18:00:52.493996 2559 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 18:00:52.494037 kubelet[2559]: I0317 18:00:52.494038 2559 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 18:00:52.646644 kubelet[2559]: I0317 18:00:52.646498 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fk9nq" podStartSLOduration=26.34751536 podStartE2EDuration="34.646478601s" podCreationTimestamp="2025-03-17 18:00:18 +0000 UTC" firstStartedPulling="2025-03-17 18:00:43.96636612 +0000 UTC m=+26.622535116" lastFinishedPulling="2025-03-17 18:00:52.265329261 +0000 UTC m=+34.921498357" observedRunningTime="2025-03-17 18:00:52.646295497 +0000 UTC m=+35.302464593" watchObservedRunningTime="2025-03-17 18:00:52.646478601 +0000 UTC m=+35.302647697" Mar 17 18:00:53.387834 kubelet[2559]: E0317 18:00:53.387768 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:54.388855 kubelet[2559]: E0317 18:00:54.388793 2559 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:55.389230 kubelet[2559]: E0317 18:00:55.389164 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:56.389629 kubelet[2559]: E0317 18:00:56.389564 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:57.390287 kubelet[2559]: E0317 18:00:57.390223 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:57.881774 systemd[1]: Created slice kubepods-besteffort-pod8d6ba476_4a00_4c19_aaaf_499a7df79d35.slice - libcontainer container kubepods-besteffort-pod8d6ba476_4a00_4c19_aaaf_499a7df79d35.slice. Mar 17 18:00:58.055338 kubelet[2559]: I0317 18:00:58.055282 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8d6ba476-4a00-4c19-aaaf-499a7df79d35-data\") pod \"nfs-server-provisioner-0\" (UID: \"8d6ba476-4a00-4c19-aaaf-499a7df79d35\") " pod="default/nfs-server-provisioner-0" Mar 17 18:00:58.055338 kubelet[2559]: I0317 18:00:58.055337 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5n99\" (UniqueName: \"kubernetes.io/projected/8d6ba476-4a00-4c19-aaaf-499a7df79d35-kube-api-access-p5n99\") pod \"nfs-server-provisioner-0\" (UID: \"8d6ba476-4a00-4c19-aaaf-499a7df79d35\") " pod="default/nfs-server-provisioner-0" Mar 17 18:00:58.184623 containerd[1736]: time="2025-03-17T18:00:58.184484533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8d6ba476-4a00-4c19-aaaf-499a7df79d35,Namespace:default,Attempt:0,}" Mar 17 18:00:58.328021 systemd-networkd[1521]: cali60e51b789ff: Link UP Mar 17 18:00:58.328318 systemd-networkd[1521]: 
cali60e51b789ff: Gained carrier Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.261 [INFO][4038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.30-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 8d6ba476-4a00-4c19-aaaf-499a7df79d35 1587 0 2025-03-17 18:00:57 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.4.30 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.30-k8s-nfs--server--provisioner--0-" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.261 [INFO][4038] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.30-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.284 [INFO][4049] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" 
HandleID="k8s-pod-network.481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Workload="10.200.4.30-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.294 [INFO][4049] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" HandleID="k8s-pod-network.481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Workload="10.200.4.30-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290f30), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.30", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 18:00:58.284933511 +0000 UTC"}, Hostname:"10.200.4.30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.294 [INFO][4049] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.294 [INFO][4049] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.294 [INFO][4049] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.30' Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.297 [INFO][4049] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" host="10.200.4.30" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.301 [INFO][4049] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.30" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.304 [INFO][4049] ipam/ipam.go 489: Trying affinity for 192.168.108.64/26 host="10.200.4.30" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.305 [INFO][4049] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.64/26 host="10.200.4.30" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.308 [INFO][4049] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="10.200.4.30" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.308 [INFO][4049] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" host="10.200.4.30" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.309 [INFO][4049] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96 Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.314 [INFO][4049] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" host="10.200.4.30" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.322 [INFO][4049] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.67/26] block=192.168.108.64/26 
handle="k8s-pod-network.481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" host="10.200.4.30" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.322 [INFO][4049] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.67/26] handle="k8s-pod-network.481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" host="10.200.4.30" Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.322 [INFO][4049] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:00:58.338486 containerd[1736]: 2025-03-17 18:00:58.322 [INFO][4049] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.67/26] IPv6=[] ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" HandleID="k8s-pod-network.481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Workload="10.200.4.30-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:00:58.341189 containerd[1736]: 2025-03-17 18:00:58.324 [INFO][4038] cni-plugin/k8s.go 386: Populated endpoint ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.30-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.30-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"8d6ba476-4a00-4c19-aaaf-499a7df79d35", ResourceVersion:"1587", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 0, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.30", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.108.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:00:58.341189 containerd[1736]: 2025-03-17 18:00:58.324 [INFO][4038] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.67/32] ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.30-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:00:58.341189 containerd[1736]: 2025-03-17 18:00:58.324 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.30-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:00:58.341189 containerd[1736]: 2025-03-17 18:00:58.326 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.30-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:00:58.341486 containerd[1736]: 2025-03-17 18:00:58.327 [INFO][4038] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.30-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.30-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"8d6ba476-4a00-4c19-aaaf-499a7df79d35", ResourceVersion:"1587", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 0, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.30", ContainerID:"481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.108.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"2a:7b:f0:e3:35:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:00:58.341486 containerd[1736]: 2025-03-17 18:00:58.337 [INFO][4038] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.30-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:00:58.361856 kubelet[2559]: E0317 18:00:58.361800 2559 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:58.366703 containerd[1736]: time="2025-03-17T18:00:58.366604764Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:00:58.366703 containerd[1736]: time="2025-03-17T18:00:58.366660565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:00:58.366703 containerd[1736]: time="2025-03-17T18:00:58.366675166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:58.367147 containerd[1736]: time="2025-03-17T18:00:58.366860170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:58.390456 kubelet[2559]: E0317 18:00:58.390398 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:00:58.395902 systemd[1]: Started cri-containerd-481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96.scope - libcontainer container 481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96. Mar 17 18:00:58.434133 containerd[1736]: time="2025-03-17T18:00:58.434086595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8d6ba476-4a00-4c19-aaaf-499a7df79d35,Namespace:default,Attempt:0,} returns sandbox id \"481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96\"" Mar 17 18:00:58.437282 containerd[1736]: time="2025-03-17T18:00:58.437095563Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 18:00:59.169352 systemd[1]: run-containerd-runc-k8s.io-481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96-runc.uk6gLo.mount: Deactivated successfully. 
Mar 17 18:00:59.390769 kubelet[2559]: E0317 18:00:59.390705 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:00.309451 systemd-networkd[1521]: cali60e51b789ff: Gained IPv6LL Mar 17 18:01:00.391929 kubelet[2559]: E0317 18:01:00.391449 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:01.010488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3933817751.mount: Deactivated successfully. Mar 17 18:01:01.392948 kubelet[2559]: E0317 18:01:01.392677 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:02.393105 kubelet[2559]: E0317 18:01:02.393060 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:03.393210 kubelet[2559]: E0317 18:01:03.393157 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:03.470669 containerd[1736]: time="2025-03-17T18:01:03.470610396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:01:03.473817 containerd[1736]: time="2025-03-17T18:01:03.473745266Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Mar 17 18:01:03.484000 containerd[1736]: time="2025-03-17T18:01:03.483942894Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:01:03.493551 containerd[1736]: time="2025-03-17T18:01:03.493060897Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.055923433s" Mar 17 18:01:03.493551 containerd[1736]: time="2025-03-17T18:01:03.493101698Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 17 18:01:03.494732 containerd[1736]: time="2025-03-17T18:01:03.493812614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:01:03.495688 containerd[1736]: time="2025-03-17T18:01:03.495660756Z" level=info msg="CreateContainer within sandbox \"481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 18:01:03.541410 containerd[1736]: time="2025-03-17T18:01:03.541369877Z" level=info msg="CreateContainer within sandbox \"481c13f5e18abbfd535c6b29639a37e8de3cbf237d37f6cf9533ebf32d700e96\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9600f998e252f95c298a7dd72bc053bb1f02564dc977cd771a5063cbcdf1e59c\"" Mar 17 18:01:03.542011 containerd[1736]: time="2025-03-17T18:01:03.541939690Z" level=info msg="StartContainer for \"9600f998e252f95c298a7dd72bc053bb1f02564dc977cd771a5063cbcdf1e59c\"" Mar 17 18:01:03.575883 systemd[1]: Started cri-containerd-9600f998e252f95c298a7dd72bc053bb1f02564dc977cd771a5063cbcdf1e59c.scope - libcontainer container 9600f998e252f95c298a7dd72bc053bb1f02564dc977cd771a5063cbcdf1e59c. 
Mar 17 18:01:03.605737 containerd[1736]: time="2025-03-17T18:01:03.605680614Z" level=info msg="StartContainer for \"9600f998e252f95c298a7dd72bc053bb1f02564dc977cd771a5063cbcdf1e59c\" returns successfully" Mar 17 18:01:03.675376 kubelet[2559]: I0317 18:01:03.674783 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.617288989 podStartE2EDuration="6.674762758s" podCreationTimestamp="2025-03-17 18:00:57 +0000 UTC" firstStartedPulling="2025-03-17 18:00:58.43653525 +0000 UTC m=+41.092704246" lastFinishedPulling="2025-03-17 18:01:03.494009019 +0000 UTC m=+46.150178015" observedRunningTime="2025-03-17 18:01:03.674637755 +0000 UTC m=+46.330806851" watchObservedRunningTime="2025-03-17 18:01:03.674762758 +0000 UTC m=+46.330931754" Mar 17 18:01:04.393965 kubelet[2559]: E0317 18:01:04.393898 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:05.394078 kubelet[2559]: E0317 18:01:05.394040 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:06.394572 kubelet[2559]: E0317 18:01:06.394501 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:07.394963 kubelet[2559]: E0317 18:01:07.394896 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:08.395663 kubelet[2559]: E0317 18:01:08.395621 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:09.396676 kubelet[2559]: E0317 18:01:09.396610 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:10.397268 kubelet[2559]: E0317 18:01:10.397215 2559 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:11.397665 kubelet[2559]: E0317 18:01:11.397602 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:12.398684 kubelet[2559]: E0317 18:01:12.398627 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:13.398885 kubelet[2559]: E0317 18:01:13.398826 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:14.399834 kubelet[2559]: E0317 18:01:14.399773 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:15.400755 kubelet[2559]: E0317 18:01:15.400687 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:16.401149 kubelet[2559]: E0317 18:01:16.401082 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:17.402011 kubelet[2559]: E0317 18:01:17.401943 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:18.360092 kubelet[2559]: E0317 18:01:18.360025 2559 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:18.402169 kubelet[2559]: E0317 18:01:18.402126 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:18.416605 containerd[1736]: time="2025-03-17T18:01:18.416561487Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\"" Mar 17 18:01:18.417476 containerd[1736]: time="2025-03-17T18:01:18.416954995Z" level=info msg="TearDown 
network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" successfully" Mar 17 18:01:18.417476 containerd[1736]: time="2025-03-17T18:01:18.416979696Z" level=info msg="StopPodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" returns successfully" Mar 17 18:01:18.418816 containerd[1736]: time="2025-03-17T18:01:18.418056620Z" level=info msg="RemovePodSandbox for \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\"" Mar 17 18:01:18.418816 containerd[1736]: time="2025-03-17T18:01:18.418097521Z" level=info msg="Forcibly stopping sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\"" Mar 17 18:01:18.418816 containerd[1736]: time="2025-03-17T18:01:18.418192123Z" level=info msg="TearDown network for sandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" successfully" Mar 17 18:01:18.425881 containerd[1736]: time="2025-03-17T18:01:18.425834792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.425976 containerd[1736]: time="2025-03-17T18:01:18.425884893Z" level=info msg="RemovePodSandbox \"759cd32cb92e9a94eb48f020052d839ef954b4c9d9c037060e06cb074bbe34a6\" returns successfully" Mar 17 18:01:18.426248 containerd[1736]: time="2025-03-17T18:01:18.426214501Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\"" Mar 17 18:01:18.426334 containerd[1736]: time="2025-03-17T18:01:18.426300502Z" level=info msg="TearDown network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" successfully" Mar 17 18:01:18.426334 containerd[1736]: time="2025-03-17T18:01:18.426315003Z" level=info msg="StopPodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" returns successfully" Mar 17 18:01:18.426664 containerd[1736]: time="2025-03-17T18:01:18.426632210Z" level=info msg="RemovePodSandbox for \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\"" Mar 17 18:01:18.426664 containerd[1736]: time="2025-03-17T18:01:18.426657210Z" level=info msg="Forcibly stopping sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\"" Mar 17 18:01:18.426804 containerd[1736]: time="2025-03-17T18:01:18.426750312Z" level=info msg="TearDown network for sandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" successfully" Mar 17 18:01:18.434903 containerd[1736]: time="2025-03-17T18:01:18.434869692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.434996 containerd[1736]: time="2025-03-17T18:01:18.434950294Z" level=info msg="RemovePodSandbox \"3f491b9765f6d3c2144e147546f94843cd97aebe5827df2e84a62e4c2afedc34\" returns successfully" Mar 17 18:01:18.435806 containerd[1736]: time="2025-03-17T18:01:18.435623009Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\"" Mar 17 18:01:18.435806 containerd[1736]: time="2025-03-17T18:01:18.435715211Z" level=info msg="TearDown network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" successfully" Mar 17 18:01:18.435806 containerd[1736]: time="2025-03-17T18:01:18.435745612Z" level=info msg="StopPodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" returns successfully" Mar 17 18:01:18.436188 containerd[1736]: time="2025-03-17T18:01:18.436166121Z" level=info msg="RemovePodSandbox for \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\"" Mar 17 18:01:18.436513 containerd[1736]: time="2025-03-17T18:01:18.436303924Z" level=info msg="Forcibly stopping sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\"" Mar 17 18:01:18.436513 containerd[1736]: time="2025-03-17T18:01:18.436384726Z" level=info msg="TearDown network for sandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" successfully" Mar 17 18:01:18.444096 containerd[1736]: time="2025-03-17T18:01:18.444060496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.444183 containerd[1736]: time="2025-03-17T18:01:18.444103197Z" level=info msg="RemovePodSandbox \"0e67e92f147f9a0f276d21ad44b075a96af0d53d38c9b0f8d70dbbcaecc80b8a\" returns successfully" Mar 17 18:01:18.444459 containerd[1736]: time="2025-03-17T18:01:18.444432104Z" level=info msg="StopPodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\"" Mar 17 18:01:18.444559 containerd[1736]: time="2025-03-17T18:01:18.444539906Z" level=info msg="TearDown network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" successfully" Mar 17 18:01:18.444615 containerd[1736]: time="2025-03-17T18:01:18.444556207Z" level=info msg="StopPodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" returns successfully" Mar 17 18:01:18.444931 containerd[1736]: time="2025-03-17T18:01:18.444900314Z" level=info msg="RemovePodSandbox for \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\"" Mar 17 18:01:18.444931 containerd[1736]: time="2025-03-17T18:01:18.444924715Z" level=info msg="Forcibly stopping sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\"" Mar 17 18:01:18.445054 containerd[1736]: time="2025-03-17T18:01:18.444996016Z" level=info msg="TearDown network for sandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" successfully" Mar 17 18:01:18.453760 containerd[1736]: time="2025-03-17T18:01:18.453702509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.453952 containerd[1736]: time="2025-03-17T18:01:18.453761711Z" level=info msg="RemovePodSandbox \"780762b1fdcbe752443d43513b95eeb01eb6c68f2bf543946cbaf6d05b98d036\" returns successfully" Mar 17 18:01:18.454167 containerd[1736]: time="2025-03-17T18:01:18.454137519Z" level=info msg="StopPodSandbox for \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\"" Mar 17 18:01:18.454274 containerd[1736]: time="2025-03-17T18:01:18.454227821Z" level=info msg="TearDown network for sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\" successfully" Mar 17 18:01:18.454274 containerd[1736]: time="2025-03-17T18:01:18.454242721Z" level=info msg="StopPodSandbox for \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\" returns successfully" Mar 17 18:01:18.454574 containerd[1736]: time="2025-03-17T18:01:18.454514227Z" level=info msg="RemovePodSandbox for \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\"" Mar 17 18:01:18.454574 containerd[1736]: time="2025-03-17T18:01:18.454545028Z" level=info msg="Forcibly stopping sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\"" Mar 17 18:01:18.454774 containerd[1736]: time="2025-03-17T18:01:18.454636030Z" level=info msg="TearDown network for sandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\" successfully" Mar 17 18:01:18.462279 containerd[1736]: time="2025-03-17T18:01:18.462250199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.462489 containerd[1736]: time="2025-03-17T18:01:18.462289699Z" level=info msg="RemovePodSandbox \"c621bab6b4b363a596198422fdf7a0059cb35a3347bbfc9ec6fdd3d315089cdc\" returns successfully" Mar 17 18:01:18.462610 containerd[1736]: time="2025-03-17T18:01:18.462580706Z" level=info msg="StopPodSandbox for \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\"" Mar 17 18:01:18.462701 containerd[1736]: time="2025-03-17T18:01:18.462666208Z" level=info msg="TearDown network for sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\" successfully" Mar 17 18:01:18.462701 containerd[1736]: time="2025-03-17T18:01:18.462684008Z" level=info msg="StopPodSandbox for \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\" returns successfully" Mar 17 18:01:18.463003 containerd[1736]: time="2025-03-17T18:01:18.462969815Z" level=info msg="RemovePodSandbox for \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\"" Mar 17 18:01:18.463075 containerd[1736]: time="2025-03-17T18:01:18.463009315Z" level=info msg="Forcibly stopping sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\"" Mar 17 18:01:18.463121 containerd[1736]: time="2025-03-17T18:01:18.463077117Z" level=info msg="TearDown network for sandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\" successfully" Mar 17 18:01:18.470631 containerd[1736]: time="2025-03-17T18:01:18.470604876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.470762 containerd[1736]: time="2025-03-17T18:01:18.470641677Z" level=info msg="RemovePodSandbox \"9505f76310166ea1f268db316eda4dc5e0c271386f74ff6a5dca0635500aec8f\" returns successfully" Mar 17 18:01:18.470990 containerd[1736]: time="2025-03-17T18:01:18.470938983Z" level=info msg="StopPodSandbox for \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\"" Mar 17 18:01:18.471059 containerd[1736]: time="2025-03-17T18:01:18.471027585Z" level=info msg="TearDown network for sandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\" successfully" Mar 17 18:01:18.471059 containerd[1736]: time="2025-03-17T18:01:18.471041785Z" level=info msg="StopPodSandbox for \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\" returns successfully" Mar 17 18:01:18.471307 containerd[1736]: time="2025-03-17T18:01:18.471262190Z" level=info msg="RemovePodSandbox for \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\"" Mar 17 18:01:18.471307 containerd[1736]: time="2025-03-17T18:01:18.471290291Z" level=info msg="Forcibly stopping sandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\"" Mar 17 18:01:18.471435 containerd[1736]: time="2025-03-17T18:01:18.471361592Z" level=info msg="TearDown network for sandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\" successfully" Mar 17 18:01:18.481135 containerd[1736]: time="2025-03-17T18:01:18.481096197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.481335 containerd[1736]: time="2025-03-17T18:01:18.481139998Z" level=info msg="RemovePodSandbox \"d7624d4406d378f4136e34d681d2c157bc032193aefc4300c253b20de57796b4\" returns successfully" Mar 17 18:01:18.481468 containerd[1736]: time="2025-03-17T18:01:18.481445304Z" level=info msg="StopPodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\"" Mar 17 18:01:18.481565 containerd[1736]: time="2025-03-17T18:01:18.481544506Z" level=info msg="TearDown network for sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" successfully" Mar 17 18:01:18.481634 containerd[1736]: time="2025-03-17T18:01:18.481564306Z" level=info msg="StopPodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" returns successfully" Mar 17 18:01:18.481887 containerd[1736]: time="2025-03-17T18:01:18.481865013Z" level=info msg="RemovePodSandbox for \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\"" Mar 17 18:01:18.481991 containerd[1736]: time="2025-03-17T18:01:18.481967615Z" level=info msg="Forcibly stopping sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\"" Mar 17 18:01:18.482103 containerd[1736]: time="2025-03-17T18:01:18.482061117Z" level=info msg="TearDown network for sandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" successfully" Mar 17 18:01:18.489851 containerd[1736]: time="2025-03-17T18:01:18.489822580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.489952 containerd[1736]: time="2025-03-17T18:01:18.489861881Z" level=info msg="RemovePodSandbox \"1ea9965a1bdbd2d162bb34fbb0e1d6d3fb224c7858f4ccd3486ed52edba2d143\" returns successfully" Mar 17 18:01:18.490252 containerd[1736]: time="2025-03-17T18:01:18.490163187Z" level=info msg="StopPodSandbox for \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\"" Mar 17 18:01:18.490348 containerd[1736]: time="2025-03-17T18:01:18.490260189Z" level=info msg="TearDown network for sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\" successfully" Mar 17 18:01:18.490348 containerd[1736]: time="2025-03-17T18:01:18.490275590Z" level=info msg="StopPodSandbox for \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\" returns successfully" Mar 17 18:01:18.490627 containerd[1736]: time="2025-03-17T18:01:18.490578096Z" level=info msg="RemovePodSandbox for \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\"" Mar 17 18:01:18.490627 containerd[1736]: time="2025-03-17T18:01:18.490605896Z" level=info msg="Forcibly stopping sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\"" Mar 17 18:01:18.490801 containerd[1736]: time="2025-03-17T18:01:18.490677198Z" level=info msg="TearDown network for sandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\" successfully" Mar 17 18:01:18.497251 containerd[1736]: time="2025-03-17T18:01:18.497212735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.497484 containerd[1736]: time="2025-03-17T18:01:18.497258436Z" level=info msg="RemovePodSandbox \"c101c76214c9fa1b409a484aa1b4e96cbac8957fc5b20cde636df3f798f3b2bf\" returns successfully" Mar 17 18:01:18.497681 containerd[1736]: time="2025-03-17T18:01:18.497610044Z" level=info msg="StopPodSandbox for \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\"" Mar 17 18:01:18.497768 containerd[1736]: time="2025-03-17T18:01:18.497707346Z" level=info msg="TearDown network for sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\" successfully" Mar 17 18:01:18.497768 containerd[1736]: time="2025-03-17T18:01:18.497740646Z" level=info msg="StopPodSandbox for \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\" returns successfully" Mar 17 18:01:18.498067 containerd[1736]: time="2025-03-17T18:01:18.498014052Z" level=info msg="RemovePodSandbox for \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\"" Mar 17 18:01:18.498067 containerd[1736]: time="2025-03-17T18:01:18.498042253Z" level=info msg="Forcibly stopping sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\"" Mar 17 18:01:18.498193 containerd[1736]: time="2025-03-17T18:01:18.498119154Z" level=info msg="TearDown network for sandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\" successfully" Mar 17 18:01:18.505452 containerd[1736]: time="2025-03-17T18:01:18.505423608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.505544 containerd[1736]: time="2025-03-17T18:01:18.505462509Z" level=info msg="RemovePodSandbox \"970fd5e1a2ed69f367fb7756ed229295a48c1348cf7149e4d15532b13b35151c\" returns successfully" Mar 17 18:01:18.505871 containerd[1736]: time="2025-03-17T18:01:18.505832216Z" level=info msg="StopPodSandbox for \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\"" Mar 17 18:01:18.505962 containerd[1736]: time="2025-03-17T18:01:18.505938219Z" level=info msg="TearDown network for sandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\" successfully" Mar 17 18:01:18.505962 containerd[1736]: time="2025-03-17T18:01:18.505955219Z" level=info msg="StopPodSandbox for \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\" returns successfully" Mar 17 18:01:18.506320 containerd[1736]: time="2025-03-17T18:01:18.506291626Z" level=info msg="RemovePodSandbox for \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\"" Mar 17 18:01:18.506320 containerd[1736]: time="2025-03-17T18:01:18.506316027Z" level=info msg="Forcibly stopping sandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\"" Mar 17 18:01:18.506473 containerd[1736]: time="2025-03-17T18:01:18.506386228Z" level=info msg="TearDown network for sandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\" successfully" Mar 17 18:01:18.515999 containerd[1736]: time="2025-03-17T18:01:18.515968129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:01:18.516099 containerd[1736]: time="2025-03-17T18:01:18.516014330Z" level=info msg="RemovePodSandbox \"0fa90e6347d6a1572f58efc407b6a75df4ff6610007c61e4be93cc6f1d2e15ad\" returns successfully" Mar 17 18:01:19.402862 kubelet[2559]: E0317 18:01:19.402798 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:20.403248 kubelet[2559]: E0317 18:01:20.403185 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:21.403389 kubelet[2559]: E0317 18:01:21.403326 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:22.404555 kubelet[2559]: E0317 18:01:22.404488 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:23.404803 kubelet[2559]: E0317 18:01:23.404714 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:24.405397 kubelet[2559]: E0317 18:01:24.405328 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:25.406155 kubelet[2559]: E0317 18:01:25.406106 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:26.406363 kubelet[2559]: E0317 18:01:26.406297 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:27.406903 kubelet[2559]: E0317 18:01:27.406838 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:28.407293 kubelet[2559]: E0317 18:01:28.407231 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 18:01:28.698624 systemd[1]: Created slice kubepods-besteffort-poddf132e43_99b0_447a_9095_13ad166e7289.slice - libcontainer container kubepods-besteffort-poddf132e43_99b0_447a_9095_13ad166e7289.slice. Mar 17 18:01:28.833122 kubelet[2559]: I0317 18:01:28.833057 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjbrw\" (UniqueName: \"kubernetes.io/projected/df132e43-99b0-447a-9095-13ad166e7289-kube-api-access-kjbrw\") pod \"test-pod-1\" (UID: \"df132e43-99b0-447a-9095-13ad166e7289\") " pod="default/test-pod-1" Mar 17 18:01:28.833122 kubelet[2559]: I0317 18:01:28.833122 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c746ddcd-2c1b-4196-bd4d-82c727973f3b\" (UniqueName: \"kubernetes.io/nfs/df132e43-99b0-447a-9095-13ad166e7289-pvc-c746ddcd-2c1b-4196-bd4d-82c727973f3b\") pod \"test-pod-1\" (UID: \"df132e43-99b0-447a-9095-13ad166e7289\") " pod="default/test-pod-1" Mar 17 18:01:29.052750 kernel: FS-Cache: Loaded Mar 17 18:01:29.071995 waagent[1941]: 2025-03-17T18:01:29.071922Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 17 18:01:29.081383 waagent[1941]: 2025-03-17T18:01:29.081315Z INFO ExtHandler Mar 17 18:01:29.081524 waagent[1941]: 2025-03-17T18:01:29.081449Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3c2ad494-60e4-4444-a996-0818806d7007 eTag: 9059593102173392291 source: Fabric] Mar 17 18:01:29.081917 waagent[1941]: 2025-03-17T18:01:29.081861Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 17 18:01:29.082506 waagent[1941]: 2025-03-17T18:01:29.082447Z INFO ExtHandler Mar 17 18:01:29.082594 waagent[1941]: 2025-03-17T18:01:29.082536Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 17 18:01:29.141753 waagent[1941]: 2025-03-17T18:01:29.140896Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 18:01:29.208613 kernel: RPC: Registered named UNIX socket transport module. Mar 17 18:01:29.208735 kernel: RPC: Registered udp transport module. Mar 17 18:01:29.208756 kernel: RPC: Registered tcp transport module. Mar 17 18:01:29.211488 kernel: RPC: Registered tcp-with-tls transport module. Mar 17 18:01:29.211551 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Mar 17 18:01:29.215864 waagent[1941]: 2025-03-17T18:01:29.215780Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EEAD80345617B3827A146A3C8D0A04E78DE1B06F', 'hasPrivateKey': True} Mar 17 18:01:29.216404 waagent[1941]: 2025-03-17T18:01:29.216348Z INFO ExtHandler Fetch goal state completed Mar 17 18:01:29.216864 waagent[1941]: 2025-03-17T18:01:29.216813Z INFO ExtHandler ExtHandler Mar 17 18:01:29.216956 waagent[1941]: 2025-03-17T18:01:29.216915Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: e0414f30-5720-4724-acde-78bdd3bb7a29 correlation b71533df-eeed-45a7-9b02-fa41d17ed4fd created: 2025-03-17T18:01:21.877251Z] Mar 17 18:01:29.217316 waagent[1941]: 2025-03-17T18:01:29.217269Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 17 18:01:29.217820 waagent[1941]: 2025-03-17T18:01:29.217777Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Mar 17 18:01:29.408064 kubelet[2559]: E0317 18:01:29.407940 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:29.608083 kernel: NFS: Registering the id_resolver key type Mar 17 18:01:29.608201 kernel: Key type id_resolver registered Mar 17 18:01:29.608238 kernel: Key type id_legacy registered Mar 17 18:01:29.718097 nfsidmap[4278]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-a-2af36eae3a' Mar 17 18:01:29.754984 nfsidmap[4279]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-a-2af36eae3a' Mar 17 18:01:29.902053 containerd[1736]: time="2025-03-17T18:01:29.901994356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:df132e43-99b0-447a-9095-13ad166e7289,Namespace:default,Attempt:0,}" Mar 17 18:01:30.041943 systemd-networkd[1521]: cali5ec59c6bf6e: Link UP Mar 17 18:01:30.042192 systemd-networkd[1521]: cali5ec59c6bf6e: Gained carrier Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:29.979 [INFO][4280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.30-k8s-test--pod--1-eth0 default df132e43-99b0-447a-9095-13ad166e7289 1686 0 2025-03-17 18:00:59 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.30 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.30-k8s-test--pod--1-" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:29.979 
[INFO][4280] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.30-k8s-test--pod--1-eth0" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.002 [INFO][4292] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" HandleID="k8s-pod-network.a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Workload="10.200.4.30-k8s-test--pod--1-eth0" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.012 [INFO][4292] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" HandleID="k8s-pod-network.a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Workload="10.200.4.30-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002932e0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.30", "pod":"test-pod-1", "timestamp":"2025-03-17 18:01:30.002773397 +0000 UTC"}, Hostname:"10.200.4.30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.012 [INFO][4292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.012 [INFO][4292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.012 [INFO][4292] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.30' Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.014 [INFO][4292] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" host="10.200.4.30" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.017 [INFO][4292] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.30" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.021 [INFO][4292] ipam/ipam.go 489: Trying affinity for 192.168.108.64/26 host="10.200.4.30" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.022 [INFO][4292] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.64/26 host="10.200.4.30" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.024 [INFO][4292] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="10.200.4.30" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.024 [INFO][4292] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" host="10.200.4.30" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.026 [INFO][4292] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623 Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.030 [INFO][4292] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" host="10.200.4.30" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.037 [INFO][4292] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.68/26] block=192.168.108.64/26 
handle="k8s-pod-network.a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" host="10.200.4.30" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.037 [INFO][4292] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.68/26] handle="k8s-pod-network.a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" host="10.200.4.30" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.037 [INFO][4292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.037 [INFO][4292] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.68/26] IPv6=[] ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" HandleID="k8s-pod-network.a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Workload="10.200.4.30-k8s-test--pod--1-eth0" Mar 17 18:01:30.054077 containerd[1736]: 2025-03-17 18:01:30.038 [INFO][4280] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.30-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.30-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"df132e43-99b0-447a-9095-13ad166e7289", ResourceVersion:"1686", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 0, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.200.4.30", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.108.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:01:30.056281 containerd[1736]: 2025-03-17 18:01:30.038 [INFO][4280] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.68/32] ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.30-k8s-test--pod--1-eth0" Mar 17 18:01:30.056281 containerd[1736]: 2025-03-17 18:01:30.038 [INFO][4280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.30-k8s-test--pod--1-eth0" Mar 17 18:01:30.056281 containerd[1736]: 2025-03-17 18:01:30.042 [INFO][4280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.30-k8s-test--pod--1-eth0" Mar 17 18:01:30.056281 containerd[1736]: 2025-03-17 18:01:30.043 [INFO][4280] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.30-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.30-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"df132e43-99b0-447a-9095-13ad166e7289", ResourceVersion:"1686", Generation:0, 
CreationTimestamp:time.Date(2025, time.March, 17, 18, 0, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.30", ContainerID:"a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.108.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"46:52:92:4a:85:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:01:30.056281 containerd[1736]: 2025-03-17 18:01:30.052 [INFO][4280] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.30-k8s-test--pod--1-eth0" Mar 17 18:01:30.084455 containerd[1736]: time="2025-03-17T18:01:30.084337010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:01:30.084455 containerd[1736]: time="2025-03-17T18:01:30.084395111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:01:30.084455 containerd[1736]: time="2025-03-17T18:01:30.084411111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:01:30.084867 containerd[1736]: time="2025-03-17T18:01:30.084500813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:01:30.109933 systemd[1]: Started cri-containerd-a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623.scope - libcontainer container a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623. Mar 17 18:01:30.149982 containerd[1736]: time="2025-03-17T18:01:30.149943868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:df132e43-99b0-447a-9095-13ad166e7289,Namespace:default,Attempt:0,} returns sandbox id \"a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623\"" Mar 17 18:01:30.153353 containerd[1736]: time="2025-03-17T18:01:30.153215741Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:01:30.408911 kubelet[2559]: E0317 18:01:30.408749 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:30.616192 containerd[1736]: time="2025-03-17T18:01:30.616133732Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:01:30.618883 containerd[1736]: time="2025-03-17T18:01:30.618818892Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 17 18:01:30.621375 containerd[1736]: time="2025-03-17T18:01:30.621336448Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 468.063906ms" Mar 17 18:01:30.621375 containerd[1736]: 
time="2025-03-17T18:01:30.621369449Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 18:01:30.623465 containerd[1736]: time="2025-03-17T18:01:30.623427694Z" level=info msg="CreateContainer within sandbox \"a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 18:01:30.661670 containerd[1736]: time="2025-03-17T18:01:30.661551642Z" level=info msg="CreateContainer within sandbox \"a00e437342da0317e44c51169240063fe63ef8c014267bf8b050e9ffbf899623\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5a0da201ce8b2ec2f74c7f333e66602097807718be86345090f6b87eeb805a12\"" Mar 17 18:01:30.662484 containerd[1736]: time="2025-03-17T18:01:30.662449862Z" level=info msg="StartContainer for \"5a0da201ce8b2ec2f74c7f333e66602097807718be86345090f6b87eeb805a12\"" Mar 17 18:01:30.689904 systemd[1]: Started cri-containerd-5a0da201ce8b2ec2f74c7f333e66602097807718be86345090f6b87eeb805a12.scope - libcontainer container 5a0da201ce8b2ec2f74c7f333e66602097807718be86345090f6b87eeb805a12. 
Mar 17 18:01:30.718265 containerd[1736]: time="2025-03-17T18:01:30.718224702Z" level=info msg="StartContainer for \"5a0da201ce8b2ec2f74c7f333e66602097807718be86345090f6b87eeb805a12\" returns successfully" Mar 17 18:01:31.349019 systemd-networkd[1521]: cali5ec59c6bf6e: Gained IPv6LL Mar 17 18:01:31.409833 kubelet[2559]: E0317 18:01:31.409768 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:31.733683 kubelet[2559]: I0317 18:01:31.733522 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.263843531 podStartE2EDuration="32.733502572s" podCreationTimestamp="2025-03-17 18:00:59 +0000 UTC" firstStartedPulling="2025-03-17 18:01:30.152458024 +0000 UTC m=+72.808627120" lastFinishedPulling="2025-03-17 18:01:30.622117165 +0000 UTC m=+73.278286161" observedRunningTime="2025-03-17 18:01:31.733249667 +0000 UTC m=+74.389418763" watchObservedRunningTime="2025-03-17 18:01:31.733502572 +0000 UTC m=+74.389671668" Mar 17 18:01:32.410642 kubelet[2559]: E0317 18:01:32.410579 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:33.411038 kubelet[2559]: E0317 18:01:33.410973 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:34.411471 kubelet[2559]: E0317 18:01:34.411408 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:35.412313 kubelet[2559]: E0317 18:01:35.412240 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:36.412568 kubelet[2559]: E0317 18:01:36.412499 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:37.413618 
kubelet[2559]: E0317 18:01:37.413564 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:38.360449 kubelet[2559]: E0317 18:01:38.360385 2559 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:38.414356 kubelet[2559]: E0317 18:01:38.414295 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:39.415078 kubelet[2559]: E0317 18:01:39.415020 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:40.416065 kubelet[2559]: E0317 18:01:40.416004 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:41.416936 kubelet[2559]: E0317 18:01:41.416874 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:42.417819 kubelet[2559]: E0317 18:01:42.417763 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:01:43.418897 kubelet[2559]: E0317 18:01:43.418836 2559 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"