Mar 17 17:56:25.071913 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025 Mar 17 17:56:25.071952 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:56:25.071967 kernel: BIOS-provided physical RAM map: Mar 17 17:56:25.071978 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 17 17:56:25.071988 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Mar 17 17:56:25.071998 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Mar 17 17:56:25.072011 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Mar 17 17:56:25.072025 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Mar 17 17:56:25.072035 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Mar 17 17:56:25.072046 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Mar 17 17:56:25.072057 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Mar 17 17:56:25.072068 kernel: printk: bootconsole [earlyser0] enabled Mar 17 17:56:25.072078 kernel: NX (Execute Disable) protection: active Mar 17 17:56:25.072090 kernel: APIC: Static calls initialized Mar 17 17:56:25.072121 kernel: efi: EFI v2.7 by Microsoft Mar 17 17:56:25.072133 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018 Mar 17 17:56:25.072146 kernel: random: crng init done Mar 17 17:56:25.072158 kernel: secureboot: Secure boot disabled Mar 17 17:56:25.072170 kernel: SMBIOS 3.1.0 present. 
Mar 17 17:56:25.072182 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Mar 17 17:56:25.072194 kernel: Hypervisor detected: Microsoft Hyper-V Mar 17 17:56:25.072207 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Mar 17 17:56:25.072219 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Mar 17 17:56:25.072231 kernel: Hyper-V: Nested features: 0x1e0101 Mar 17 17:56:25.072245 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Mar 17 17:56:25.072257 kernel: Hyper-V: Using hypercall for remote TLB flush Mar 17 17:56:25.072269 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 17 17:56:25.072281 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 17 17:56:25.072294 kernel: tsc: Marking TSC unstable due to running on Hyper-V Mar 17 17:56:25.072306 kernel: tsc: Detected 2593.907 MHz processor Mar 17 17:56:25.072318 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 17 17:56:25.072331 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 17 17:56:25.072343 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Mar 17 17:56:25.072357 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 17 17:56:25.072370 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 17 17:56:25.072382 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Mar 17 17:56:25.072394 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Mar 17 17:56:25.072406 kernel: Using GB pages for direct mapping Mar 17 17:56:25.072419 kernel: ACPI: Early table checksum verification disabled Mar 17 17:56:25.072431 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Mar 17 17:56:25.072448 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072463 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072476 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Mar 17 17:56:25.072489 kernel: ACPI: FACS 0x000000003FFFE000 000040 Mar 17 17:56:25.072502 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072516 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072529 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072544 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072557 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072570 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072583 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072596 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Mar 17 17:56:25.072609 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Mar 17 17:56:25.072622 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Mar 17 17:56:25.072635 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Mar 17 17:56:25.072646 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Mar 17 17:56:25.072660 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Mar 17 17:56:25.072673 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Mar 17 17:56:25.072686 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Mar 17 17:56:25.072699 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Mar 17 17:56:25.072710 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Mar 17 17:56:25.072737 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 17 17:56:25.072750 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 17 17:56:25.072762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Mar 17 17:56:25.072774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Mar 17 17:56:25.072791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Mar 17 17:56:25.072805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Mar 17 17:56:25.072816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Mar 17 17:56:25.072833 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Mar 17 17:56:25.072845 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Mar 17 17:56:25.072858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Mar 17 17:56:25.072871 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Mar 17 17:56:25.072885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Mar 17 17:56:25.072902 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Mar 17 17:56:25.072916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Mar 17 17:56:25.072930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Mar 17 17:56:25.072944 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Mar 17 17:56:25.072958 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Mar 17 17:56:25.072972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Mar 17 17:56:25.072986 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Mar 17 17:56:25.073000 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Mar 17 17:56:25.073014 kernel: Zone ranges: Mar 17 17:56:25.073031 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 17:56:25.073045 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 17 17:56:25.073058 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Mar 17 17:56:25.073071 kernel: Movable zone start for each node Mar 17 17:56:25.073084 kernel: Early memory node ranges Mar 17 17:56:25.079382 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 17 17:56:25.079408 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Mar 17 17:56:25.079423 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Mar 17 17:56:25.079437 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Mar 17 17:56:25.079455 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Mar 17 17:56:25.079469 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 17:56:25.079484 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 17 17:56:25.079498 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Mar 17 17:56:25.079509 kernel: ACPI: 
PM-Timer IO Port: 0x408 Mar 17 17:56:25.079524 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Mar 17 17:56:25.079537 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Mar 17 17:56:25.079551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 17 17:56:25.079564 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 17:56:25.079583 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Mar 17 17:56:25.079595 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 17 17:56:25.079609 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Mar 17 17:56:25.079624 kernel: Booting paravirtualized kernel on Hyper-V Mar 17 17:56:25.079637 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 17:56:25.079650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 17 17:56:25.079662 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Mar 17 17:56:25.079674 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Mar 17 17:56:25.079685 kernel: pcpu-alloc: [0] 0 1 Mar 17 17:56:25.079703 kernel: Hyper-V: PV spinlocks enabled Mar 17 17:56:25.079717 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 17 17:56:25.079731 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:56:25.079746 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:56:25.079760 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 17 17:56:25.079774 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:56:25.079786 kernel: Fallback order for Node 0: 0 Mar 17 17:56:25.079797 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Mar 17 17:56:25.079813 kernel: Policy zone: Normal Mar 17 17:56:25.079838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:56:25.079850 kernel: software IO TLB: area num 2. Mar 17 17:56:25.079866 kernel: Memory: 8077032K/8387460K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 310172K reserved, 0K cma-reserved) Mar 17 17:56:25.079880 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 17 17:56:25.079894 kernel: ftrace: allocating 37938 entries in 149 pages Mar 17 17:56:25.079908 kernel: ftrace: allocated 149 pages with 4 groups Mar 17 17:56:25.079922 kernel: Dynamic Preempt: voluntary Mar 17 17:56:25.079936 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:56:25.079950 kernel: rcu: RCU event tracing is enabled. Mar 17 17:56:25.079964 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 17 17:56:25.079982 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:56:25.079996 kernel: Rude variant of Tasks RCU enabled. Mar 17 17:56:25.080009 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:56:25.080023 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 17:56:25.080037 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 17 17:56:25.080092 kernel: Using NULL legacy PIC Mar 17 17:56:25.080140 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Mar 17 17:56:25.080153 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:56:25.080166 kernel: Console: colour dummy device 80x25 Mar 17 17:56:25.080180 kernel: printk: console [tty1] enabled Mar 17 17:56:25.080193 kernel: printk: console [ttyS0] enabled Mar 17 17:56:25.080212 kernel: printk: bootconsole [earlyser0] disabled Mar 17 17:56:25.080224 kernel: ACPI: Core revision 20230628 Mar 17 17:56:25.080238 kernel: Failed to register legacy timer interrupt Mar 17 17:56:25.080251 kernel: APIC: Switch to symmetric I/O mode setup Mar 17 17:56:25.080270 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 17 17:56:25.080286 kernel: Hyper-V: Using IPI hypercalls Mar 17 17:56:25.080301 kernel: APIC: send_IPI() replaced with hv_send_ipi() Mar 17 17:56:25.080316 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Mar 17 17:56:25.080331 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Mar 17 17:56:25.080347 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Mar 17 17:56:25.080362 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Mar 17 17:56:25.080378 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Mar 17 17:56:25.080393 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Mar 17 17:56:25.080411 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 17 17:56:25.080425 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 17 17:56:25.080440 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 17 17:56:25.080455 kernel: Spectre V2 : Mitigation: Retpolines Mar 17 17:56:25.080469 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 17 17:56:25.080484 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 17 17:56:25.080499 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Mar 17 17:56:25.080514 kernel: RETBleed: Vulnerable Mar 17 17:56:25.080529 kernel: Speculative Store Bypass: Vulnerable Mar 17 17:56:25.080543 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Mar 17 17:56:25.080561 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 17 17:56:25.080576 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 17 17:56:25.080591 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 17 17:56:25.080606 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 17 17:56:25.080621 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 17 17:56:25.080635 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 17 17:56:25.080650 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 17 17:56:25.080664 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 17 17:56:25.080679 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Mar 17 17:56:25.080694 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Mar 17 17:56:25.080708 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Mar 17 17:56:25.080725 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Mar 17 17:56:25.080740 kernel: Freeing SMP alternatives memory: 32K Mar 17 17:56:25.080755 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:56:25.080769 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:56:25.080783 kernel: landlock: Up and running. Mar 17 17:56:25.080798 kernel: SELinux: Initializing. Mar 17 17:56:25.080813 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 17 17:56:25.080828 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 17 17:56:25.080843 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 17 17:56:25.080858 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:56:25.080873 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:56:25.080891 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:56:25.080906 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 17 17:56:25.080921 kernel: signal: max sigframe size: 3632 Mar 17 17:56:25.080936 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:56:25.080951 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:56:25.080966 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 17 17:56:25.080980 kernel: smp: Bringing up secondary CPUs ... Mar 17 17:56:25.080995 kernel: smpboot: x86: Booting SMP configuration: Mar 17 17:56:25.081010 kernel: .... node #0, CPUs: #1 Mar 17 17:56:25.081028 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Mar 17 17:56:25.081044 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Mar 17 17:56:25.081058 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 17:56:25.081073 kernel: smpboot: Max logical packages: 1 Mar 17 17:56:25.081088 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Mar 17 17:56:25.081112 kernel: devtmpfs: initialized Mar 17 17:56:25.081124 kernel: x86/mm: Memory block size: 128MB Mar 17 17:56:25.081137 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Mar 17 17:56:25.081154 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:56:25.081167 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 17 17:56:25.081180 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:56:25.081195 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:56:25.081210 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:56:25.081225 kernel: audit: type=2000 audit(1742234184.027:1): state=initialized audit_enabled=0 res=1 Mar 17 17:56:25.081240 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:56:25.081255 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 17 17:56:25.081270 kernel: cpuidle: using governor menu Mar 17 17:56:25.081288 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:56:25.081303 kernel: dca service started, version 1.12.1 Mar 17 17:56:25.081318 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Mar 17 17:56:25.081333 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 17 17:56:25.081348 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:56:25.081363 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:56:25.081378 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:56:25.081393 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:56:25.081407 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:56:25.081425 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:56:25.081440 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:56:25.081455 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:56:25.081470 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:56:25.081485 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 17 17:56:25.081500 kernel: ACPI: Interpreter enabled Mar 17 17:56:25.081515 kernel: ACPI: PM: (supports S0 S5) Mar 17 17:56:25.081530 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 17:56:25.081545 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 17:56:25.081563 kernel: PCI: Ignoring E820 reservations for host bridge windows Mar 17 17:56:25.081578 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Mar 17 17:56:25.081593 kernel: iommu: Default domain type: Translated Mar 17 17:56:25.081608 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 17:56:25.081623 kernel: efivars: Registered efivars operations Mar 17 17:56:25.081638 kernel: PCI: Using ACPI for IRQ routing Mar 17 17:56:25.081652 kernel: PCI: System does not support PCI Mar 17 17:56:25.081667 kernel: vgaarb: loaded Mar 17 17:56:25.081681 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Mar 17 17:56:25.081699 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:56:25.081714 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:56:25.081729 kernel: 
pnp: PnP ACPI init Mar 17 17:56:25.081744 kernel: pnp: PnP ACPI: found 3 devices Mar 17 17:56:25.081759 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 17:56:25.081774 kernel: NET: Registered PF_INET protocol family Mar 17 17:56:25.081789 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 17 17:56:25.081804 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Mar 17 17:56:25.081820 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:56:25.081838 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:56:25.081853 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Mar 17 17:56:25.081867 kernel: TCP: Hash tables configured (established 65536 bind 65536) Mar 17 17:56:25.081882 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 17 17:56:25.081897 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 17 17:56:25.081912 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:56:25.081927 kernel: NET: Registered PF_XDP protocol family Mar 17 17:56:25.081942 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:56:25.081957 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 17 17:56:25.081974 kernel: software IO TLB: mapped [mem 0x000000003ad8e000-0x000000003ed8e000] (64MB) Mar 17 17:56:25.081989 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 17 17:56:25.082005 kernel: Initialise system trusted keyrings Mar 17 17:56:25.082019 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Mar 17 17:56:25.082034 kernel: Key type asymmetric registered Mar 17 17:56:25.082049 kernel: Asymmetric key parser 'x509' registered Mar 17 17:56:25.082063 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 17 17:56:25.082078 kernel: io scheduler mq-deadline registered Mar 17 17:56:25.085176 kernel: io scheduler kyber registered Mar 17 17:56:25.085209 kernel: io scheduler bfq registered Mar 17 17:56:25.085225 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 17:56:25.085241 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:56:25.085256 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 17:56:25.085271 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Mar 17 17:56:25.085286 kernel: i8042: PNP: No PS/2 controller found. 
Mar 17 17:56:25.085479 kernel: rtc_cmos 00:02: registered as rtc0 Mar 17 17:56:25.085605 kernel: rtc_cmos 00:02: setting system clock to 2025-03-17T17:56:24 UTC (1742234184) Mar 17 17:56:25.085727 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Mar 17 17:56:25.085747 kernel: intel_pstate: CPU model not supported Mar 17 17:56:25.085762 kernel: efifb: probing for efifb Mar 17 17:56:25.085776 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 17 17:56:25.085791 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 17 17:56:25.085806 kernel: efifb: scrolling: redraw Mar 17 17:56:25.085819 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 17 17:56:25.085834 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 17:56:25.085851 kernel: fb0: EFI VGA frame buffer device Mar 17 17:56:25.085866 kernel: pstore: Using crash dump compression: deflate Mar 17 17:56:25.085882 kernel: pstore: Registered efi_pstore as persistent store backend Mar 17 17:56:25.085897 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:56:25.085912 kernel: Segment Routing with IPv6 Mar 17 17:56:25.085927 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:56:25.085942 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:56:25.085957 kernel: Key type dns_resolver registered Mar 17 17:56:25.085971 kernel: IPI shorthand broadcast: enabled Mar 17 17:56:25.085986 kernel: sched_clock: Marking stable (820003300, 44823900)->(1063631600, -198804400) Mar 17 17:56:25.086004 kernel: registered taskstats version 1 Mar 17 17:56:25.086019 kernel: Loading compiled-in X.509 certificates Mar 17 17:56:25.086034 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0' Mar 17 17:56:25.086049 kernel: Key type .fscrypt registered Mar 17 17:56:25.086064 kernel: Key type fscrypt-provisioning registered Mar 17 17:56:25.086079 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:56:25.086106 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:56:25.086121 kernel: ima: No architecture policies found Mar 17 17:56:25.086139 kernel: clk: Disabling unused clocks Mar 17 17:56:25.086154 kernel: Freeing unused kernel image (initmem) memory: 42992K Mar 17 17:56:25.086169 kernel: Write protecting the kernel read-only data: 36864k Mar 17 17:56:25.086184 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Mar 17 17:56:25.086199 kernel: Run /init as init process Mar 17 17:56:25.086214 kernel: with arguments: Mar 17 17:56:25.086229 kernel: /init Mar 17 17:56:25.086243 kernel: with environment: Mar 17 17:56:25.086257 kernel: HOME=/ Mar 17 17:56:25.086272 kernel: TERM=linux Mar 17 17:56:25.086289 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:56:25.086307 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:56:25.086326 systemd[1]: Detected virtualization microsoft. Mar 17 17:56:25.086342 systemd[1]: Detected architecture x86-64. Mar 17 17:56:25.086357 systemd[1]: Running in initrd. Mar 17 17:56:25.086372 systemd[1]: No hostname configured, using default hostname. Mar 17 17:56:25.086387 systemd[1]: Hostname set to . Mar 17 17:56:25.086406 systemd[1]: Initializing machine ID from random generator. 
Mar 17 17:56:25.086422 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:56:25.086437 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:56:25.086453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:56:25.086469 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:56:25.086485 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:56:25.086502 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:56:25.086518 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:56:25.086539 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:56:25.086555 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:56:25.086571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:56:25.086587 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:56:25.086602 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:56:25.086618 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:56:25.086633 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:56:25.086652 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:56:25.086668 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:56:25.086683 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:56:25.086699 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:56:25.086715 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:56:25.086731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:56:25.086747 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:56:25.086763 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:56:25.086782 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:56:25.086798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:56:25.086814 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:56:25.086830 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:56:25.086846 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:56:25.086861 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:56:25.086877 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:56:25.086920 systemd-journald[177]: Collecting audit messages is disabled. Mar 17 17:56:25.086960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:56:25.086977 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:56:25.086992 systemd-journald[177]: Journal started Mar 17 17:56:25.087030 systemd-journald[177]: Runtime Journal (/run/log/journal/e06814ede8cb4f27ad0e750f06dd7fce) is 8.0M, max 158.8M, 150.8M free. 
Mar 17 17:56:25.071261 systemd-modules-load[178]: Inserted module 'overlay' Mar 17 17:56:25.093107 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:56:25.102265 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:56:25.102850 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:56:25.107226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:56:25.121129 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:56:25.122345 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:56:25.126912 kernel: Bridge firewalling registered Mar 17 17:56:25.123733 systemd-modules-load[178]: Inserted module 'br_netfilter' Mar 17 17:56:25.131957 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:56:25.138817 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:56:25.140073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:56:25.149226 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:56:25.152151 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:56:25.155207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:56:25.175560 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:56:25.179411 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:56:25.187700 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:56:25.198307 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:56:25.203266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:56:25.210403 dracut-cmdline[213]: dracut-dracut-053 Mar 17 17:56:25.213858 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:56:25.218911 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:56:25.268821 systemd-resolved[224]: Positive Trust Anchors: Mar 17 17:56:25.271175 systemd-resolved[224]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:56:25.271237 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:56:25.294527 systemd-resolved[224]: Defaulting to hostname 'linux'. Mar 17 17:56:25.295802 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:56:25.303625 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:56:25.315116 kernel: SCSI subsystem initialized Mar 17 17:56:25.325112 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:56:25.336118 kernel: iscsi: registered transport (tcp) Mar 17 17:56:25.357663 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:56:25.357754 kernel: QLogic iSCSI HBA Driver Mar 17 17:56:25.393311 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:56:25.401256 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:56:25.429516 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:56:25.429629 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:56:25.432755 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:56:25.473116 kernel: raid6: avx512x4 gen() 18590 MB/s Mar 17 17:56:25.493108 kernel: raid6: avx512x2 gen() 18555 MB/s Mar 17 17:56:25.512106 kernel: raid6: avx512x1 gen() 18285 MB/s Mar 17 17:56:25.531105 kernel: raid6: avx2x4 gen() 18363 MB/s Mar 17 17:56:25.550112 kernel: raid6: avx2x2 gen() 18333 MB/s Mar 17 17:56:25.570297 kernel: raid6: avx2x1 gen() 13917 MB/s Mar 17 17:56:25.570332 kernel: raid6: using algorithm avx512x4 gen() 18590 MB/s Mar 17 17:56:25.590678 kernel: raid6: .... xor() 7138 MB/s, rmw enabled Mar 17 17:56:25.590719 kernel: raid6: using avx512x2 recovery algorithm Mar 17 17:56:25.613122 kernel: xor: automatically using best checksumming function avx Mar 17 17:56:25.765138 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:56:25.774836 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:56:25.784250 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:56:25.797548 systemd-udevd[398]: Using default interface naming scheme 'v255'. Mar 17 17:56:25.802027 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:56:25.817316 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:56:25.830676 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Mar 17 17:56:25.857617 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:56:25.869505 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:56:25.908476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:56:25.921261 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 17 17:56:25.938976 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:56:25.948207 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:56:25.954836 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:56:25.960982 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:56:25.973303 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:56:25.985115 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 17:56:26.008857 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:56:26.026953 kernel: AVX2 version of gcm_enc/dec engaged. Mar 17 17:56:26.027018 kernel: AES CTR mode by8 optimization enabled Mar 17 17:56:26.031129 kernel: hv_vmbus: Vmbus version:5.2 Mar 17 17:56:26.036869 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:56:26.037051 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:56:26.047894 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:56:26.055149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:56:26.058218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:56:26.073880 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 17 17:56:26.073914 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 17 17:56:26.062175 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:56:26.079389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:56:26.088502 kernel: PTP clock support registered Mar 17 17:56:26.105115 kernel: hv_utils: Registering HyperV Utility Driver Mar 17 17:56:26.105163 kernel: hv_vmbus: registering driver hv_utils Mar 17 17:56:26.109193 kernel: hv_vmbus: registering driver hv_storvsc Mar 17 17:56:26.677216 kernel: hv_utils: Heartbeat IC version 3.0 Mar 17 17:56:26.677244 kernel: hv_utils: Shutdown IC version 3.2 Mar 17 17:56:26.677257 kernel: hv_utils: TimeSync IC version 4.0 Mar 17 17:56:26.677268 kernel: scsi host0: storvsc_host_t Mar 17 17:56:26.677537 kernel: scsi host1: storvsc_host_t Mar 17 17:56:26.674964 systemd-resolved[224]: Clock change detected. Flushing caches. Mar 17 17:56:26.693173 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 17 17:56:26.696653 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 17 17:56:26.701485 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Mar 17 17:56:26.701526 kernel: hv_vmbus: registering driver hv_netvsc Mar 17 17:56:26.710900 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 17 17:56:26.715042 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:56:26.714464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:56:26.732756 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 17 17:56:26.755961 kernel: hv_vmbus: registering driver hid_hyperv Mar 17 17:56:26.755996 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 17 17:56:26.756018 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 17 17:56:26.767998 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 17 17:56:26.775266 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:56:26.775291 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 17 17:56:26.772137 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:56:26.794334 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 17 17:56:26.807504 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 17 17:56:26.807719 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 17 17:56:26.807899 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 17 17:56:26.808060 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 17 17:56:26.808232 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:56:26.808260 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 17 17:56:26.916591 kernel: hv_netvsc 000d3adf-9277-000d-3adf-9277000d3adf eth0: VF slot 1 added Mar 17 17:56:26.925642 kernel: hv_vmbus: registering driver hv_pci Mar 17 17:56:26.930463 kernel: hv_pci b5c7c53e-f5f4-462e-bdf5-988b62020a52: PCI VMBus probing: Using version 0x10004 Mar 17 17:56:27.009466 kernel: hv_pci b5c7c53e-f5f4-462e-bdf5-988b62020a52: PCI host bridge to bus f5f4:00 Mar 17 17:56:27.010088 kernel: pci_bus f5f4:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Mar 17 17:56:27.010288 kernel: pci_bus f5f4:00: No busn resource found for root bus, will use [bus 00-ff] Mar 17 17:56:27.010450 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (453) Mar 17 17:56:27.010472 kernel: pci f5f4:00:02.0: [15b3:1016] type 00 class 0x020000 Mar 17 17:56:27.010987 kernel: pci f5f4:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 17 17:56:27.011156 kernel: pci f5f4:00:02.0: enabling Extended Tags Mar 17 17:56:27.011325 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (465) Mar 17 17:56:27.011347 kernel: pci f5f4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f5f4:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Mar 17 17:56:27.011512 kernel: pci_bus f5f4:00: busn_res: [bus 00-ff] end is updated to 00 Mar 17 17:56:27.012134 kernel: pci f5f4:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 17 17:56:26.954349 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 17 17:56:26.999533 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 17 17:56:27.014340 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 17 17:56:27.032185 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 17 17:56:27.035283 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 17 17:56:27.050856 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 17 17:56:27.071592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:56:27.078592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:56:27.325204 kernel: mlx5_core f5f4:00:02.0: enabling device (0000 -> 0002) Mar 17 17:56:27.567243 kernel: mlx5_core f5f4:00:02.0: firmware version: 14.30.5000 Mar 17 17:56:27.567475 kernel: hv_netvsc 000d3adf-9277-000d-3adf-9277000d3adf eth0: VF registering: eth1 Mar 17 17:56:27.567655 kernel: mlx5_core f5f4:00:02.0 eth1: joined to eth0 Mar 17 17:56:27.567831 kernel: mlx5_core f5f4:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 17 17:56:27.574589 kernel: mlx5_core f5f4:00:02.0 enP62964s1: renamed from eth1 Mar 17 17:56:28.081645 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:56:28.083421 disk-uuid[592]: The operation has completed successfully. Mar 17 17:56:28.159264 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:56:28.159387 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:56:28.188776 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:56:28.195402 sh[688]: Success Mar 17 17:56:28.213601 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 17:56:28.288336 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:56:28.299684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:56:28.305066 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 17:56:28.323976 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a Mar 17 17:56:28.324061 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:56:28.327586 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:56:28.330514 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:56:28.332964 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:56:28.409117 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:56:28.412292 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:56:28.424864 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:56:28.429755 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:56:28.454090 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:56:28.454172 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:56:28.454193 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:56:28.462600 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:56:28.473627 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:56:28.481596 kernel: BTRFS info (device sda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:56:28.488335 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:56:28.498777 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:56:28.530696 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:56:28.539884 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 17 17:56:28.576012 systemd-networkd[872]: lo: Link UP Mar 17 17:56:28.576021 systemd-networkd[872]: lo: Gained carrier Mar 17 17:56:28.578212 systemd-networkd[872]: Enumeration completed Mar 17 17:56:28.578327 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:56:28.579397 systemd[1]: Reached target network.target - Network. Mar 17 17:56:28.580996 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:56:28.581001 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:56:28.650596 kernel: mlx5_core f5f4:00:02.0 enP62964s1: Link up Mar 17 17:56:28.689640 kernel: hv_netvsc 000d3adf-9277-000d-3adf-9277000d3adf eth0: Data path switched to VF: enP62964s1 Mar 17 17:56:28.690305 systemd-networkd[872]: enP62964s1: Link UP Mar 17 17:56:28.690433 systemd-networkd[872]: eth0: Link UP Mar 17 17:56:28.690615 systemd-networkd[872]: eth0: Gained carrier Mar 17 17:56:28.690629 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:56:28.706002 systemd-networkd[872]: enP62964s1: Gained carrier Mar 17 17:56:28.742890 ignition[826]: Ignition 2.20.0 Mar 17 17:56:28.742964 ignition[826]: Stage: fetch-offline Mar 17 17:56:28.743024 ignition[826]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:28.745966 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 17 17:56:28.743037 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:28.749947 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:56:28.743172 ignition[826]: parsed url from cmdline: "" Mar 17 17:56:28.743178 ignition[826]: no config URL provided Mar 17 17:56:28.743185 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:56:28.743196 ignition[826]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:56:28.743206 ignition[826]: failed to fetch config: resource requires networking Mar 17 17:56:28.767732 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 17 17:56:28.744968 ignition[826]: Ignition finished successfully Mar 17 17:56:28.785332 ignition[882]: Ignition 2.20.0 Mar 17 17:56:28.785344 ignition[882]: Stage: fetch Mar 17 17:56:28.785586 ignition[882]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:28.785603 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:28.785703 ignition[882]: parsed url from cmdline: "" Mar 17 17:56:28.785706 ignition[882]: no config URL provided Mar 17 17:56:28.785710 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:56:28.785717 ignition[882]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:56:28.785746 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 17 17:56:28.861732 ignition[882]: GET result: OK Mar 17 17:56:28.861813 ignition[882]: config has been read from IMDS userdata Mar 17 17:56:28.861836 ignition[882]: parsing config with SHA512: 943e9b5b562b0690b1228e2c8353441458b06af20ee492ea3056b975e298ba519595f9dfe872ccc3598a79658dc1f338363ef3255e83aae7b21e1a6712096786 Mar 17 17:56:28.866244 unknown[882]: fetched base config from "system" Mar 17 17:56:28.866263 unknown[882]: fetched base config from "system" Mar 17 17:56:28.866689 ignition[882]: fetch: fetch complete Mar 17 17:56:28.866272 unknown[882]: fetched user config from "azure" Mar 17 17:56:28.866696 ignition[882]: fetch: fetch passed Mar 17 17:56:28.869167 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 17 17:56:28.866754 ignition[882]: Ignition finished successfully Mar 17 17:56:28.882840 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:56:28.898122 ignition[888]: Ignition 2.20.0 Mar 17 17:56:28.898133 ignition[888]: Stage: kargs Mar 17 17:56:28.898347 ignition[888]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:28.898361 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:28.899140 ignition[888]: kargs: kargs passed Mar 17 17:56:28.899185 ignition[888]: Ignition finished successfully Mar 17 17:56:28.909681 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:56:28.918736 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:56:28.932618 ignition[894]: Ignition 2.20.0 Mar 17 17:56:28.932630 ignition[894]: Stage: disks Mar 17 17:56:28.934454 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:56:28.932856 ignition[894]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:28.937671 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:56:28.932869 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:28.941168 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:56:28.933618 ignition[894]: disks: disks passed Mar 17 17:56:28.944248 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:56:28.933662 ignition[894]: Ignition finished successfully Mar 17 17:56:28.949335 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:56:28.953965 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:56:28.972411 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 17 17:56:28.994282 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 17 17:56:28.999543 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:56:29.014703 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:56:29.109599 kernel: EXT4-fs (sda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none. Mar 17 17:56:29.110013 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:56:29.114623 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:56:29.132819 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:56:29.138124 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:56:29.145776 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 17 17:56:29.157790 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (913) Mar 17 17:56:29.150146 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:56:29.150179 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:56:29.154514 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:56:29.168717 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:56:29.184558 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:56:29.184630 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:56:29.184644 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:56:29.189330 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:56:29.189728 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:56:29.365268 coreos-metadata[915]: Mar 17 17:56:29.365 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 17:56:29.370090 coreos-metadata[915]: Mar 17 17:56:29.367 INFO Fetch successful Mar 17 17:56:29.370090 coreos-metadata[915]: Mar 17 17:56:29.367 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 17 17:56:29.378852 coreos-metadata[915]: Mar 17 17:56:29.378 INFO Fetch successful Mar 17 17:56:29.382666 coreos-metadata[915]: Mar 17 17:56:29.382 INFO wrote hostname ci-4152.2.2-a-99edcdcd5a to /sysroot/etc/hostname Mar 17 17:56:29.384566 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:56:29.402753 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:56:29.416414 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:56:29.425392 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:56:29.432829 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:56:29.693107 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:56:29.704711 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:56:29.709746 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:56:29.721706 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 17 17:56:29.727514 kernel: BTRFS info (device sda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:56:29.748638 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:56:29.757410 ignition[1032]: INFO : Ignition 2.20.0 Mar 17 17:56:29.757410 ignition[1032]: INFO : Stage: mount Mar 17 17:56:29.763583 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:29.763583 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:29.763583 ignition[1032]: INFO : mount: mount passed Mar 17 17:56:29.763583 ignition[1032]: INFO : Ignition finished successfully Mar 17 17:56:29.759404 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:56:29.775798 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:56:29.788773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:56:29.810594 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1043) Mar 17 17:56:29.814589 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:56:29.814635 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:56:29.818898 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:56:29.824589 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:56:29.825663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:56:29.847189 ignition[1059]: INFO : Ignition 2.20.0 Mar 17 17:56:29.847189 ignition[1059]: INFO : Stage: files Mar 17 17:56:29.851281 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:29.851281 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:29.851281 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:56:29.859449 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:56:29.859449 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:56:29.882195 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:56:29.886071 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:56:29.889853 unknown[1059]: wrote ssh authorized keys file for user: core Mar 17 17:56:29.892516 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:56:29.896456 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 17:56:29.900726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 17:56:30.443724 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Mar 17 17:56:30.481868 systemd-networkd[872]: enP62964s1: Gained IPv6LL Mar 17 17:56:30.545722 systemd-networkd[872]: eth0: Gained IPv6LL Mar 17 17:56:30.736781 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:56:30.736781 ignition[1059]: INFO : files: op(8): [started] processing unit "containerd.service" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: op(8): [finished] processing unit "containerd.service" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: files passed Mar 17 17:56:30.748203 ignition[1059]: INFO : Ignition finished successfully Mar 17 17:56:30.739358 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:56:30.766458 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:56:30.774810 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:56:30.778510 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:56:30.782439 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:56:30.801213 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:56:30.801213 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:56:30.809166 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:56:30.805386 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:56:30.812811 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:56:30.829732 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
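Operations op(6) and op(7) above place a Kubernetes sysext image under /opt/extensions and expose it through a symlink in /etc/extensions. The sketch below reproduces that layout; the URL and both paths are taken from the log, while the /sysroot prefix and the use of urlretrieve are assumptions for illustration rather than Ignition's implementation.

    # Sketch: download the sysext image and create the /etc/extensions symlink,
    # mirroring Ignition ops op(6)/op(7) above.
    import os
    import urllib.request

    ROOT = "/sysroot"  # assumed; the real run writes into the mounted sysroot
    URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
           "latest/kubernetes-v1.30.1-x86-64.raw")
    target = "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"

    raw_path = ROOT + target
    link_path = ROOT + "/etc/extensions/kubernetes.raw"
    os.makedirs(os.path.dirname(raw_path), exist_ok=True)
    os.makedirs(os.path.dirname(link_path), exist_ok=True)
    urllib.request.urlretrieve(URL, raw_path)
    os.symlink(target, link_path)  # link resolves inside the target root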
Mar 17 17:56:30.854339 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:56:30.854458 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:56:30.860699 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:56:30.866486 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:56:30.869219 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:56:30.877997 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:56:30.889799 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:56:30.900848 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:56:30.912259 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:56:30.913358 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:56:30.913772 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:56:30.914159 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:56:30.914269 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:56:30.915422 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:56:30.916300 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:56:30.916717 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:56:30.917135 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:56:30.917543 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:56:30.917973 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:56:30.918386 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:56:30.918826 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:56:30.919216 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:56:30.919634 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:56:30.920014 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:56:30.920148 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:56:30.920906 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:56:30.921343 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:56:30.921686 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:56:30.958309 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:56:30.961704 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:56:30.961877 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:56:31.015823 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:56:31.016117 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:56:31.022192 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:56:31.022351 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:56:31.029640 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Mar 17 17:56:31.031457 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:56:31.044830 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:56:31.049344 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:56:31.049518 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:56:31.068119 ignition[1112]: INFO : Ignition 2.20.0 Mar 17 17:56:31.068119 ignition[1112]: INFO : Stage: umount Mar 17 17:56:31.076904 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:31.076904 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:31.076904 ignition[1112]: INFO : umount: umount passed Mar 17 17:56:31.076904 ignition[1112]: INFO : Ignition finished successfully Mar 17 17:56:31.070832 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:56:31.074767 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:56:31.074970 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:56:31.082020 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:56:31.082143 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:56:31.091993 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:56:31.092095 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:56:31.095873 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:56:31.095959 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:56:31.104286 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:56:31.108142 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:56:31.108207 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:56:31.114922 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:56:31.114971 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:56:31.120437 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 17:56:31.120484 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 17:56:31.125282 systemd[1]: Stopped target network.target - Network. Mar 17 17:56:31.129579 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:56:31.129657 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:56:31.135695 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:56:31.140524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:56:31.142633 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:56:31.145735 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:56:31.148046 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:56:31.148897 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:56:31.148942 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:56:31.149231 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:56:31.149263 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:56:31.149618 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:56:31.149659 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Mar 17 17:56:31.150079 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:56:31.150113 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:56:31.150663 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:56:31.150930 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:56:31.171655 systemd-networkd[872]: eth0: DHCPv6 lease lost Mar 17 17:56:31.172672 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:56:31.172787 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:56:31.177858 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:56:31.177962 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:56:31.183396 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:56:31.183440 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:56:31.202445 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:56:31.206909 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:56:31.206979 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:56:31.243684 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:56:31.243777 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:56:31.248727 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:56:31.251268 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:56:31.270690 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:56:31.270775 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:56:31.276627 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:56:31.292974 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:56:31.295354 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:56:31.296875 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:56:31.296943 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:56:31.298158 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:56:31.298189 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:56:31.298548 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:56:31.298600 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:56:31.299462 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:56:31.299498 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:56:31.300742 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:56:31.300780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:56:31.303723 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:56:31.358053 kernel: hv_netvsc 000d3adf-9277-000d-3adf-9277000d3adf eth0: Data path switched from VF: enP62964s1 Mar 17 17:56:31.304131 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:56:31.304177 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 17 17:56:31.304625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:56:31.304659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:56:31.314502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:56:31.314614 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:56:31.384720 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:56:31.384848 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:56:31.722435 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:56:31.722565 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:56:31.727723 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:56:31.734724 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:56:31.734793 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:56:31.748755 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:56:31.930093 systemd[1]: Switching root. Mar 17 17:56:31.963473 systemd-journald[177]: Journal stopped Mar 17 17:56:25.071913 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025 Mar 17 17:56:25.071952 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:56:25.071967 kernel: BIOS-provided physical RAM map: Mar 17 17:56:25.071978 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 17 17:56:25.071988 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Mar 17 17:56:25.071998 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Mar 17 17:56:25.072011 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Mar 17 17:56:25.072025 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Mar 17 17:56:25.072035 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Mar 17 17:56:25.072046 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Mar 17 17:56:25.072057 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Mar 17 17:56:25.072068 kernel: printk: bootconsole [earlyser0] enabled Mar 17 17:56:25.072078 kernel: NX (Execute Disable) protection: active Mar 17 17:56:25.072090 kernel: APIC: Static calls initialized Mar 17 17:56:25.072121 kernel: efi: EFI v2.7 by Microsoft Mar 17 17:56:25.072133 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018 Mar 17 17:56:25.072146 kernel: random: crng init done Mar 17 17:56:25.072158 kernel: secureboot: Secure boot disabled Mar 17 17:56:25.072170 kernel: SMBIOS 3.1.0 present. 
Mar 17 17:56:25.072182 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Mar 17 17:56:25.072194 kernel: Hypervisor detected: Microsoft Hyper-V Mar 17 17:56:25.072207 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Mar 17 17:56:25.072219 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Mar 17 17:56:25.072231 kernel: Hyper-V: Nested features: 0x1e0101 Mar 17 17:56:25.072245 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Mar 17 17:56:25.072257 kernel: Hyper-V: Using hypercall for remote TLB flush Mar 17 17:56:25.072269 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 17 17:56:25.072281 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 17 17:56:25.072294 kernel: tsc: Marking TSC unstable due to running on Hyper-V Mar 17 17:56:25.072306 kernel: tsc: Detected 2593.907 MHz processor Mar 17 17:56:25.072318 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 17 17:56:25.072331 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 17 17:56:25.072343 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Mar 17 17:56:25.072357 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 17 17:56:25.072370 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 17 17:56:25.072382 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Mar 17 17:56:25.072394 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Mar 17 17:56:25.072406 kernel: Using GB pages for direct mapping Mar 17 17:56:25.072419 kernel: ACPI: Early table checksum verification disabled Mar 17 17:56:25.072431 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Mar 17 17:56:25.072448 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072463 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072476 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Mar 17 17:56:25.072489 kernel: ACPI: FACS 0x000000003FFFE000 000040 Mar 17 17:56:25.072502 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072516 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072529 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072544 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072557 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072570 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072583 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 17:56:25.072596 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Mar 17 17:56:25.072609 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Mar 17 17:56:25.072622 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Mar 17 17:56:25.072635 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Mar 17 17:56:25.072646 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Mar 17 17:56:25.072660 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Mar 17 17:56:25.072673 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Mar 17 17:56:25.072686 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Mar 17 17:56:25.072699 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Mar 17 17:56:25.072710 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Mar 17 17:56:25.072737 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 17 17:56:25.072750 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 17 17:56:25.072762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Mar 17 17:56:25.072774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Mar 17 17:56:25.072791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Mar 17 17:56:25.072805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Mar 17 17:56:25.072816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Mar 17 17:56:25.072833 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Mar 17 17:56:25.072845 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Mar 17 17:56:25.072858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Mar 17 17:56:25.072871 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Mar 17 17:56:25.072885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Mar 17 17:56:25.072902 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Mar 17 17:56:25.072916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Mar 17 17:56:25.072930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Mar 17 17:56:25.072944 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Mar 17 17:56:25.072958 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Mar 17 17:56:25.072972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Mar 17 17:56:25.072986 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Mar 17 17:56:25.073000 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Mar 17 17:56:25.073014 kernel: Zone ranges: Mar 17 17:56:25.073031 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 17:56:25.073045 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 17 17:56:25.073058 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Mar 17 17:56:25.073071 kernel: Movable zone start for each node Mar 17 17:56:25.073084 kernel: Early memory node ranges Mar 17 17:56:25.079382 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 17 17:56:25.079408 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Mar 17 17:56:25.079423 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Mar 17 17:56:25.079437 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Mar 17 17:56:25.079455 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Mar 17 17:56:25.079469 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 17:56:25.079484 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 17 17:56:25.079498 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Mar 17 17:56:25.079509 kernel: ACPI: 
PM-Timer IO Port: 0x408 Mar 17 17:56:25.079524 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Mar 17 17:56:25.079537 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Mar 17 17:56:25.079551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 17 17:56:25.079564 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 17:56:25.079583 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Mar 17 17:56:25.079595 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 17 17:56:25.079609 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Mar 17 17:56:25.079624 kernel: Booting paravirtualized kernel on Hyper-V Mar 17 17:56:25.079637 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 17:56:25.079650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 17 17:56:25.079662 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Mar 17 17:56:25.079674 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Mar 17 17:56:25.079685 kernel: pcpu-alloc: [0] 0 1 Mar 17 17:56:25.079703 kernel: Hyper-V: PV spinlocks enabled Mar 17 17:56:25.079717 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 17 17:56:25.079731 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:56:25.079746 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:56:25.079760 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 17 17:56:25.079774 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:56:25.079786 kernel: Fallback order for Node 0: 0 Mar 17 17:56:25.079797 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Mar 17 17:56:25.079813 kernel: Policy zone: Normal Mar 17 17:56:25.079838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:56:25.079850 kernel: software IO TLB: area num 2. Mar 17 17:56:25.079866 kernel: Memory: 8077032K/8387460K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 310172K reserved, 0K cma-reserved) Mar 17 17:56:25.079880 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 17 17:56:25.079894 kernel: ftrace: allocating 37938 entries in 149 pages Mar 17 17:56:25.079908 kernel: ftrace: allocated 149 pages with 4 groups Mar 17 17:56:25.079922 kernel: Dynamic Preempt: voluntary Mar 17 17:56:25.079936 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:56:25.079950 kernel: rcu: RCU event tracing is enabled. Mar 17 17:56:25.079964 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 17 17:56:25.079982 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:56:25.079996 kernel: Rude variant of Tasks RCU enabled. Mar 17 17:56:25.080009 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:56:25.080023 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
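The kernel command line echoed above is a plain space-separated list of key=value tokens plus a few bare flags such as flatcar.autologin. A small parsing sketch, purely illustrative; it keeps the last value when a key repeats, as rootflags and console do here.

    # Sketch: split a kernel command line like the one logged above into a dict.
    # Bare flags map to an empty string; repeated keys keep the last value.
    def parse_cmdline(cmdline: str) -> dict:
        args = {}
        for token in cmdline.split():
            key, _, value = token.partition("=")
            args[key] = value
        return args

    with open("/proc/cmdline") as f:  # or paste the logged command line
        args = parse_cmdline(f.read())
    print(args.get("root"), args.get("flatcar.oem.id"), args.get("verity.usrhash"))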
Mar 17 17:56:25.080037 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 17 17:56:25.080092 kernel: Using NULL legacy PIC Mar 17 17:56:25.080140 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Mar 17 17:56:25.080153 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:56:25.080166 kernel: Console: colour dummy device 80x25 Mar 17 17:56:25.080180 kernel: printk: console [tty1] enabled Mar 17 17:56:25.080193 kernel: printk: console [ttyS0] enabled Mar 17 17:56:25.080212 kernel: printk: bootconsole [earlyser0] disabled Mar 17 17:56:25.080224 kernel: ACPI: Core revision 20230628 Mar 17 17:56:25.080238 kernel: Failed to register legacy timer interrupt Mar 17 17:56:25.080251 kernel: APIC: Switch to symmetric I/O mode setup Mar 17 17:56:25.080270 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 17 17:56:25.080286 kernel: Hyper-V: Using IPI hypercalls Mar 17 17:56:25.080301 kernel: APIC: send_IPI() replaced with hv_send_ipi() Mar 17 17:56:25.080316 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Mar 17 17:56:25.080331 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Mar 17 17:56:25.080347 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Mar 17 17:56:25.080362 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Mar 17 17:56:25.080378 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Mar 17 17:56:25.080393 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Mar 17 17:56:25.080411 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 17 17:56:25.080425 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 17 17:56:25.080440 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 17 17:56:25.080455 kernel: Spectre V2 : Mitigation: Retpolines Mar 17 17:56:25.080469 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 17 17:56:25.080484 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 17 17:56:25.080499 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Mar 17 17:56:25.080514 kernel: RETBleed: Vulnerable Mar 17 17:56:25.080529 kernel: Speculative Store Bypass: Vulnerable Mar 17 17:56:25.080543 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Mar 17 17:56:25.080561 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 17 17:56:25.080576 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 17 17:56:25.080591 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 17 17:56:25.080606 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 17 17:56:25.080621 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 17 17:56:25.080635 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 17 17:56:25.080650 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 17 17:56:25.080664 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 17 17:56:25.080679 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Mar 17 17:56:25.080694 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Mar 17 17:56:25.080708 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Mar 17 17:56:25.080725 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Mar 17 17:56:25.080740 kernel: Freeing SMP alternatives memory: 32K Mar 17 17:56:25.080755 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:56:25.080769 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:56:25.080783 kernel: landlock: Up and running. Mar 17 17:56:25.080798 kernel: SELinux: Initializing. Mar 17 17:56:25.080813 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 17 17:56:25.080828 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 17 17:56:25.080843 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 17 17:56:25.080858 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:56:25.080873 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:56:25.080891 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:56:25.080906 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 17 17:56:25.080921 kernel: signal: max sigframe size: 3632 Mar 17 17:56:25.080936 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:56:25.080951 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:56:25.080966 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 17 17:56:25.080980 kernel: smp: Bringing up secondary CPUs ... Mar 17 17:56:25.080995 kernel: smpboot: x86: Booting SMP configuration: Mar 17 17:56:25.081010 kernel: .... node #0, CPUs: #1 Mar 17 17:56:25.081028 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Mar 17 17:56:25.081044 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Mar 17 17:56:25.081058 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 17:56:25.081073 kernel: smpboot: Max logical packages: 1 Mar 17 17:56:25.081088 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Mar 17 17:56:25.081112 kernel: devtmpfs: initialized Mar 17 17:56:25.081124 kernel: x86/mm: Memory block size: 128MB Mar 17 17:56:25.081137 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Mar 17 17:56:25.081154 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:56:25.081167 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 17 17:56:25.081180 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:56:25.081195 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:56:25.081210 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:56:25.081225 kernel: audit: type=2000 audit(1742234184.027:1): state=initialized audit_enabled=0 res=1 Mar 17 17:56:25.081240 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:56:25.081255 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 17 17:56:25.081270 kernel: cpuidle: using governor menu Mar 17 17:56:25.081288 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:56:25.081303 kernel: dca service started, version 1.12.1 Mar 17 17:56:25.081318 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Mar 17 17:56:25.081333 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 17 17:56:25.081348 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:56:25.081363 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:56:25.081378 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:56:25.081393 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:56:25.081407 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:56:25.081425 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:56:25.081440 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:56:25.081455 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:56:25.081470 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:56:25.081485 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 17 17:56:25.081500 kernel: ACPI: Interpreter enabled Mar 17 17:56:25.081515 kernel: ACPI: PM: (supports S0 S5) Mar 17 17:56:25.081530 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 17:56:25.081545 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 17:56:25.081563 kernel: PCI: Ignoring E820 reservations for host bridge windows Mar 17 17:56:25.081578 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Mar 17 17:56:25.081593 kernel: iommu: Default domain type: Translated Mar 17 17:56:25.081608 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 17:56:25.081623 kernel: efivars: Registered efivars operations Mar 17 17:56:25.081638 kernel: PCI: Using ACPI for IRQ routing Mar 17 17:56:25.081652 kernel: PCI: System does not support PCI Mar 17 17:56:25.081667 kernel: vgaarb: loaded Mar 17 17:56:25.081681 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Mar 17 17:56:25.081699 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:56:25.081714 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:56:25.081729 kernel: 
pnp: PnP ACPI init Mar 17 17:56:25.081744 kernel: pnp: PnP ACPI: found 3 devices Mar 17 17:56:25.081759 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 17:56:25.081774 kernel: NET: Registered PF_INET protocol family Mar 17 17:56:25.081789 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 17 17:56:25.081804 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Mar 17 17:56:25.081820 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:56:25.081838 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:56:25.081853 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Mar 17 17:56:25.081867 kernel: TCP: Hash tables configured (established 65536 bind 65536) Mar 17 17:56:25.081882 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 17 17:56:25.081897 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 17 17:56:25.081912 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:56:25.081927 kernel: NET: Registered PF_XDP protocol family Mar 17 17:56:25.081942 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:56:25.081957 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 17 17:56:25.081974 kernel: software IO TLB: mapped [mem 0x000000003ad8e000-0x000000003ed8e000] (64MB) Mar 17 17:56:25.081989 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 17 17:56:25.082005 kernel: Initialise system trusted keyrings Mar 17 17:56:25.082019 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Mar 17 17:56:25.082034 kernel: Key type asymmetric registered Mar 17 17:56:25.082049 kernel: Asymmetric key parser 'x509' registered Mar 17 17:56:25.082063 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 17 17:56:25.082078 kernel: io scheduler mq-deadline registered Mar 17 17:56:25.085176 kernel: io scheduler kyber registered Mar 17 17:56:25.085209 kernel: io scheduler bfq registered Mar 17 17:56:25.085225 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 17:56:25.085241 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:56:25.085256 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 17:56:25.085271 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Mar 17 17:56:25.085286 kernel: i8042: PNP: No PS/2 controller found. 
Mar 17 17:56:25.085479 kernel: rtc_cmos 00:02: registered as rtc0 Mar 17 17:56:25.085605 kernel: rtc_cmos 00:02: setting system clock to 2025-03-17T17:56:24 UTC (1742234184) Mar 17 17:56:25.085727 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Mar 17 17:56:25.085747 kernel: intel_pstate: CPU model not supported Mar 17 17:56:25.085762 kernel: efifb: probing for efifb Mar 17 17:56:25.085776 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 17 17:56:25.085791 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 17 17:56:25.085806 kernel: efifb: scrolling: redraw Mar 17 17:56:25.085819 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 17 17:56:25.085834 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 17:56:25.085851 kernel: fb0: EFI VGA frame buffer device Mar 17 17:56:25.085866 kernel: pstore: Using crash dump compression: deflate Mar 17 17:56:25.085882 kernel: pstore: Registered efi_pstore as persistent store backend Mar 17 17:56:25.085897 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:56:25.085912 kernel: Segment Routing with IPv6 Mar 17 17:56:25.085927 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:56:25.085942 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:56:25.085957 kernel: Key type dns_resolver registered Mar 17 17:56:25.085971 kernel: IPI shorthand broadcast: enabled Mar 17 17:56:25.085986 kernel: sched_clock: Marking stable (820003300, 44823900)->(1063631600, -198804400) Mar 17 17:56:25.086004 kernel: registered taskstats version 1 Mar 17 17:56:25.086019 kernel: Loading compiled-in X.509 certificates Mar 17 17:56:25.086034 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0' Mar 17 17:56:25.086049 kernel: Key type .fscrypt registered Mar 17 17:56:25.086064 kernel: Key type fscrypt-provisioning registered Mar 17 17:56:25.086079 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:56:25.086106 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:56:25.086121 kernel: ima: No architecture policies found Mar 17 17:56:25.086139 kernel: clk: Disabling unused clocks Mar 17 17:56:25.086154 kernel: Freeing unused kernel image (initmem) memory: 42992K Mar 17 17:56:25.086169 kernel: Write protecting the kernel read-only data: 36864k Mar 17 17:56:25.086184 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Mar 17 17:56:25.086199 kernel: Run /init as init process Mar 17 17:56:25.086214 kernel: with arguments: Mar 17 17:56:25.086229 kernel: /init Mar 17 17:56:25.086243 kernel: with environment: Mar 17 17:56:25.086257 kernel: HOME=/ Mar 17 17:56:25.086272 kernel: TERM=linux Mar 17 17:56:25.086289 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:56:25.086307 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:56:25.086326 systemd[1]: Detected virtualization microsoft. Mar 17 17:56:25.086342 systemd[1]: Detected architecture x86-64. Mar 17 17:56:25.086357 systemd[1]: Running in initrd. Mar 17 17:56:25.086372 systemd[1]: No hostname configured, using default hostname. Mar 17 17:56:25.086387 systemd[1]: Hostname set to . Mar 17 17:56:25.086406 systemd[1]: Initializing machine ID from random generator. 
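As a quick cross-check, the epoch value printed by rtc_cmos above (1742234184) corresponds to the wall-clock time it reports, and matches the audit timestamp 1742234184.027 seen earlier.

    # Verify that epoch 1742234184 is 2025-03-17T17:56:24 UTC, as logged.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1742234184, tz=timezone.utc).isoformat())
    # -> 2025-03-17T17:56:24+00:00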
Mar 17 17:56:25.086422 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:56:25.086437 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:56:25.086453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:56:25.086469 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:56:25.086485 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:56:25.086502 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:56:25.086518 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:56:25.086539 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:56:25.086555 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:56:25.086571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:56:25.086587 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:56:25.086602 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:56:25.086618 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:56:25.086633 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:56:25.086652 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:56:25.086668 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:56:25.086683 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:56:25.086699 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:56:25.086715 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:56:25.086731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:56:25.086747 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:56:25.086763 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:56:25.086782 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:56:25.086798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:56:25.086814 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:56:25.086830 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:56:25.086846 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:56:25.086861 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:56:25.086877 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:56:25.086920 systemd-journald[177]: Collecting audit messages is disabled. Mar 17 17:56:25.086960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:56:25.086977 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:56:25.086992 systemd-journald[177]: Journal started Mar 17 17:56:25.087030 systemd-journald[177]: Runtime Journal (/run/log/journal/e06814ede8cb4f27ad0e750f06dd7fce) is 8.0M, max 158.8M, 150.8M free. 
Mar 17 17:56:25.071261 systemd-modules-load[178]: Inserted module 'overlay' Mar 17 17:56:25.093107 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:56:25.102265 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:56:25.102850 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:56:25.107226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:56:25.121129 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:56:25.122345 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:56:25.126912 kernel: Bridge firewalling registered Mar 17 17:56:25.123733 systemd-modules-load[178]: Inserted module 'br_netfilter' Mar 17 17:56:25.131957 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:56:25.138817 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:56:25.140073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:56:25.149226 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:56:25.152151 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:56:25.155207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:56:25.175560 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:56:25.179411 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:56:25.187700 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:56:25.198307 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:56:25.203266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:56:25.210403 dracut-cmdline[213]: dracut-dracut-053 Mar 17 17:56:25.213858 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:56:25.218911 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:56:25.268821 systemd-resolved[224]: Positive Trust Anchors: Mar 17 17:56:25.271175 systemd-resolved[224]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:56:25.271237 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:56:25.294527 systemd-resolved[224]: Defaulting to hostname 'linux'. Mar 17 17:56:25.295802 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:56:25.303625 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:56:25.315116 kernel: SCSI subsystem initialized Mar 17 17:56:25.325112 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:56:25.336118 kernel: iscsi: registered transport (tcp) Mar 17 17:56:25.357663 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:56:25.357754 kernel: QLogic iSCSI HBA Driver Mar 17 17:56:25.393311 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:56:25.401256 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:56:25.429516 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:56:25.429629 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:56:25.432755 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:56:25.473116 kernel: raid6: avx512x4 gen() 18590 MB/s Mar 17 17:56:25.493108 kernel: raid6: avx512x2 gen() 18555 MB/s Mar 17 17:56:25.512106 kernel: raid6: avx512x1 gen() 18285 MB/s Mar 17 17:56:25.531105 kernel: raid6: avx2x4 gen() 18363 MB/s Mar 17 17:56:25.550112 kernel: raid6: avx2x2 gen() 18333 MB/s Mar 17 17:56:25.570297 kernel: raid6: avx2x1 gen() 13917 MB/s Mar 17 17:56:25.570332 kernel: raid6: using algorithm avx512x4 gen() 18590 MB/s Mar 17 17:56:25.590678 kernel: raid6: .... xor() 7138 MB/s, rmw enabled Mar 17 17:56:25.590719 kernel: raid6: using avx512x2 recovery algorithm Mar 17 17:56:25.613122 kernel: xor: automatically using best checksumming function avx Mar 17 17:56:25.765138 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:56:25.774836 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:56:25.784250 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:56:25.797548 systemd-udevd[398]: Using default interface naming scheme 'v255'. Mar 17 17:56:25.802027 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:56:25.817316 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:56:25.830676 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Mar 17 17:56:25.857617 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:56:25.869505 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:56:25.908476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:56:25.921261 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 17 17:56:25.938976 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:56:25.948207 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:56:25.954836 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:56:25.960982 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:56:25.973303 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:56:25.985115 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 17:56:26.008857 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:56:26.026953 kernel: AVX2 version of gcm_enc/dec engaged. Mar 17 17:56:26.027018 kernel: AES CTR mode by8 optimization enabled Mar 17 17:56:26.031129 kernel: hv_vmbus: Vmbus version:5.2 Mar 17 17:56:26.036869 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:56:26.037051 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:56:26.047894 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:56:26.055149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:56:26.058218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:56:26.073880 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 17 17:56:26.073914 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 17 17:56:26.062175 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:56:26.079389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:56:26.088502 kernel: PTP clock support registered Mar 17 17:56:26.105115 kernel: hv_utils: Registering HyperV Utility Driver Mar 17 17:56:26.105163 kernel: hv_vmbus: registering driver hv_utils Mar 17 17:56:26.109193 kernel: hv_vmbus: registering driver hv_storvsc Mar 17 17:56:26.677216 kernel: hv_utils: Heartbeat IC version 3.0 Mar 17 17:56:26.677244 kernel: hv_utils: Shutdown IC version 3.2 Mar 17 17:56:26.677257 kernel: hv_utils: TimeSync IC version 4.0 Mar 17 17:56:26.677268 kernel: scsi host0: storvsc_host_t Mar 17 17:56:26.677537 kernel: scsi host1: storvsc_host_t Mar 17 17:56:26.674964 systemd-resolved[224]: Clock change detected. Flushing caches. Mar 17 17:56:26.693173 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 17 17:56:26.696653 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 17 17:56:26.701485 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Mar 17 17:56:26.701526 kernel: hv_vmbus: registering driver hv_netvsc Mar 17 17:56:26.710900 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 17 17:56:26.715042 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:56:26.714464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:56:26.732756 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 17 17:56:26.755961 kernel: hv_vmbus: registering driver hid_hyperv Mar 17 17:56:26.755996 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 17 17:56:26.756018 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 17 17:56:26.767998 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 17 17:56:26.775266 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:56:26.775291 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 17 17:56:26.772137 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:56:26.794334 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 17 17:56:26.807504 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 17 17:56:26.807719 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 17 17:56:26.807899 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 17 17:56:26.808060 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 17 17:56:26.808232 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:56:26.808260 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 17 17:56:26.916591 kernel: hv_netvsc 000d3adf-9277-000d-3adf-9277000d3adf eth0: VF slot 1 added Mar 17 17:56:26.925642 kernel: hv_vmbus: registering driver hv_pci Mar 17 17:56:26.930463 kernel: hv_pci b5c7c53e-f5f4-462e-bdf5-988b62020a52: PCI VMBus probing: Using version 0x10004 Mar 17 17:56:27.009466 kernel: hv_pci b5c7c53e-f5f4-462e-bdf5-988b62020a52: PCI host bridge to bus f5f4:00 Mar 17 17:56:27.010088 kernel: pci_bus f5f4:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Mar 17 17:56:27.010288 kernel: pci_bus f5f4:00: No busn resource found for root bus, will use [bus 00-ff] Mar 17 17:56:27.010450 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (453) Mar 17 17:56:27.010472 kernel: pci f5f4:00:02.0: [15b3:1016] type 00 class 0x020000 Mar 17 17:56:27.010987 kernel: pci f5f4:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 17 17:56:27.011156 kernel: pci f5f4:00:02.0: enabling Extended Tags Mar 17 17:56:27.011325 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (465) Mar 17 17:56:27.011347 kernel: pci f5f4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f5f4:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Mar 17 17:56:27.011512 kernel: pci_bus f5f4:00: busn_res: [bus 00-ff] end is updated to 00 Mar 17 17:56:27.012134 kernel: pci f5f4:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 17 17:56:26.954349 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 17 17:56:26.999533 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 17 17:56:27.014340 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 17 17:56:27.032185 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 17 17:56:27.035283 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 17 17:56:27.050856 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 17 17:56:27.071592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:56:27.078592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:56:27.325204 kernel: mlx5_core f5f4:00:02.0: enabling device (0000 -> 0002) Mar 17 17:56:27.567243 kernel: mlx5_core f5f4:00:02.0: firmware version: 14.30.5000 Mar 17 17:56:27.567475 kernel: hv_netvsc 000d3adf-9277-000d-3adf-9277000d3adf eth0: VF registering: eth1 Mar 17 17:56:27.567655 kernel: mlx5_core f5f4:00:02.0 eth1: joined to eth0 Mar 17 17:56:27.567831 kernel: mlx5_core f5f4:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 17 17:56:27.574589 kernel: mlx5_core f5f4:00:02.0 enP62964s1: renamed from eth1 Mar 17 17:56:28.081645 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:56:28.083421 disk-uuid[592]: The operation has completed successfully. Mar 17 17:56:28.159264 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:56:28.159387 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:56:28.188776 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:56:28.195402 sh[688]: Success Mar 17 17:56:28.213601 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 17:56:28.288336 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:56:28.299684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:56:28.305066 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 17:56:28.323976 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a Mar 17 17:56:28.324061 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:56:28.327586 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:56:28.330514 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:56:28.332964 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:56:28.409117 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:56:28.412292 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:56:28.424864 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:56:28.429755 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:56:28.454090 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:56:28.454172 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:56:28.454193 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:56:28.462600 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:56:28.473627 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:56:28.481596 kernel: BTRFS info (device sda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:56:28.488335 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:56:28.498777 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:56:28.530696 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:56:28.539884 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 17 17:56:28.576012 systemd-networkd[872]: lo: Link UP Mar 17 17:56:28.576021 systemd-networkd[872]: lo: Gained carrier Mar 17 17:56:28.578212 systemd-networkd[872]: Enumeration completed Mar 17 17:56:28.578327 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:56:28.579397 systemd[1]: Reached target network.target - Network. Mar 17 17:56:28.580996 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:56:28.581001 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:56:28.650596 kernel: mlx5_core f5f4:00:02.0 enP62964s1: Link up Mar 17 17:56:28.689640 kernel: hv_netvsc 000d3adf-9277-000d-3adf-9277000d3adf eth0: Data path switched to VF: enP62964s1 Mar 17 17:56:28.690305 systemd-networkd[872]: enP62964s1: Link UP Mar 17 17:56:28.690433 systemd-networkd[872]: eth0: Link UP Mar 17 17:56:28.690615 systemd-networkd[872]: eth0: Gained carrier Mar 17 17:56:28.690629 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:56:28.706002 systemd-networkd[872]: enP62964s1: Gained carrier Mar 17 17:56:28.742890 ignition[826]: Ignition 2.20.0 Mar 17 17:56:28.742964 ignition[826]: Stage: fetch-offline Mar 17 17:56:28.743024 ignition[826]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:28.745966 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 17 17:56:28.743037 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:28.749947 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:56:28.743172 ignition[826]: parsed url from cmdline: "" Mar 17 17:56:28.743178 ignition[826]: no config URL provided Mar 17 17:56:28.743185 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:56:28.743196 ignition[826]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:56:28.743206 ignition[826]: failed to fetch config: resource requires networking Mar 17 17:56:28.767732 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 17 17:56:28.744968 ignition[826]: Ignition finished successfully Mar 17 17:56:28.785332 ignition[882]: Ignition 2.20.0 Mar 17 17:56:28.785344 ignition[882]: Stage: fetch Mar 17 17:56:28.785586 ignition[882]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:28.785603 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:28.785703 ignition[882]: parsed url from cmdline: "" Mar 17 17:56:28.785706 ignition[882]: no config URL provided Mar 17 17:56:28.785710 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:56:28.785717 ignition[882]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:56:28.785746 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 17 17:56:28.861732 ignition[882]: GET result: OK Mar 17 17:56:28.861813 ignition[882]: config has been read from IMDS userdata Mar 17 17:56:28.861836 ignition[882]: parsing config with SHA512: 943e9b5b562b0690b1228e2c8353441458b06af20ee492ea3056b975e298ba519595f9dfe872ccc3598a79658dc1f338363ef3255e83aae7b21e1a6712096786 Mar 17 17:56:28.866244 unknown[882]: fetched base config from "system" Mar 17 17:56:28.866263 unknown[882]: fetched base config from "system" Mar 17 17:56:28.866689 ignition[882]: fetch: fetch complete Mar 17 17:56:28.866272 unknown[882]: fetched user config from "azure" Mar 17 17:56:28.866696 ignition[882]: fetch: fetch passed Mar 17 17:56:28.869167 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 17 17:56:28.866754 ignition[882]: Ignition finished successfully Mar 17 17:56:28.882840 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:56:28.898122 ignition[888]: Ignition 2.20.0 Mar 17 17:56:28.898133 ignition[888]: Stage: kargs Mar 17 17:56:28.898347 ignition[888]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:28.898361 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:28.899140 ignition[888]: kargs: kargs passed Mar 17 17:56:28.899185 ignition[888]: Ignition finished successfully Mar 17 17:56:28.909681 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:56:28.918736 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:56:28.932618 ignition[894]: Ignition 2.20.0 Mar 17 17:56:28.932630 ignition[894]: Stage: disks Mar 17 17:56:28.934454 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:56:28.932856 ignition[894]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:28.937671 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:56:28.932869 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:28.941168 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:56:28.933618 ignition[894]: disks: disks passed Mar 17 17:56:28.944248 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:56:28.933662 ignition[894]: Ignition finished successfully Mar 17 17:56:28.949335 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:56:28.953965 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:56:28.972411 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
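The fetch stage above boils down to a single HTTP GET against the Azure Instance Metadata Service followed by hashing the returned config. A minimal Python sketch of that request is shown here purely for illustration: the endpoint URL is the one Ignition logged, the Metadata: true header and the base64-encoded userData payload are standard IMDS behaviour, and the SHA512 here is only a rough stand-in for the digest Ignition reports (which it computes over the config it actually parsed).

import base64
import hashlib
import urllib.request

# Illustrative sketch: mirror the IMDS userData request Ignition logged above.
# Azure IMDS requires the "Metadata: true" header; userData is returned base64-encoded.
URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    encoded = resp.read()

config = base64.b64decode(encoded)
# Rough fingerprint of the fetched config, comparable to the SHA512 in the log.
print(hashlib.sha512(config).hexdigest())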
Mar 17 17:56:28.994282 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 17 17:56:28.999543 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:56:29.014703 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:56:29.109599 kernel: EXT4-fs (sda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none. Mar 17 17:56:29.110013 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:56:29.114623 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:56:29.132819 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:56:29.138124 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:56:29.145776 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 17 17:56:29.157790 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (913) Mar 17 17:56:29.150146 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:56:29.150179 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:56:29.154514 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:56:29.168717 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:56:29.184558 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:56:29.184630 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:56:29.184644 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:56:29.189330 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:56:29.189728 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:56:29.365268 coreos-metadata[915]: Mar 17 17:56:29.365 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 17:56:29.370090 coreos-metadata[915]: Mar 17 17:56:29.367 INFO Fetch successful Mar 17 17:56:29.370090 coreos-metadata[915]: Mar 17 17:56:29.367 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 17 17:56:29.378852 coreos-metadata[915]: Mar 17 17:56:29.378 INFO Fetch successful Mar 17 17:56:29.382666 coreos-metadata[915]: Mar 17 17:56:29.382 INFO wrote hostname ci-4152.2.2-a-99edcdcd5a to /sysroot/etc/hostname Mar 17 17:56:29.384566 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:56:29.402753 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:56:29.416414 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:56:29.425392 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:56:29.432829 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:56:29.693107 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:56:29.704711 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:56:29.709746 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:56:29.721706 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 17 17:56:29.727514 kernel: BTRFS info (device sda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:56:29.748638 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:56:29.757410 ignition[1032]: INFO : Ignition 2.20.0 Mar 17 17:56:29.757410 ignition[1032]: INFO : Stage: mount Mar 17 17:56:29.763583 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:29.763583 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:29.763583 ignition[1032]: INFO : mount: mount passed Mar 17 17:56:29.763583 ignition[1032]: INFO : Ignition finished successfully Mar 17 17:56:29.759404 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:56:29.775798 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:56:29.788773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:56:29.810594 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1043) Mar 17 17:56:29.814589 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:56:29.814635 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:56:29.818898 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:56:29.824589 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:56:29.825663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:56:29.847189 ignition[1059]: INFO : Ignition 2.20.0 Mar 17 17:56:29.847189 ignition[1059]: INFO : Stage: files Mar 17 17:56:29.851281 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:29.851281 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:29.851281 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:56:29.859449 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:56:29.859449 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:56:29.882195 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:56:29.886071 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:56:29.889853 unknown[1059]: wrote ssh authorized keys file for user: core Mar 17 17:56:29.892516 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:56:29.896456 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 17:56:29.900726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:56:29.905061 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 17:56:30.443724 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Mar 17 17:56:30.481868 systemd-networkd[872]: enP62964s1: Gained IPv6LL Mar 17 17:56:30.545722 systemd-networkd[872]: eth0: Gained IPv6LL Mar 17 17:56:30.736781 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:56:30.736781 ignition[1059]: INFO : files: op(8): [started] processing unit "containerd.service" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: op(8): [finished] processing unit "containerd.service" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:56:30.748203 ignition[1059]: INFO : files: files passed Mar 17 17:56:30.748203 ignition[1059]: INFO : Ignition finished successfully Mar 17 17:56:30.739358 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:56:30.766458 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:56:30.774810 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:56:30.778510 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:56:30.782439 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:56:30.801213 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:56:30.801213 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:56:30.809166 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:56:30.805386 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:56:30.812811 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:56:30.829732 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
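Taken together, the files stage above wrote an SSH key for the core user, a 10-use-cgroupfs.conf drop-in for containerd.service, and the kubernetes sysext image plus its /etc/extensions link. As a hedged illustration only, an Ignition (spec 3.x) config requesting those operations could look roughly like the sketch below; the paths and download URL are taken from the log, while the SSH key and drop-in contents are placeholders rather than what this node actually received from IMDS.

import json

# Illustrative sketch of an Ignition-style (spec 3.x) config fragment that would
# drive the files-stage operations logged above. Placeholder values are marked.
config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "containerd.service",
                "dropins": [
                    {"name": "10-use-cgroupfs.conf",
                     "contents": "[Service]\n# placeholder drop-in contents\n"}
                ],
            }
        ]
    },
    "storage": {
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"}
        ],
        "files": [
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}}
        ],
    },
}
print(json.dumps(config, indent=2))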
Mar 17 17:56:30.854339 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:56:30.854458 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:56:30.860699 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:56:30.866486 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:56:30.869219 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:56:30.877997 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:56:30.889799 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:56:30.900848 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:56:30.912259 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:56:30.913358 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:56:30.913772 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:56:30.914159 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:56:30.914269 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:56:30.915422 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:56:30.916300 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:56:30.916717 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:56:30.917135 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:56:30.917543 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:56:30.917973 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:56:30.918386 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:56:30.918826 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:56:30.919216 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:56:30.919634 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:56:30.920014 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:56:30.920148 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:56:30.920906 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:56:30.921343 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:56:30.921686 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:56:30.958309 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:56:30.961704 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:56:30.961877 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:56:31.015823 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:56:31.016117 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:56:31.022192 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:56:31.022351 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:56:31.029640 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Mar 17 17:56:31.031457 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:56:31.044830 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:56:31.049344 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:56:31.049518 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:56:31.068119 ignition[1112]: INFO : Ignition 2.20.0 Mar 17 17:56:31.068119 ignition[1112]: INFO : Stage: umount Mar 17 17:56:31.076904 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:56:31.076904 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:56:31.076904 ignition[1112]: INFO : umount: umount passed Mar 17 17:56:31.076904 ignition[1112]: INFO : Ignition finished successfully Mar 17 17:56:31.070832 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:56:31.074767 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:56:31.074970 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:56:31.082020 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:56:31.082143 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:56:31.091993 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:56:31.092095 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:56:31.095873 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:56:31.095959 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:56:31.104286 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:56:31.108142 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:56:31.108207 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:56:31.114922 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:56:31.114971 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:56:31.120437 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 17:56:31.120484 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 17:56:31.125282 systemd[1]: Stopped target network.target - Network. Mar 17 17:56:31.129579 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:56:31.129657 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:56:31.135695 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:56:31.140524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:56:31.142633 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:56:31.145735 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:56:31.148046 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:56:31.148897 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:56:31.148942 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:56:31.149231 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:56:31.149263 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:56:31.149618 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:56:31.149659 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Mar 17 17:56:31.150079 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:56:31.150113 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:56:31.150663 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:56:31.150930 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:56:31.171655 systemd-networkd[872]: eth0: DHCPv6 lease lost Mar 17 17:56:31.172672 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:56:31.172787 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:56:31.177858 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:56:31.177962 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:56:31.183396 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:56:31.183440 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:56:31.202445 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:56:31.206909 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:56:31.206979 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:56:31.243684 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:56:31.243777 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:56:31.248727 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:56:31.251268 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:56:31.270690 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:56:31.270775 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:56:31.276627 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:56:31.292974 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:56:31.295354 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:56:31.296875 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:56:31.296943 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:56:31.298158 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:56:31.298189 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:56:31.298548 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:56:31.298600 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:56:31.299462 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:56:31.299498 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:56:31.300742 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:56:31.300780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:56:31.303723 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:56:31.358053 kernel: hv_netvsc 000d3adf-9277-000d-3adf-9277000d3adf eth0: Data path switched from VF: enP62964s1 Mar 17 17:56:31.304131 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:56:31.304177 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 17 17:56:31.304625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:56:31.304659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:56:31.314502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:56:31.314614 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:56:31.384720 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:56:31.384848 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:56:31.722435 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:56:31.722565 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:56:31.727723 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:56:31.734724 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:56:31.734793 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:56:31.748755 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:56:31.930093 systemd[1]: Switching root. Mar 17 17:56:31.963473 systemd-journald[177]: Journal stopped Mar 17 17:56:33.931106 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Mar 17 17:56:33.931145 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:56:33.931162 kernel: SELinux: policy capability open_perms=1 Mar 17 17:56:33.931171 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:56:33.931178 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:56:33.931187 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:56:33.931199 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:56:33.931228 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:56:33.931239 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:56:33.931247 kernel: audit: type=1403 audit(1742234192.497:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:56:33.931257 systemd[1]: Successfully loaded SELinux policy in 65.231ms. Mar 17 17:56:33.931267 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.012ms. Mar 17 17:56:33.931284 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:56:33.931307 systemd[1]: Detected virtualization microsoft. Mar 17 17:56:33.931336 systemd[1]: Detected architecture x86-64. Mar 17 17:56:33.931348 systemd[1]: Detected first boot. Mar 17 17:56:33.931358 systemd[1]: Hostname set to <ci-4152.2.2-a-99edcdcd5a>. Mar 17 17:56:33.931377 systemd[1]: Initializing machine ID from random generator. Mar 17 17:56:33.931398 zram_generator::config[1171]: No configuration found. Mar 17 17:56:33.931424 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:56:33.931436 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:56:33.931445 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 17 17:56:33.931461 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:56:33.931484 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:56:33.931500 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:56:33.931510 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:56:33.931532 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:56:33.931550 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:56:33.931562 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:56:33.931578 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:56:33.931590 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:56:33.931615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:56:33.931636 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:56:33.931652 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:56:33.931662 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:56:33.931671 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:56:33.931683 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 17:56:33.931702 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:56:33.931718 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:56:33.931728 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:56:33.931747 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:56:33.931772 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:56:33.931802 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:56:33.931814 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:56:33.931824 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:56:33.931839 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:56:33.931862 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:56:33.931875 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:56:33.931885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:56:33.931909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:56:33.931924 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:56:33.931936 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:56:33.931958 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:56:33.931974 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:56:33.931991 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:56:33.932012 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:56:33.932027 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:56:33.932039 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 17 17:56:33.932063 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:56:33.932081 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:56:33.932092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:56:33.932111 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:56:33.932132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:56:33.932142 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:56:33.932162 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:56:33.932179 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:56:33.932198 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:56:33.932222 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:56:33.932244 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 17:56:33.932266 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Mar 17 17:56:33.932293 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:56:33.932316 kernel: loop: module loaded Mar 17 17:56:33.932337 kernel: ACPI: bus type drm_connector registered Mar 17 17:56:33.932358 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:56:33.932381 kernel: fuse: init (API version 7.39) Mar 17 17:56:33.932404 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:56:33.932425 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:56:33.932447 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:56:33.932471 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:56:33.932498 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:56:33.932521 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:56:33.932543 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:56:33.932568 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:56:33.932603 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:56:33.932627 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:56:33.932650 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:56:33.932675 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:56:33.932705 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:56:33.932731 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:56:33.932754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:56:33.932773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:56:33.932795 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 17 17:56:33.932848 systemd-journald[1293]: Collecting audit messages is disabled. Mar 17 17:56:33.932898 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:56:33.932921 systemd-journald[1293]: Journal started Mar 17 17:56:33.932960 systemd-journald[1293]: Runtime Journal (/run/log/journal/d9be1a0132214c24aa1c32fe44f3df25) is 8.0M, max 158.8M, 150.8M free. Mar 17 17:56:33.940586 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:56:33.943742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:56:33.943903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:56:33.947274 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:56:33.947499 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:56:33.950487 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:56:33.950785 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:56:33.967735 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:56:33.974715 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:56:33.979914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:56:33.983793 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:56:33.989178 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:56:33.992840 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:56:33.995930 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:56:34.011253 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:56:34.025249 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:56:34.030473 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:56:34.035023 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:56:34.045885 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:56:34.056845 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:56:34.061564 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:56:34.065666 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:56:34.086780 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:56:34.091405 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Mar 17 17:56:34.091428 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Mar 17 17:56:34.102709 systemd-journald[1293]: Time spent on flushing to /var/log/journal/d9be1a0132214c24aa1c32fe44f3df25 is 41.900ms for 928 entries. Mar 17 17:56:34.102709 systemd-journald[1293]: System Journal (/var/log/journal/d9be1a0132214c24aa1c32fe44f3df25) is 8.0M, max 2.6G, 2.6G free. Mar 17 17:56:34.169596 systemd-journald[1293]: Received client request to flush runtime journal. 
Mar 17 17:56:34.118356 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:56:34.134898 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:56:34.145949 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:56:34.149952 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:56:34.165281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:56:34.171533 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:56:34.184133 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:56:34.192730 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:56:34.203168 udevadm[1349]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:56:34.440451 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:56:34.450809 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:56:34.471439 systemd-tmpfiles[1352]: ACLs are not supported, ignoring. Mar 17 17:56:34.471482 systemd-tmpfiles[1352]: ACLs are not supported, ignoring. Mar 17 17:56:34.476705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:56:36.317072 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:56:36.326878 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:56:36.352792 systemd-udevd[1358]: Using default interface naming scheme 'v255'. Mar 17 17:56:36.627739 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:56:36.641799 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:56:36.680732 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:56:36.729052 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 17 17:56:36.925468 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:56:36.994597 kernel: hv_vmbus: registering driver hv_balloon Mar 17 17:56:37.002638 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:56:37.007594 kernel: hv_vmbus: registering driver hyperv_fb Mar 17 17:56:37.017589 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 17 17:56:37.026743 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 17 17:56:37.033145 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 17 17:56:37.046593 kernel: Console: switching to colour dummy device 80x25 Mar 17 17:56:37.042974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:56:37.054534 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 17:56:37.061046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:56:37.062030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:56:37.092853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:56:37.219814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:56:37.220148 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 17 17:56:37.230976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:56:37.258598 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1378) Mar 17 17:56:37.360440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 17 17:56:37.429562 systemd-networkd[1363]: lo: Link UP Mar 17 17:56:37.429637 systemd-networkd[1363]: lo: Gained carrier Mar 17 17:56:37.435397 systemd-networkd[1363]: Enumeration completed Mar 17 17:56:37.435581 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:56:37.439042 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:56:37.439048 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:56:37.452824 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:56:37.466593 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Mar 17 17:56:37.513598 kernel: mlx5_core f5f4:00:02.0 enP62964s1: Link up Mar 17 17:56:37.531287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:56:37.597608 kernel: hv_netvsc 000d3adf-9277-000d-3adf-9277000d3adf eth0: Data path switched to VF: enP62964s1 Mar 17 17:56:37.598737 systemd-networkd[1363]: enP62964s1: Link UP Mar 17 17:56:37.599090 systemd-networkd[1363]: eth0: Link UP Mar 17 17:56:37.599102 systemd-networkd[1363]: eth0: Gained carrier Mar 17 17:56:37.599128 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:56:37.603046 systemd-networkd[1363]: enP62964s1: Gained carrier Mar 17 17:56:37.610112 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:56:37.622756 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:56:37.629634 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 17 17:56:37.866404 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:56:37.896825 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:56:37.900597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:56:37.908768 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:56:37.913828 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:56:37.942859 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:56:37.946707 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:56:37.949891 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:56:37.949936 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:56:37.952400 systemd[1]: Reached target machines.target - Containers. Mar 17 17:56:37.955902 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 17:56:37.965737 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Mar 17 17:56:37.969919 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:56:37.972409 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:56:37.975744 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:56:37.980782 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:56:37.987737 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:56:38.169965 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:56:38.183693 kernel: loop0: detected capacity change from 0 to 210664 Mar 17 17:56:38.188079 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:56:38.673865 systemd-networkd[1363]: enP62964s1: Gained IPv6LL Mar 17 17:56:38.762768 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:56:38.793624 kernel: loop1: detected capacity change from 0 to 138184 Mar 17 17:56:38.860717 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:56:38.861819 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 17:56:39.121932 systemd-networkd[1363]: eth0: Gained IPv6LL Mar 17 17:56:39.130707 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:56:39.670602 kernel: loop2: detected capacity change from 0 to 140992 Mar 17 17:56:40.220604 kernel: loop3: detected capacity change from 0 to 28272 Mar 17 17:56:40.363598 kernel: loop4: detected capacity change from 0 to 210664 Mar 17 17:56:40.421602 kernel: loop5: detected capacity change from 0 to 138184 Mar 17 17:56:40.431602 kernel: loop6: detected capacity change from 0 to 140992 Mar 17 17:56:40.442600 kernel: loop7: detected capacity change from 0 to 28272 Mar 17 17:56:40.445833 (sd-merge)[1504]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 17 17:56:40.446428 (sd-merge)[1504]: Merged extensions into '/usr'. Mar 17 17:56:40.450663 systemd[1]: Reloading requested from client PID 1488 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:56:40.450680 systemd[1]: Reloading... Mar 17 17:56:40.521817 zram_generator::config[1532]: No configuration found. Mar 17 17:56:40.696658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:56:40.780988 systemd[1]: Reloading finished in 329 ms. Mar 17 17:56:40.797555 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:56:40.814767 systemd[1]: Starting ensure-sysext.service... Mar 17 17:56:40.822277 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:56:40.828872 systemd[1]: Reloading requested from client PID 1595 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:56:40.828895 systemd[1]: Reloading... Mar 17 17:56:40.847989 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:56:40.848491 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Mar 17 17:56:40.849771 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:56:40.850216 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Mar 17 17:56:40.850308 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Mar 17 17:56:40.855385 systemd-tmpfiles[1596]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:56:40.855397 systemd-tmpfiles[1596]: Skipping /boot Mar 17 17:56:40.866485 systemd-tmpfiles[1596]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:56:40.866657 systemd-tmpfiles[1596]: Skipping /boot Mar 17 17:56:40.903596 zram_generator::config[1621]: No configuration found. Mar 17 17:56:41.214256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:56:41.297735 systemd[1]: Reloading finished in 468 ms. Mar 17 17:56:41.322456 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:56:41.333175 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:56:41.339237 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:56:41.344910 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:56:41.350934 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:56:41.362910 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:56:41.373266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:56:41.374143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:56:41.380984 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:56:41.385870 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:56:41.394854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:56:41.414731 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:56:41.424005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:56:41.424293 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:56:41.428834 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:56:41.431046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:56:41.431387 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:56:41.435131 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:56:41.435437 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:56:41.438943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:56:41.439289 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:56:41.443354 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:56:41.443717 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
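The (sd-merge) entries a little further up show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-azure' extension images (mounted as the loop devices above) onto /usr, after which ensure-sysext completes just below. As a hedged aside, the merged state can normally be inspected on a running Flatcar host with:

    # Show which system extensions are currently merged and which hierarchies they cover
    systemd-sysext status
    # The sysext overlay itself shows up as the mount backing /usr
    findmnt /usr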
Mar 17 17:56:41.448689 systemd[1]: Finished ensure-sysext.service. Mar 17 17:56:41.457823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:56:41.457915 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:56:41.633246 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:56:41.670237 systemd-resolved[1695]: Positive Trust Anchors: Mar 17 17:56:41.670257 systemd-resolved[1695]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:56:41.670309 systemd-resolved[1695]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:56:41.682424 systemd-resolved[1695]: Using system hostname 'ci-4152.2.2-a-99edcdcd5a'. Mar 17 17:56:41.686195 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:56:41.690110 systemd[1]: Reached target network.target - Network. Mar 17 17:56:41.692705 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:56:41.695401 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:56:41.704768 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:56:41.740806 augenrules[1736]: No rules Mar 17 17:56:41.741446 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:56:41.741862 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:56:42.263991 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:56:42.268706 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:56:44.932963 ldconfig[1485]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:56:44.942544 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:56:44.953760 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:56:44.966857 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:56:44.970459 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:56:44.972750 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:56:44.975669 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:56:44.978822 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:56:44.981439 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:56:44.984399 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 17 17:56:44.987309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:56:44.987345 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:56:44.989512 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:56:44.992549 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:56:44.997192 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:56:45.001078 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:56:45.009626 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:56:45.012318 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:56:45.014532 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:56:45.016954 systemd[1]: System is tainted: cgroupsv1 Mar 17 17:56:45.017011 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:56:45.017044 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:56:45.024689 systemd[1]: Starting chronyd.service - NTP client/server... Mar 17 17:56:45.029701 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:56:45.037760 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:56:45.050774 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:56:45.074733 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:56:45.081769 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:56:45.084495 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:56:45.084552 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 17 17:56:45.087740 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 17 17:56:45.088180 (chronyd)[1752]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 17 17:56:45.095396 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 17 17:56:45.099235 jq[1760]: false Mar 17 17:56:45.099105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:45.110225 chronyd[1767]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 17 17:56:45.110818 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:56:45.110788 KVP[1762]: KVP starting; pid is:1762 Mar 17 17:56:45.119633 chronyd[1767]: Timezone right/UTC failed leap second check, ignoring Mar 17 17:56:45.119897 chronyd[1767]: Loaded seccomp filter (level 2) Mar 17 17:56:45.123020 KVP[1762]: KVP LIC Version: 3.1 Mar 17 17:56:45.123603 kernel: hv_utils: KVP IC version 4.0 Mar 17 17:56:45.126783 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:56:45.132563 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:56:45.142818 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Mar 17 17:56:45.150189 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:56:45.155933 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:56:45.171753 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:56:45.177704 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:56:45.183257 systemd[1]: Started chronyd.service - NTP client/server. Mar 17 17:56:45.196986 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:56:45.197312 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:56:45.202060 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:56:45.202397 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:56:45.210596 jq[1776]: true Mar 17 17:56:45.237486 jq[1782]: true Mar 17 17:56:45.298974 (ntainerd)[1802]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:56:45.306708 systemd-logind[1773]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:56:45.308072 systemd-logind[1773]: New seat seat0. Mar 17 17:56:45.313649 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:56:45.328117 update_engine[1775]: I20250317 17:56:45.328015 1775 main.cc:92] Flatcar Update Engine starting Mar 17 17:56:45.342999 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:56:45.343329 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:56:45.364607 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:56:45.381649 extend-filesystems[1761]: Found loop4 Mar 17 17:56:45.381649 extend-filesystems[1761]: Found loop5 Mar 17 17:56:45.381649 extend-filesystems[1761]: Found loop6 Mar 17 17:56:45.381649 extend-filesystems[1761]: Found loop7 Mar 17 17:56:45.381649 extend-filesystems[1761]: Found sda Mar 17 17:56:45.381649 extend-filesystems[1761]: Found sda1 Mar 17 17:56:45.381649 extend-filesystems[1761]: Found sda2 Mar 17 17:56:45.381649 extend-filesystems[1761]: Found sda3 Mar 17 17:56:45.381649 extend-filesystems[1761]: Found usr Mar 17 17:56:45.381649 extend-filesystems[1761]: Found sda4 Mar 17 17:56:45.381649 extend-filesystems[1761]: Found sda6 Mar 17 17:56:45.381649 extend-filesystems[1761]: Found sda7 Mar 17 17:56:45.381649 extend-filesystems[1761]: Found sda9 Mar 17 17:56:45.424694 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:56:45.446865 bash[1813]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:56:45.447038 extend-filesystems[1761]: Checking size of /dev/sda9 Mar 17 17:56:45.440787 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:56:45.454284 extend-filesystems[1761]: Old size kept for /dev/sda9 Mar 17 17:56:45.454284 extend-filesystems[1761]: Found sr0 Mar 17 17:56:45.450963 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:56:45.451296 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Mar 17 17:56:45.502598 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1836) Mar 17 17:56:45.762144 dbus-daemon[1755]: [system] SELinux support is enabled Mar 17 17:56:45.762453 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:56:45.773538 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:56:45.773603 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:56:45.777740 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:56:45.777768 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:56:45.783432 update_engine[1775]: I20250317 17:56:45.783366 1775 update_check_scheduler.cc:74] Next update check in 11m53s Mar 17 17:56:45.785021 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:56:45.789084 dbus-daemon[1755]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 17:56:45.789652 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:56:45.797740 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:56:45.935374 coreos-metadata[1754]: Mar 17 17:56:45.935 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 17:56:45.939092 coreos-metadata[1754]: Mar 17 17:56:45.938 INFO Fetch successful Mar 17 17:56:45.940594 coreos-metadata[1754]: Mar 17 17:56:45.940 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 17 17:56:45.947086 coreos-metadata[1754]: Mar 17 17:56:45.947 INFO Fetch successful Mar 17 17:56:45.947086 coreos-metadata[1754]: Mar 17 17:56:45.947 INFO Fetching http://168.63.129.16/machine/d365e4ab-9b01-47fc-bbcc-b84796c48ab5/0f0f0937%2D4f9b%2D4ccc%2D8918%2Dc0298da73c89.%5Fci%2D4152.2.2%2Da%2D99edcdcd5a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 17 17:56:45.949248 coreos-metadata[1754]: Mar 17 17:56:45.948 INFO Fetch successful Mar 17 17:56:45.949352 coreos-metadata[1754]: Mar 17 17:56:45.949 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 17 17:56:45.960519 coreos-metadata[1754]: Mar 17 17:56:45.960 INFO Fetch successful Mar 17 17:56:46.012652 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:56:46.018000 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:56:46.439764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:56:46.444116 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:56:46.571646 locksmithd[1898]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:56:46.643335 sshd_keygen[1817]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:56:46.669088 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:56:46.682954 systemd[1]: Starting issuegen.service - Generate /run/issue... 
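A few entries above, update_engine schedules its next check and locksmithd comes up as the cluster reboot manager with strategy "reboot". As an illustrative sketch only (the path and values follow the usual Flatcar convention and are not read from this host), the pair is typically tuned through /etc/flatcar/update.conf:

    # Hypothetical /etc/flatcar/update.conf; both values are illustrative assumptions
    cat <<'EOF' >/etc/flatcar/update.conf
    GROUP=stable
    REBOOT_STRATEGY=reboot
    EOF
    # locksmithd re-reads the strategy when restarted
    systemctl restart locksmithd.service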
Mar 17 17:56:46.690213 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 17 17:56:46.696354 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:56:46.697012 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:56:46.709129 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:56:46.741746 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 17 17:56:46.752983 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:56:46.762429 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:56:46.767269 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:56:46.772490 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:56:47.154304 kubelet[1915]: E0317 17:56:47.154126 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:56:47.156913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:56:47.157234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:56:47.699356 containerd[1802]: time="2025-03-17T17:56:47.699257800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:56:47.719654 containerd[1802]: time="2025-03-17T17:56:47.719589300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:56:47.721271 containerd[1802]: time="2025-03-17T17:56:47.721221500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:56:47.721271 containerd[1802]: time="2025-03-17T17:56:47.721261100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:56:47.721431 containerd[1802]: time="2025-03-17T17:56:47.721283400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:56:47.721503 containerd[1802]: time="2025-03-17T17:56:47.721478700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:56:47.721545 containerd[1802]: time="2025-03-17T17:56:47.721506600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:56:47.721626 containerd[1802]: time="2025-03-17T17:56:47.721605800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:56:47.721676 containerd[1802]: time="2025-03-17T17:56:47.721624800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:56:47.721897 containerd[1802]: time="2025-03-17T17:56:47.721873300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:56:47.721897 containerd[1802]: time="2025-03-17T17:56:47.721893000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:56:47.721981 containerd[1802]: time="2025-03-17T17:56:47.721911300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:56:47.721981 containerd[1802]: time="2025-03-17T17:56:47.721924200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:56:47.722055 containerd[1802]: time="2025-03-17T17:56:47.722033800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:56:47.722272 containerd[1802]: time="2025-03-17T17:56:47.722247900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:56:47.722431 containerd[1802]: time="2025-03-17T17:56:47.722409700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:56:47.722431 containerd[1802]: time="2025-03-17T17:56:47.722428800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:56:47.722546 containerd[1802]: time="2025-03-17T17:56:47.722528200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:56:47.722610 containerd[1802]: time="2025-03-17T17:56:47.722600100Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:56:48.024005 containerd[1802]: time="2025-03-17T17:56:48.023872300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:56:48.024005 containerd[1802]: time="2025-03-17T17:56:48.023972300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:56:48.024005 containerd[1802]: time="2025-03-17T17:56:48.024003500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:56:48.024228 containerd[1802]: time="2025-03-17T17:56:48.024030100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:56:48.024228 containerd[1802]: time="2025-03-17T17:56:48.024049900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:56:48.024340 containerd[1802]: time="2025-03-17T17:56:48.024279400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:56:48.024811 containerd[1802]: time="2025-03-17T17:56:48.024771800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:56:48.024984 containerd[1802]: time="2025-03-17T17:56:48.024954700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Mar 17 17:56:48.025050 containerd[1802]: time="2025-03-17T17:56:48.024983100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:56:48.025050 containerd[1802]: time="2025-03-17T17:56:48.025014500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:56:48.025050 containerd[1802]: time="2025-03-17T17:56:48.025040600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:56:48.025195 containerd[1802]: time="2025-03-17T17:56:48.025077400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:56:48.025195 containerd[1802]: time="2025-03-17T17:56:48.025103900Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:56:48.025195 containerd[1802]: time="2025-03-17T17:56:48.025128900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:56:48.025195 containerd[1802]: time="2025-03-17T17:56:48.025157800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:56:48.025195 containerd[1802]: time="2025-03-17T17:56:48.025181700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:56:48.025418 containerd[1802]: time="2025-03-17T17:56:48.025204300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:56:48.025418 containerd[1802]: time="2025-03-17T17:56:48.025226900Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:56:48.025418 containerd[1802]: time="2025-03-17T17:56:48.025259700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025418 containerd[1802]: time="2025-03-17T17:56:48.025283200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025418 containerd[1802]: time="2025-03-17T17:56:48.025306700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025418 containerd[1802]: time="2025-03-17T17:56:48.025331900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025418 containerd[1802]: time="2025-03-17T17:56:48.025353600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025418 containerd[1802]: time="2025-03-17T17:56:48.025376500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025418 containerd[1802]: time="2025-03-17T17:56:48.025400600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025423000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025455000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025483500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025504100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025526100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025547100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025592400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025630300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025664400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025687800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025754400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025784300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:56:48.025851 containerd[1802]: time="2025-03-17T17:56:48.025805500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:56:48.026423 containerd[1802]: time="2025-03-17T17:56:48.025829800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:56:48.026423 containerd[1802]: time="2025-03-17T17:56:48.025847100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:56:48.026423 containerd[1802]: time="2025-03-17T17:56:48.025877200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:56:48.026423 containerd[1802]: time="2025-03-17T17:56:48.025897700Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:56:48.026423 containerd[1802]: time="2025-03-17T17:56:48.025916100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:56:48.028779 containerd[1802]: time="2025-03-17T17:56:48.026396100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:56:48.028779 containerd[1802]: time="2025-03-17T17:56:48.026838300Z" level=info msg="Connect containerd service" Mar 17 17:56:48.028779 containerd[1802]: time="2025-03-17T17:56:48.026920500Z" level=info msg="using legacy CRI server" Mar 17 17:56:48.028779 containerd[1802]: time="2025-03-17T17:56:48.026936700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:56:48.028779 containerd[1802]: time="2025-03-17T17:56:48.027260100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:56:48.028779 containerd[1802]: time="2025-03-17T17:56:48.028187700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 
17:56:48.029158 containerd[1802]: time="2025-03-17T17:56:48.028823400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:56:48.029158 containerd[1802]: time="2025-03-17T17:56:48.028889100Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:56:48.029158 containerd[1802]: time="2025-03-17T17:56:48.028966100Z" level=info msg="Start subscribing containerd event" Mar 17 17:56:48.029158 containerd[1802]: time="2025-03-17T17:56:48.029020500Z" level=info msg="Start recovering state" Mar 17 17:56:48.029158 containerd[1802]: time="2025-03-17T17:56:48.029106600Z" level=info msg="Start event monitor" Mar 17 17:56:48.029158 containerd[1802]: time="2025-03-17T17:56:48.029121000Z" level=info msg="Start snapshots syncer" Mar 17 17:56:48.029158 containerd[1802]: time="2025-03-17T17:56:48.029134800Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:56:48.029158 containerd[1802]: time="2025-03-17T17:56:48.029150100Z" level=info msg="Start streaming server" Mar 17 17:56:48.029385 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:56:48.033182 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:56:48.036692 containerd[1802]: time="2025-03-17T17:56:48.034829100Z" level=info msg="containerd successfully booted in 0.336421s" Mar 17 17:56:48.037804 systemd[1]: Startup finished in 618ms (firmware) + 8.189s (loader) + 8.133s (kernel) + 15.602s (userspace) = 32.544s. Mar 17 17:56:48.469870 login[1957]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 17:56:48.471529 login[1958]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 17:56:48.485386 systemd-logind[1773]: New session 2 of user core. Mar 17 17:56:48.487338 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:56:48.495915 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:56:48.499551 systemd-logind[1773]: New session 1 of user core. Mar 17 17:56:48.518193 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:56:48.530946 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:56:48.534543 (systemd)[1975]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:56:48.661031 systemd[1975]: Queued start job for default target default.target. Mar 17 17:56:48.661530 systemd[1975]: Created slice app.slice - User Application Slice. Mar 17 17:56:48.661560 systemd[1975]: Reached target paths.target - Paths. Mar 17 17:56:48.661596 systemd[1975]: Reached target timers.target - Timers. Mar 17 17:56:48.672676 systemd[1975]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:56:48.679450 systemd[1975]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:56:48.679524 systemd[1975]: Reached target sockets.target - Sockets. Mar 17 17:56:48.679543 systemd[1975]: Reached target basic.target - Basic System. Mar 17 17:56:48.679607 systemd[1975]: Reached target default.target - Main User Target. Mar 17 17:56:48.679646 systemd[1975]: Startup finished in 137ms. Mar 17 17:56:48.680076 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:56:48.689503 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:56:48.691919 systemd[1]: Started session-2.scope - Session 2 of User core. 
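The kubelet.service failure at 17:56:47 (and on every scheduled restart later in the log) is expected at this stage: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, and the KUBELET_KUBEADM_ARGS environment file stays empty until then. Purely as a hedged sketch of that file's shape, not the configuration this node will eventually receive, a minimal KubeletConfiguration looks like:

    # Hypothetical minimal /var/lib/kubelet/config.yaml (normally generated by kubeadm);
    # every value below is an illustrative assumption, not taken from this host
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
    authorization:
      mode: Webhook
    cgroupDriver: systemd
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local
    EOF

Until a provisioner writes that file, the unit keeps exiting with status 1 and systemd keeps bumping the restart counter, which is exactly the pattern visible through the rest of this log.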
Mar 17 17:56:49.437460 waagent[1954]: 2025-03-17T17:56:49.437349Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 17 17:56:49.440842 waagent[1954]: 2025-03-17T17:56:49.440762Z INFO Daemon Daemon OS: flatcar 4152.2.2 Mar 17 17:56:49.443389 waagent[1954]: 2025-03-17T17:56:49.443326Z INFO Daemon Daemon Python: 3.11.10 Mar 17 17:56:49.445977 waagent[1954]: 2025-03-17T17:56:49.445908Z INFO Daemon Daemon Run daemon Mar 17 17:56:49.448474 waagent[1954]: 2025-03-17T17:56:49.448421Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.2' Mar 17 17:56:49.452770 waagent[1954]: 2025-03-17T17:56:49.452704Z INFO Daemon Daemon Using waagent for provisioning Mar 17 17:56:49.455346 waagent[1954]: 2025-03-17T17:56:49.455296Z INFO Daemon Daemon Activate resource disk Mar 17 17:56:49.457481 waagent[1954]: 2025-03-17T17:56:49.457432Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 17 17:56:49.465536 waagent[1954]: 2025-03-17T17:56:49.465461Z INFO Daemon Daemon Found device: None Mar 17 17:56:49.467760 waagent[1954]: 2025-03-17T17:56:49.467703Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 17 17:56:49.471674 waagent[1954]: 2025-03-17T17:56:49.471615Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 17 17:56:49.477024 waagent[1954]: 2025-03-17T17:56:49.476963Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 17:56:49.479933 waagent[1954]: 2025-03-17T17:56:49.479878Z INFO Daemon Daemon Running default provisioning handler Mar 17 17:56:49.489328 waagent[1954]: 2025-03-17T17:56:49.489229Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 17 17:56:49.495523 waagent[1954]: 2025-03-17T17:56:49.495464Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 17:56:49.499843 waagent[1954]: 2025-03-17T17:56:49.499786Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 17:56:49.503915 waagent[1954]: 2025-03-17T17:56:49.501202Z INFO Daemon Daemon Copying ovf-env.xml Mar 17 17:56:49.685358 waagent[1954]: 2025-03-17T17:56:49.681849Z INFO Daemon Daemon Successfully mounted dvd Mar 17 17:56:49.697491 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 17 17:56:49.699501 waagent[1954]: 2025-03-17T17:56:49.699430Z INFO Daemon Daemon Detect protocol endpoint Mar 17 17:56:49.713340 waagent[1954]: 2025-03-17T17:56:49.701383Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 17:56:49.713340 waagent[1954]: 2025-03-17T17:56:49.702536Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 17 17:56:49.713340 waagent[1954]: 2025-03-17T17:56:49.703331Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 17 17:56:49.713340 waagent[1954]: 2025-03-17T17:56:49.704288Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 17 17:56:49.713340 waagent[1954]: 2025-03-17T17:56:49.704990Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 17 17:56:49.723126 waagent[1954]: 2025-03-17T17:56:49.723066Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 17 17:56:49.730599 waagent[1954]: 2025-03-17T17:56:49.724613Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 17 17:56:49.730599 waagent[1954]: 2025-03-17T17:56:49.725204Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 17 17:56:49.791656 waagent[1954]: 2025-03-17T17:56:49.791520Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 17 17:56:49.801334 waagent[1954]: 2025-03-17T17:56:49.793143Z INFO Daemon Daemon Forcing an update of the goal state. Mar 17 17:56:49.801334 waagent[1954]: 2025-03-17T17:56:49.797366Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 17:56:49.809417 waagent[1954]: 2025-03-17T17:56:49.809354Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Mar 17 17:56:49.825154 waagent[1954]: 2025-03-17T17:56:49.811308Z INFO Daemon Mar 17 17:56:49.825154 waagent[1954]: 2025-03-17T17:56:49.813148Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e14aa51b-c0c1-480d-84b8-bd42f767a573 eTag: 14092857027992822803 source: Fabric] Mar 17 17:56:49.825154 waagent[1954]: 2025-03-17T17:56:49.814622Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 17 17:56:49.825154 waagent[1954]: 2025-03-17T17:56:49.815716Z INFO Daemon Mar 17 17:56:49.825154 waagent[1954]: 2025-03-17T17:56:49.816489Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 17 17:56:49.828587 waagent[1954]: 2025-03-17T17:56:49.828529Z INFO Daemon Daemon Downloading artifacts profile blob Mar 17 17:56:49.949099 waagent[1954]: 2025-03-17T17:56:49.948945Z INFO Daemon Downloaded certificate {'thumbprint': 'B76B618FA78F797A89F93C6CAB0D8B88FA251FD5', 'hasPrivateKey': True} Mar 17 17:56:49.955371 waagent[1954]: 2025-03-17T17:56:49.955304Z INFO Daemon Downloaded certificate {'thumbprint': '258F91B82B1636EF5F2F5EDCD838267605208C8F', 'hasPrivateKey': False} Mar 17 17:56:49.960599 waagent[1954]: 2025-03-17T17:56:49.960522Z INFO Daemon Fetch goal state completed Mar 17 17:56:49.994280 waagent[1954]: 2025-03-17T17:56:49.994173Z INFO Daemon Daemon Starting provisioning Mar 17 17:56:49.997230 waagent[1954]: 2025-03-17T17:56:49.997144Z INFO Daemon Daemon Handle ovf-env.xml. Mar 17 17:56:50.008268 waagent[1954]: 2025-03-17T17:56:49.998242Z INFO Daemon Daemon Set hostname [ci-4152.2.2-a-99edcdcd5a] Mar 17 17:56:50.008268 waagent[1954]: 2025-03-17T17:56:50.001470Z INFO Daemon Daemon Publish hostname [ci-4152.2.2-a-99edcdcd5a] Mar 17 17:56:50.008268 waagent[1954]: 2025-03-17T17:56:50.003017Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 17 17:56:50.008268 waagent[1954]: 2025-03-17T17:56:50.003476Z INFO Daemon Daemon Primary interface is [eth0] Mar 17 17:56:50.027269 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:56:50.027281 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
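systemd-networkd keeps matching eth0 against the catch-all /usr/lib/systemd/network/zz-default.network unit; the "potentially unpredictable interface name" note only means the match is a wildcard rather than a stable property such as the MAC address. A hedged way to inspect or override that behaviour (the override file below is an illustration and does not exist on this host):

    # Show the catch-all unit that eth0 matched (path taken from the log above);
    # on Flatcar it is essentially a wildcard [Match] with DHCP enabled
    cat /usr/lib/systemd/network/zz-default.network
    # Illustrative override pinning eth0 explicitly
    cat <<'EOF' >/etc/systemd/network/00-eth0.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    networkctl reload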
Mar 17 17:56:50.027331 systemd-networkd[1363]: eth0: DHCP lease lost Mar 17 17:56:50.028810 waagent[1954]: 2025-03-17T17:56:50.028700Z INFO Daemon Daemon Create user account if not exists Mar 17 17:56:50.043142 waagent[1954]: 2025-03-17T17:56:50.030258Z INFO Daemon Daemon User core already exists, skip useradd Mar 17 17:56:50.043142 waagent[1954]: 2025-03-17T17:56:50.031290Z INFO Daemon Daemon Configure sudoer Mar 17 17:56:50.043142 waagent[1954]: 2025-03-17T17:56:50.032557Z INFO Daemon Daemon Configure sshd Mar 17 17:56:50.043142 waagent[1954]: 2025-03-17T17:56:50.033380Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 17 17:56:50.043142 waagent[1954]: 2025-03-17T17:56:50.034181Z INFO Daemon Daemon Deploy ssh public key. Mar 17 17:56:50.043680 systemd-networkd[1363]: eth0: DHCPv6 lease lost Mar 17 17:56:50.083632 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 17 17:56:57.405100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:56:57.410822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:57.523758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:56:57.527886 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:56:58.137285 kubelet[2045]: E0317 17:56:58.137224 2045 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:56:58.141255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:56:58.141565 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:57:08.155251 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:57:08.160820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:57:08.259753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:57:08.263624 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:57:08.890542 kubelet[2066]: E0317 17:57:08.890485 2066 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:57:08.893271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:57:08.893590 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:57:08.925435 chronyd[1767]: Selected source PHC0 Mar 17 17:57:18.905122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:57:18.911093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:57:19.016755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:57:19.019838 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:57:19.626419 kubelet[2086]: E0317 17:57:19.626359 2086 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:57:19.629148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:57:19.629453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:57:20.109365 waagent[1954]: 2025-03-17T17:57:20.109293Z INFO Daemon Daemon Provisioning complete Mar 17 17:57:20.120473 waagent[1954]: 2025-03-17T17:57:20.120407Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 17 17:57:20.127282 waagent[1954]: 2025-03-17T17:57:20.121659Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 17 17:57:20.127282 waagent[1954]: 2025-03-17T17:57:20.122456Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 17 17:57:20.276337 waagent[2095]: 2025-03-17T17:57:20.276225Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 17 17:57:20.276809 waagent[2095]: 2025-03-17T17:57:20.276404Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.2 Mar 17 17:57:20.276809 waagent[2095]: 2025-03-17T17:57:20.276493Z INFO ExtHandler ExtHandler Python: 3.11.10 Mar 17 17:57:20.297406 waagent[2095]: 2025-03-17T17:57:20.297332Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 17:57:20.297623 waagent[2095]: 2025-03-17T17:57:20.297557Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:57:20.297726 waagent[2095]: 2025-03-17T17:57:20.297684Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:57:20.305145 waagent[2095]: 2025-03-17T17:57:20.305081Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 17:57:20.316000 waagent[2095]: 2025-03-17T17:57:20.315947Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 17 17:57:20.316453 waagent[2095]: 2025-03-17T17:57:20.316400Z INFO ExtHandler Mar 17 17:57:20.316531 waagent[2095]: 2025-03-17T17:57:20.316492Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: edb236c1-438a-4983-9960-a539a9f58bf1 eTag: 14092857027992822803 source: Fabric] Mar 17 17:57:20.316879 waagent[2095]: 2025-03-17T17:57:20.316829Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 17 17:57:20.317446 waagent[2095]: 2025-03-17T17:57:20.317383Z INFO ExtHandler Mar 17 17:57:20.317513 waagent[2095]: 2025-03-17T17:57:20.317476Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 17 17:57:20.321162 waagent[2095]: 2025-03-17T17:57:20.321121Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 17:57:20.395058 waagent[2095]: 2025-03-17T17:57:20.394927Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B76B618FA78F797A89F93C6CAB0D8B88FA251FD5', 'hasPrivateKey': True} Mar 17 17:57:20.395437 waagent[2095]: 2025-03-17T17:57:20.395386Z INFO ExtHandler Downloaded certificate {'thumbprint': '258F91B82B1636EF5F2F5EDCD838267605208C8F', 'hasPrivateKey': False} Mar 17 17:57:20.395906 waagent[2095]: 2025-03-17T17:57:20.395856Z INFO ExtHandler Fetch goal state completed Mar 17 17:57:20.408453 waagent[2095]: 2025-03-17T17:57:20.408400Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2095 Mar 17 17:57:20.408619 waagent[2095]: 2025-03-17T17:57:20.408562Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 17 17:57:20.410178 waagent[2095]: 2025-03-17T17:57:20.410123Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.2', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 17:57:20.410547 waagent[2095]: 2025-03-17T17:57:20.410498Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 17:57:20.424054 waagent[2095]: 2025-03-17T17:57:20.424018Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 17:57:20.424231 waagent[2095]: 2025-03-17T17:57:20.424190Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 17:57:20.430887 waagent[2095]: 2025-03-17T17:57:20.430766Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 17 17:57:20.437471 systemd[1]: Reloading requested from client PID 2110 ('systemctl') (unit waagent.service)... Mar 17 17:57:20.437488 systemd[1]: Reloading... Mar 17 17:57:20.523675 zram_generator::config[2145]: No configuration found. Mar 17 17:57:20.646140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:57:20.725017 systemd[1]: Reloading finished in 286 ms. Mar 17 17:57:20.752880 waagent[2095]: 2025-03-17T17:57:20.752400Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 17 17:57:20.759372 systemd[1]: Reloading requested from client PID 2206 ('systemctl') (unit waagent.service)... Mar 17 17:57:20.759388 systemd[1]: Reloading... Mar 17 17:57:20.836267 zram_generator::config[2236]: No configuration found. Mar 17 17:57:20.966988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:57:21.045565 systemd[1]: Reloading finished in 285 ms. 
Mar 17 17:57:21.072610 waagent[2095]: 2025-03-17T17:57:21.072070Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 17 17:57:21.072610 waagent[2095]: 2025-03-17T17:57:21.072294Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 17 17:57:21.202675 waagent[2095]: 2025-03-17T17:57:21.202551Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 17 17:57:21.203278 waagent[2095]: 2025-03-17T17:57:21.203218Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 17 17:57:21.204065 waagent[2095]: 2025-03-17T17:57:21.204003Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 17 17:57:21.204458 waagent[2095]: 2025-03-17T17:57:21.204400Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 17 17:57:21.204604 waagent[2095]: 2025-03-17T17:57:21.204542Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:57:21.204859 waagent[2095]: 2025-03-17T17:57:21.204814Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:57:21.204917 waagent[2095]: 2025-03-17T17:57:21.204878Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:57:21.205003 waagent[2095]: 2025-03-17T17:57:21.204966Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:57:21.205248 waagent[2095]: 2025-03-17T17:57:21.205197Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 17 17:57:21.205454 waagent[2095]: 2025-03-17T17:57:21.205410Z INFO EnvHandler ExtHandler Configure routes Mar 17 17:57:21.205551 waagent[2095]: 2025-03-17T17:57:21.205506Z INFO EnvHandler ExtHandler Gateway:None Mar 17 17:57:21.205683 waagent[2095]: 2025-03-17T17:57:21.205644Z INFO EnvHandler ExtHandler Routes:None Mar 17 17:57:21.206257 waagent[2095]: 2025-03-17T17:57:21.206205Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 17:57:21.206704 waagent[2095]: 2025-03-17T17:57:21.206661Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 17:57:21.207233 waagent[2095]: 2025-03-17T17:57:21.207123Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 17:57:21.207233 waagent[2095]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 17:57:21.207233 waagent[2095]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 17:57:21.207233 waagent[2095]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 17:57:21.207233 waagent[2095]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:57:21.207233 waagent[2095]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:57:21.207233 waagent[2095]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:57:21.208600 waagent[2095]: 2025-03-17T17:57:21.207629Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 17:57:21.208600 waagent[2095]: 2025-03-17T17:57:21.207758Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 17:57:21.208778 waagent[2095]: 2025-03-17T17:57:21.208729Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 17 17:57:21.215713 waagent[2095]: 2025-03-17T17:57:21.215672Z INFO ExtHandler ExtHandler Mar 17 17:57:21.216709 waagent[2095]: 2025-03-17T17:57:21.216662Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 11168245-9a09-4870-86c6-000cbe711297 correlation 6dd082ca-000f-450c-b298-31331f4430f6 created: 2025-03-17T17:56:05.064476Z] Mar 17 17:57:21.217292 waagent[2095]: 2025-03-17T17:57:21.217211Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 17 17:57:21.218259 waagent[2095]: 2025-03-17T17:57:21.218209Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Mar 17 17:57:21.235766 waagent[2095]: 2025-03-17T17:57:21.235703Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 17:57:21.235766 waagent[2095]: Executing ['ip', '-a', '-o', 'link']: Mar 17 17:57:21.235766 waagent[2095]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 17:57:21.235766 waagent[2095]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:df:92:77 brd ff:ff:ff:ff:ff:ff Mar 17 17:57:21.235766 waagent[2095]: 3: enP62964s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:df:92:77 brd ff:ff:ff:ff:ff:ff\ altname enP62964p0s2 Mar 17 17:57:21.235766 waagent[2095]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 17:57:21.235766 waagent[2095]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 17:57:21.235766 waagent[2095]: 2: eth0 inet 10.200.8.34/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 17 17:57:21.235766 waagent[2095]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 17:57:21.235766 waagent[2095]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 17 17:57:21.235766 waagent[2095]: 2: eth0 inet6 fe80::20d:3aff:fedf:9277/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 17 17:57:21.235766 waagent[2095]: 3: enP62964s1 inet6 fe80::20d:3aff:fedf:9277/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 17 17:57:21.264141 waagent[2095]: 2025-03-17T17:57:21.264098Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F1842460-C364-4377-88AA-1C8A050BB494;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 17 17:57:21.281534 waagent[2095]: 2025-03-17T17:57:21.281467Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Mar 17 17:57:21.281534 waagent[2095]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:57:21.281534 waagent[2095]: pkts bytes target prot opt in out source destination Mar 17 17:57:21.281534 waagent[2095]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:57:21.281534 waagent[2095]: pkts bytes target prot opt in out source destination Mar 17 17:57:21.281534 waagent[2095]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:57:21.281534 waagent[2095]: pkts bytes target prot opt in out source destination Mar 17 17:57:21.281534 waagent[2095]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 17:57:21.281534 waagent[2095]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 17:57:21.281534 waagent[2095]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 17:57:21.285825 waagent[2095]: 2025-03-17T17:57:21.285779Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 17 17:57:21.285825 waagent[2095]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:57:21.285825 waagent[2095]: pkts bytes target prot opt in out source destination Mar 17 17:57:21.285825 waagent[2095]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:57:21.285825 waagent[2095]: pkts bytes target prot opt in out source destination Mar 17 17:57:21.285825 waagent[2095]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:57:21.285825 waagent[2095]: pkts bytes target prot opt in out source destination Mar 17 17:57:21.285825 waagent[2095]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 17:57:21.285825 waagent[2095]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 17:57:21.285825 waagent[2095]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 17:57:21.286211 waagent[2095]: 2025-03-17T17:57:21.286069Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 17 17:57:25.119732 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Mar 17 17:57:29.655740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 17:57:29.665794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:57:29.850746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:57:29.853521 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:57:29.892439 kubelet[2347]: E0317 17:57:29.892385 2347 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:57:29.894935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:57:29.895267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:57:31.339057 update_engine[1775]: I20250317 17:57:31.338896 1775 update_attempter.cc:509] Updating boot flags... Mar 17 17:57:31.397622 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2371) Mar 17 17:57:31.523730 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2375) Mar 17 17:57:39.905141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Mar 17 17:57:39.910781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:57:40.159753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:57:40.163520 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:57:40.201931 kubelet[2482]: E0317 17:57:40.201872 2482 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:57:40.204615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:57:40.204942 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:57:42.394768 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:57:42.402852 systemd[1]: Started sshd@0-10.200.8.34:22-10.200.16.10:40598.service - OpenSSH per-connection server daemon (10.200.16.10:40598). Mar 17 17:57:43.063241 sshd[2491]: Accepted publickey for core from 10.200.16.10 port 40598 ssh2: RSA SHA256:AdkiPYMhDImgcuRsvahG0Sz5MmyG/ISnnLmOMRLvYf0 Mar 17 17:57:43.065045 sshd-session[2491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:43.070928 systemd-logind[1773]: New session 3 of user core. Mar 17 17:57:43.079823 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:57:43.608882 systemd[1]: Started sshd@1-10.200.8.34:22-10.200.16.10:40606.service - OpenSSH per-connection server daemon (10.200.16.10:40606). Mar 17 17:57:44.239857 sshd[2496]: Accepted publickey for core from 10.200.16.10 port 40606 ssh2: RSA SHA256:AdkiPYMhDImgcuRsvahG0Sz5MmyG/ISnnLmOMRLvYf0 Mar 17 17:57:44.241538 sshd-session[2496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:44.247166 systemd-logind[1773]: New session 4 of user core. Mar 17 17:57:44.262810 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:57:44.684226 sshd[2499]: Connection closed by 10.200.16.10 port 40606 Mar 17 17:57:44.685134 sshd-session[2496]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:44.689688 systemd[1]: sshd@1-10.200.8.34:22-10.200.16.10:40606.service: Deactivated successfully. Mar 17 17:57:44.694012 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:57:44.694712 systemd-logind[1773]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:57:44.695638 systemd-logind[1773]: Removed session 4. Mar 17 17:57:44.800909 systemd[1]: Started sshd@2-10.200.8.34:22-10.200.16.10:40608.service - OpenSSH per-connection server daemon (10.200.16.10:40608). Mar 17 17:57:45.428000 sshd[2504]: Accepted publickey for core from 10.200.16.10 port 40608 ssh2: RSA SHA256:AdkiPYMhDImgcuRsvahG0Sz5MmyG/ISnnLmOMRLvYf0 Mar 17 17:57:45.429746 sshd-session[2504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:45.435382 systemd-logind[1773]: New session 5 of user core. Mar 17 17:57:45.441805 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:57:45.867997 sshd[2507]: Connection closed by 10.200.16.10 port 40608 Mar 17 17:57:45.869303 sshd-session[2504]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:45.872802 systemd[1]: sshd@2-10.200.8.34:22-10.200.16.10:40608.service: Deactivated successfully. Mar 17 17:57:45.878021 systemd-logind[1773]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:57:45.878937 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:57:45.880100 systemd-logind[1773]: Removed session 5. Mar 17 17:57:45.974925 systemd[1]: Started sshd@3-10.200.8.34:22-10.200.16.10:40610.service - OpenSSH per-connection server daemon (10.200.16.10:40610). Mar 17 17:57:46.613049 sshd[2512]: Accepted publickey for core from 10.200.16.10 port 40610 ssh2: RSA SHA256:AdkiPYMhDImgcuRsvahG0Sz5MmyG/ISnnLmOMRLvYf0 Mar 17 17:57:46.614772 sshd-session[2512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:46.619473 systemd-logind[1773]: New session 6 of user core. Mar 17 17:57:46.625830 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:57:47.060593 sshd[2515]: Connection closed by 10.200.16.10 port 40610 Mar 17 17:57:47.061454 sshd-session[2512]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:47.066218 systemd[1]: sshd@3-10.200.8.34:22-10.200.16.10:40610.service: Deactivated successfully. Mar 17 17:57:47.070912 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:57:47.071667 systemd-logind[1773]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:57:47.072536 systemd-logind[1773]: Removed session 6. Mar 17 17:57:47.167891 systemd[1]: Started sshd@4-10.200.8.34:22-10.200.16.10:40616.service - OpenSSH per-connection server daemon (10.200.16.10:40616). Mar 17 17:57:47.798047 sshd[2520]: Accepted publickey for core from 10.200.16.10 port 40616 ssh2: RSA SHA256:AdkiPYMhDImgcuRsvahG0Sz5MmyG/ISnnLmOMRLvYf0 Mar 17 17:57:47.799772 sshd-session[2520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:47.804674 systemd-logind[1773]: New session 7 of user core. Mar 17 17:57:47.811003 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:57:48.203934 sudo[2524]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:57:48.204317 sudo[2524]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:57:48.223091 sudo[2524]: pam_unix(sudo:session): session closed for user root Mar 17 17:57:48.324507 sshd[2523]: Connection closed by 10.200.16.10 port 40616 Mar 17 17:57:48.325844 sshd-session[2520]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:48.329316 systemd[1]: sshd@4-10.200.8.34:22-10.200.16.10:40616.service: Deactivated successfully. Mar 17 17:57:48.333756 systemd-logind[1773]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:57:48.334198 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:57:48.335767 systemd-logind[1773]: Removed session 7. Mar 17 17:57:48.433153 systemd[1]: Started sshd@5-10.200.8.34:22-10.200.16.10:60096.service - OpenSSH per-connection server daemon (10.200.16.10:60096). Mar 17 17:57:49.061197 sshd[2529]: Accepted publickey for core from 10.200.16.10 port 60096 ssh2: RSA SHA256:AdkiPYMhDImgcuRsvahG0Sz5MmyG/ISnnLmOMRLvYf0 Mar 17 17:57:49.062988 sshd-session[2529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:49.067687 systemd-logind[1773]: New session 8 of user core. 
Mar 17 17:57:49.078806 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:57:49.408045 sudo[2534]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:57:49.408406 sudo[2534]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:57:49.411747 sudo[2534]: pam_unix(sudo:session): session closed for user root Mar 17 17:57:49.416628 sudo[2533]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:57:49.416975 sudo[2533]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:57:49.429932 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:57:49.456834 augenrules[2556]: No rules Mar 17 17:57:49.458456 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:57:49.458951 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:57:49.461010 sudo[2533]: pam_unix(sudo:session): session closed for user root Mar 17 17:57:49.562383 sshd[2532]: Connection closed by 10.200.16.10 port 60096 Mar 17 17:57:49.563308 sshd-session[2529]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:49.567765 systemd[1]: sshd@5-10.200.8.34:22-10.200.16.10:60096.service: Deactivated successfully. Mar 17 17:57:49.571437 systemd-logind[1773]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:57:49.571952 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:57:49.573137 systemd-logind[1773]: Removed session 8. Mar 17 17:57:49.696078 systemd[1]: Started sshd@6-10.200.8.34:22-10.200.16.10:60104.service - OpenSSH per-connection server daemon (10.200.16.10:60104). Mar 17 17:57:50.214312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 17:57:50.221056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:57:50.327899 sshd[2565]: Accepted publickey for core from 10.200.16.10 port 60104 ssh2: RSA SHA256:AdkiPYMhDImgcuRsvahG0Sz5MmyG/ISnnLmOMRLvYf0 Mar 17 17:57:50.329371 sshd-session[2565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:50.333630 systemd-logind[1773]: New session 9 of user core. Mar 17 17:57:50.343093 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:57:50.412752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:57:50.416785 (kubelet)[2581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:57:50.672591 sudo[2587]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:57:50.672962 sudo[2587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:57:50.882703 kubelet[2581]: E0317 17:57:50.882541 2581 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:57:50.884691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:57:50.885325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:57:51.552043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:57:51.561368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:57:51.589428 systemd[1]: Reloading requested from client PID 2628 ('systemctl') (unit session-9.scope)... Mar 17 17:57:51.589448 systemd[1]: Reloading... Mar 17 17:57:51.706603 zram_generator::config[2664]: No configuration found. Mar 17 17:57:51.844797 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:57:51.922420 systemd[1]: Reloading finished in 332 ms. Mar 17 17:57:51.975792 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:57:51.975891 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:57:51.976268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:57:51.992239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:57:52.231820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:57:52.236687 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:57:52.274370 kubelet[2750]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:57:52.274370 kubelet[2750]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:57:52.274370 kubelet[2750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:57:52.274879 kubelet[2750]: I0317 17:57:52.274445 2750 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:57:53.014614 kubelet[2750]: I0317 17:57:53.014556 2750 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:57:53.014614 kubelet[2750]: I0317 17:57:53.014603 2750 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:57:53.014927 kubelet[2750]: I0317 17:57:53.014903 2750 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:57:53.036478 kubelet[2750]: I0317 17:57:53.035959 2750 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:57:53.047022 kubelet[2750]: I0317 17:57:53.046994 2750 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:57:53.047472 kubelet[2750]: I0317 17:57:53.047430 2750 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:57:53.047685 kubelet[2750]: I0317 17:57:53.047467 2750 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.8.34","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:57:53.048232 kubelet[2750]: I0317 17:57:53.048208 2750 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:57:53.048293 kubelet[2750]: I0317 17:57:53.048237 2750 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:57:53.048401 kubelet[2750]: I0317 17:57:53.048382 2750 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:57:53.049081 kubelet[2750]: I0317 17:57:53.049062 2750 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:57:53.049162 kubelet[2750]: I0317 17:57:53.049085 2750 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:57:53.049162 kubelet[2750]: I0317 17:57:53.049112 2750 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:57:53.049162 kubelet[2750]: I0317 17:57:53.049130 2750 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:57:53.049620 kubelet[2750]: E0317 17:57:53.049565 2750 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:53.049957 kubelet[2750]: E0317 17:57:53.049902 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:53.053057 kubelet[2750]: I0317 17:57:53.053002 2750 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:57:53.055188 kubelet[2750]: I0317 17:57:53.054752 2750 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:57:53.055188 kubelet[2750]: W0317 17:57:53.054822 2750 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:57:53.055530 kubelet[2750]: I0317 17:57:53.055504 2750 server.go:1264] "Started kubelet" Mar 17 17:57:53.055771 kubelet[2750]: W0317 17:57:53.055752 2750 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.8.34" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:57:53.055880 kubelet[2750]: E0317 17:57:53.055868 2750 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.34" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:57:53.056029 kubelet[2750]: W0317 17:57:53.056007 2750 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:57:53.056136 kubelet[2750]: E0317 17:57:53.056034 2750 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:57:53.056136 kubelet[2750]: I0317 17:57:53.056065 2750 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:57:53.060590 kubelet[2750]: I0317 17:57:53.059715 2750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:57:53.062422 kubelet[2750]: I0317 17:57:53.061896 2750 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:57:53.062422 kubelet[2750]: I0317 17:57:53.062242 2750 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:57:53.066968 kubelet[2750]: I0317 17:57:53.066950 2750 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:57:53.070484 kubelet[2750]: E0317 17:57:53.068588 2750 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.8.34.182da8d730fb8ee9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.8.34,UID:10.200.8.34,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.8.34,},FirstTimestamp:2025-03-17 17:57:53.055477481 +0000 UTC m=+0.814695510,LastTimestamp:2025-03-17 17:57:53.055477481 +0000 UTC m=+0.814695510,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.8.34,}" Mar 17 17:57:53.070484 kubelet[2750]: I0317 17:57:53.069993 2750 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:57:53.070925 kubelet[2750]: I0317 17:57:53.070901 2750 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:57:53.071038 kubelet[2750]: I0317 17:57:53.071015 2750 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:57:53.071540 kubelet[2750]: E0317 17:57:53.071512 2750 kubelet.go:1467] "Image 
garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:57:53.073564 kubelet[2750]: I0317 17:57:53.073539 2750 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:57:53.073773 kubelet[2750]: I0317 17:57:53.073757 2750 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:57:53.074360 kubelet[2750]: I0317 17:57:53.074339 2750 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:57:53.095471 kubelet[2750]: I0317 17:57:53.095434 2750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:57:53.096981 kubelet[2750]: I0317 17:57:53.096959 2750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:57:53.097106 kubelet[2750]: I0317 17:57:53.097095 2750 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:57:53.097184 kubelet[2750]: I0317 17:57:53.097176 2750 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:57:53.097295 kubelet[2750]: E0317 17:57:53.097275 2750 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:57:53.109771 kubelet[2750]: E0317 17:57:53.109739 2750 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.34\" not found" node="10.200.8.34" Mar 17 17:57:53.111093 kubelet[2750]: I0317 17:57:53.111076 2750 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:57:53.111190 kubelet[2750]: I0317 17:57:53.111183 2750 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:57:53.111242 kubelet[2750]: I0317 17:57:53.111237 2750 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:57:53.117348 kubelet[2750]: I0317 17:57:53.117304 2750 policy_none.go:49] "None policy: Start" Mar 17 17:57:53.118253 kubelet[2750]: I0317 17:57:53.118238 2750 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:57:53.118342 kubelet[2750]: I0317 17:57:53.118289 2750 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:57:53.127589 kubelet[2750]: I0317 17:57:53.126871 2750 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:57:53.127589 kubelet[2750]: I0317 17:57:53.127080 2750 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:57:53.127589 kubelet[2750]: I0317 17:57:53.127201 2750 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:57:53.129363 kubelet[2750]: E0317 17:57:53.129349 2750 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.34\" not found" Mar 17 17:57:53.171809 kubelet[2750]: I0317 17:57:53.171780 2750 kubelet_node_status.go:73] "Attempting to register node" node="10.200.8.34" Mar 17 17:57:53.181052 kubelet[2750]: I0317 17:57:53.181027 2750 kubelet_node_status.go:76] "Successfully registered node" node="10.200.8.34" Mar 17 17:57:53.195262 kubelet[2750]: E0317 17:57:53.195235 2750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:53.260985 sudo[2587]: pam_unix(sudo:session): session closed for user root Mar 17 17:57:53.296432 kubelet[2750]: E0317 17:57:53.296289 2750 kubelet_node_status.go:462] "Error getting 
the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:53.361300 sshd[2572]: Connection closed by 10.200.16.10 port 60104 Mar 17 17:57:53.362150 sshd-session[2565]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:53.368807 systemd[1]: sshd@6-10.200.8.34:22-10.200.16.10:60104.service: Deactivated successfully. Mar 17 17:57:53.372036 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:57:53.373036 systemd-logind[1773]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:57:53.374136 systemd-logind[1773]: Removed session 9. Mar 17 17:57:53.396659 kubelet[2750]: E0317 17:57:53.396626 2750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:53.497183 kubelet[2750]: E0317 17:57:53.497129 2750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:53.598379 kubelet[2750]: E0317 17:57:53.598215 2750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:53.699090 kubelet[2750]: E0317 17:57:53.699036 2750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:53.799857 kubelet[2750]: E0317 17:57:53.799800 2750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:53.900817 kubelet[2750]: E0317 17:57:53.900648 2750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:54.001422 kubelet[2750]: E0317 17:57:54.001364 2750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:54.016868 kubelet[2750]: I0317 17:57:54.016740 2750 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 17:57:54.017351 kubelet[2750]: W0317 17:57:54.017167 2750 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:57:54.017351 kubelet[2750]: W0317 17:57:54.017167 2750 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:57:54.050165 kubelet[2750]: E0317 17:57:54.050092 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:54.101730 kubelet[2750]: E0317 17:57:54.101625 2750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:54.202761 kubelet[2750]: E0317 17:57:54.202704 2750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.34\" not found" Mar 17 17:57:54.303917 kubelet[2750]: I0317 17:57:54.303875 2750 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 17:57:54.304527 containerd[1802]: time="2025-03-17T17:57:54.304322482Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 17:57:54.305016 kubelet[2750]: I0317 17:57:54.304775 2750 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 17:57:55.051179 kubelet[2750]: I0317 17:57:55.051120 2750 apiserver.go:52] "Watching apiserver" Mar 17 17:57:55.051423 kubelet[2750]: E0317 17:57:55.051131 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:55.055805 kubelet[2750]: I0317 17:57:55.055729 2750 topology_manager.go:215] "Topology Admit Handler" podUID="a943f23c-759b-4919-8091-067e8ba38e73" podNamespace="calico-system" podName="csi-node-driver-l6kcj" Mar 17 17:57:55.055955 kubelet[2750]: I0317 17:57:55.055892 2750 topology_manager.go:215] "Topology Admit Handler" podUID="13503855-a36f-43ac-993d-e2917cffc4b2" podNamespace="kube-system" podName="kube-proxy-vv8bj" Mar 17 17:57:55.056016 kubelet[2750]: I0317 17:57:55.056002 2750 topology_manager.go:215] "Topology Admit Handler" podUID="341c9113-500f-4998-9d1f-be4b9e0f5fff" podNamespace="calico-system" podName="calico-node-gbq5z" Mar 17 17:57:55.057506 kubelet[2750]: E0317 17:57:55.056229 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:57:55.075558 kubelet[2750]: I0317 17:57:55.074110 2750 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:57:55.083416 kubelet[2750]: I0317 17:57:55.083386 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13503855-a36f-43ac-993d-e2917cffc4b2-lib-modules\") pod \"kube-proxy-vv8bj\" (UID: \"13503855-a36f-43ac-993d-e2917cffc4b2\") " pod="kube-system/kube-proxy-vv8bj" Mar 17 17:57:55.083540 kubelet[2750]: I0317 17:57:55.083422 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/341c9113-500f-4998-9d1f-be4b9e0f5fff-tigera-ca-bundle\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.083540 kubelet[2750]: I0317 17:57:55.083446 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/341c9113-500f-4998-9d1f-be4b9e0f5fff-cni-bin-dir\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.083540 kubelet[2750]: I0317 17:57:55.083466 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/341c9113-500f-4998-9d1f-be4b9e0f5fff-cni-log-dir\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.083540 kubelet[2750]: I0317 17:57:55.083488 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a943f23c-759b-4919-8091-067e8ba38e73-registration-dir\") pod \"csi-node-driver-l6kcj\" (UID: \"a943f23c-759b-4919-8091-067e8ba38e73\") " 
pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:57:55.083540 kubelet[2750]: I0317 17:57:55.083511 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13503855-a36f-43ac-993d-e2917cffc4b2-xtables-lock\") pod \"kube-proxy-vv8bj\" (UID: \"13503855-a36f-43ac-993d-e2917cffc4b2\") " pod="kube-system/kube-proxy-vv8bj" Mar 17 17:57:55.083763 kubelet[2750]: I0317 17:57:55.083531 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/341c9113-500f-4998-9d1f-be4b9e0f5fff-var-lib-calico\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.083763 kubelet[2750]: I0317 17:57:55.083552 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/341c9113-500f-4998-9d1f-be4b9e0f5fff-cni-net-dir\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.083763 kubelet[2750]: I0317 17:57:55.083589 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a943f23c-759b-4919-8091-067e8ba38e73-kubelet-dir\") pod \"csi-node-driver-l6kcj\" (UID: \"a943f23c-759b-4919-8091-067e8ba38e73\") " pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:57:55.083763 kubelet[2750]: I0317 17:57:55.083615 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96jh\" (UniqueName: \"kubernetes.io/projected/13503855-a36f-43ac-993d-e2917cffc4b2-kube-api-access-p96jh\") pod \"kube-proxy-vv8bj\" (UID: \"13503855-a36f-43ac-993d-e2917cffc4b2\") " pod="kube-system/kube-proxy-vv8bj" Mar 17 17:57:55.083763 kubelet[2750]: I0317 17:57:55.083637 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/341c9113-500f-4998-9d1f-be4b9e0f5fff-policysync\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.083951 kubelet[2750]: I0317 17:57:55.083661 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2d6f\" (UniqueName: \"kubernetes.io/projected/341c9113-500f-4998-9d1f-be4b9e0f5fff-kube-api-access-m2d6f\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.083951 kubelet[2750]: I0317 17:57:55.083684 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/341c9113-500f-4998-9d1f-be4b9e0f5fff-node-certs\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.083951 kubelet[2750]: I0317 17:57:55.083705 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/341c9113-500f-4998-9d1f-be4b9e0f5fff-var-run-calico\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.083951 
kubelet[2750]: I0317 17:57:55.083725 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/341c9113-500f-4998-9d1f-be4b9e0f5fff-flexvol-driver-host\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.083951 kubelet[2750]: I0317 17:57:55.083750 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a943f23c-759b-4919-8091-067e8ba38e73-varrun\") pod \"csi-node-driver-l6kcj\" (UID: \"a943f23c-759b-4919-8091-067e8ba38e73\") " pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:57:55.084114 kubelet[2750]: I0317 17:57:55.083771 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a943f23c-759b-4919-8091-067e8ba38e73-socket-dir\") pod \"csi-node-driver-l6kcj\" (UID: \"a943f23c-759b-4919-8091-067e8ba38e73\") " pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:57:55.084114 kubelet[2750]: I0317 17:57:55.083796 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxhx\" (UniqueName: \"kubernetes.io/projected/a943f23c-759b-4919-8091-067e8ba38e73-kube-api-access-lmxhx\") pod \"csi-node-driver-l6kcj\" (UID: \"a943f23c-759b-4919-8091-067e8ba38e73\") " pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:57:55.084114 kubelet[2750]: I0317 17:57:55.083819 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/341c9113-500f-4998-9d1f-be4b9e0f5fff-lib-modules\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.084114 kubelet[2750]: I0317 17:57:55.083842 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/341c9113-500f-4998-9d1f-be4b9e0f5fff-xtables-lock\") pod \"calico-node-gbq5z\" (UID: \"341c9113-500f-4998-9d1f-be4b9e0f5fff\") " pod="calico-system/calico-node-gbq5z" Mar 17 17:57:55.084114 kubelet[2750]: I0317 17:57:55.083864 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13503855-a36f-43ac-993d-e2917cffc4b2-kube-proxy\") pod \"kube-proxy-vv8bj\" (UID: \"13503855-a36f-43ac-993d-e2917cffc4b2\") " pod="kube-system/kube-proxy-vv8bj" Mar 17 17:57:55.187890 kubelet[2750]: E0317 17:57:55.186866 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:57:55.187890 kubelet[2750]: W0317 17:57:55.186900 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:57:55.187890 kubelet[2750]: E0317 17:57:55.186938 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:57:55.187890 kubelet[2750]: E0317 17:57:55.187196 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:57:55.187890 kubelet[2750]: W0317 17:57:55.187212 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:57:55.187890 kubelet[2750]: E0317 17:57:55.187228 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:57:55.187890 kubelet[2750]: E0317 17:57:55.187436 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:57:55.187890 kubelet[2750]: W0317 17:57:55.187449 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:57:55.187890 kubelet[2750]: E0317 17:57:55.187463 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:57:55.187890 kubelet[2750]: E0317 17:57:55.187800 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:57:55.188548 kubelet[2750]: W0317 17:57:55.187814 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:57:55.188548 kubelet[2750]: E0317 17:57:55.187831 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:57:55.199777 kubelet[2750]: E0317 17:57:55.199750 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:57:55.206851 kubelet[2750]: W0317 17:57:55.206825 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:57:55.206967 kubelet[2750]: E0317 17:57:55.206857 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:57:55.207201 kubelet[2750]: E0317 17:57:55.207189 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:57:55.207331 kubelet[2750]: W0317 17:57:55.207274 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:57:55.207331 kubelet[2750]: E0317 17:57:55.207293 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:57:55.210098 kubelet[2750]: E0317 17:57:55.209837 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:57:55.210098 kubelet[2750]: W0317 17:57:55.209852 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:57:55.210098 kubelet[2750]: E0317 17:57:55.209868 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:57:55.213192 kubelet[2750]: E0317 17:57:55.213172 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:57:55.213192 kubelet[2750]: W0317 17:57:55.213191 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:57:55.213330 kubelet[2750]: E0317 17:57:55.213205 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:57:55.361981 containerd[1802]: time="2025-03-17T17:57:55.361827704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gbq5z,Uid:341c9113-500f-4998-9d1f-be4b9e0f5fff,Namespace:calico-system,Attempt:0,}" Mar 17 17:57:55.364369 containerd[1802]: time="2025-03-17T17:57:55.363558336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vv8bj,Uid:13503855-a36f-43ac-993d-e2917cffc4b2,Namespace:kube-system,Attempt:0,}" Mar 17 17:57:55.877529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126393693.mount: Deactivated successfully. 
Mar 17 17:57:55.902193 containerd[1802]: time="2025-03-17T17:57:55.902138126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:57:55.906759 containerd[1802]: time="2025-03-17T17:57:55.906699709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 17 17:57:55.910126 containerd[1802]: time="2025-03-17T17:57:55.910086070Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:57:55.913956 containerd[1802]: time="2025-03-17T17:57:55.913920340Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:57:55.915836 containerd[1802]: time="2025-03-17T17:57:55.915785174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:57:55.918616 containerd[1802]: time="2025-03-17T17:57:55.918553824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:57:55.919668 containerd[1802]: time="2025-03-17T17:57:55.919361539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.561999ms" Mar 17 17:57:55.925588 containerd[1802]: time="2025-03-17T17:57:55.924771737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.80593ms" Mar 17 17:57:56.051999 kubelet[2750]: E0317 17:57:56.051930 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:56.150070 containerd[1802]: time="2025-03-17T17:57:56.147909093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:57:56.150070 containerd[1802]: time="2025-03-17T17:57:56.147972795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:57:56.150070 containerd[1802]: time="2025-03-17T17:57:56.147987095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:57:56.150070 containerd[1802]: time="2025-03-17T17:57:56.148078397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:57:56.150363 containerd[1802]: time="2025-03-17T17:57:56.149731927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:57:56.150363 containerd[1802]: time="2025-03-17T17:57:56.149786428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:57:56.150363 containerd[1802]: time="2025-03-17T17:57:56.149807628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:57:56.150949 containerd[1802]: time="2025-03-17T17:57:56.150896848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:57:56.299649 systemd[1]: run-containerd-runc-k8s.io-c10093cad22dbb4c04b4c7188a5ce22bbbc202014901022215c1630bfeed5564-runc.eH4GRu.mount: Deactivated successfully. Mar 17 17:57:56.336519 containerd[1802]: time="2025-03-17T17:57:56.336465421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gbq5z,Uid:341c9113-500f-4998-9d1f-be4b9e0f5fff,Namespace:calico-system,Attempt:0,} returns sandbox id \"c10093cad22dbb4c04b4c7188a5ce22bbbc202014901022215c1630bfeed5564\"" Mar 17 17:57:56.339208 containerd[1802]: time="2025-03-17T17:57:56.339171770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vv8bj,Uid:13503855-a36f-43ac-993d-e2917cffc4b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7170571dfeeefa6b9f20f4c0c3f5576c360cca7ec5739a5c2e5b348047150905\"" Mar 17 17:57:56.339794 containerd[1802]: time="2025-03-17T17:57:56.339774181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:57:57.052591 kubelet[2750]: E0317 17:57:57.052526 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:57.098439 kubelet[2750]: E0317 17:57:57.097897 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:57:57.620647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063990876.mount: Deactivated successfully. 
Mar 17 17:57:57.753096 containerd[1802]: time="2025-03-17T17:57:57.753037860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:57:57.754984 containerd[1802]: time="2025-03-17T17:57:57.754942395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6857253" Mar 17 17:57:57.757544 containerd[1802]: time="2025-03-17T17:57:57.757376641Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:57:57.761985 containerd[1802]: time="2025-03-17T17:57:57.761936226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:57:57.762992 containerd[1802]: time="2025-03-17T17:57:57.762486336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.422564552s" Mar 17 17:57:57.762992 containerd[1802]: time="2025-03-17T17:57:57.762525737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 17 17:57:57.763855 containerd[1802]: time="2025-03-17T17:57:57.763748860Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:57:57.765262 containerd[1802]: time="2025-03-17T17:57:57.765235488Z" level=info msg="CreateContainer within sandbox \"c10093cad22dbb4c04b4c7188a5ce22bbbc202014901022215c1630bfeed5564\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:57:57.798034 containerd[1802]: time="2025-03-17T17:57:57.797989900Z" level=info msg="CreateContainer within sandbox \"c10093cad22dbb4c04b4c7188a5ce22bbbc202014901022215c1630bfeed5564\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bb4d27f06c1b31174e277d90838c17043da5ddf50c9a9f9ce46b5d270a7f9e70\"" Mar 17 17:57:57.798721 containerd[1802]: time="2025-03-17T17:57:57.798561411Z" level=info msg="StartContainer for \"bb4d27f06c1b31174e277d90838c17043da5ddf50c9a9f9ce46b5d270a7f9e70\"" Mar 17 17:57:57.859742 containerd[1802]: time="2025-03-17T17:57:57.859695854Z" level=info msg="StartContainer for \"bb4d27f06c1b31174e277d90838c17043da5ddf50c9a9f9ce46b5d270a7f9e70\" returns successfully" Mar 17 17:57:58.064614 kubelet[2750]: E0317 17:57:58.053154 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:58.086283 containerd[1802]: time="2025-03-17T17:57:58.086214389Z" level=info msg="shim disconnected" id=bb4d27f06c1b31174e277d90838c17043da5ddf50c9a9f9ce46b5d270a7f9e70 namespace=k8s.io Mar 17 17:57:58.086283 containerd[1802]: time="2025-03-17T17:57:58.086273890Z" level=warning msg="cleaning up after shim disconnected" id=bb4d27f06c1b31174e277d90838c17043da5ddf50c9a9f9ce46b5d270a7f9e70 namespace=k8s.io Mar 17 17:57:58.086283 containerd[1802]: time="2025-03-17T17:57:58.086285491Z" level=info 
msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:57:58.585960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb4d27f06c1b31174e277d90838c17043da5ddf50c9a9f9ce46b5d270a7f9e70-rootfs.mount: Deactivated successfully. Mar 17 17:57:59.053489 kubelet[2750]: E0317 17:57:59.053435 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:59.085007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062260471.mount: Deactivated successfully. Mar 17 17:57:59.098493 kubelet[2750]: E0317 17:57:59.098008 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:57:59.557710 containerd[1802]: time="2025-03-17T17:57:59.557662102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:57:59.563105 containerd[1802]: time="2025-03-17T17:57:59.562963301Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185380" Mar 17 17:57:59.567163 containerd[1802]: time="2025-03-17T17:57:59.567100879Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:57:59.572464 containerd[1802]: time="2025-03-17T17:57:59.572393777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:57:59.573460 containerd[1802]: time="2025-03-17T17:57:59.573286194Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.809489333s" Mar 17 17:57:59.573460 containerd[1802]: time="2025-03-17T17:57:59.573330395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 17:57:59.575166 containerd[1802]: time="2025-03-17T17:57:59.574917825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:57:59.575822 containerd[1802]: time="2025-03-17T17:57:59.575797241Z" level=info msg="CreateContainer within sandbox \"7170571dfeeefa6b9f20f4c0c3f5576c360cca7ec5739a5c2e5b348047150905\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:57:59.621236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768468152.mount: Deactivated successfully. 
Mar 17 17:57:59.632835 containerd[1802]: time="2025-03-17T17:57:59.632797807Z" level=info msg="CreateContainer within sandbox \"7170571dfeeefa6b9f20f4c0c3f5576c360cca7ec5739a5c2e5b348047150905\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0f3c302f142fb844f02a9d405db47b77adb88e8153a7e9e4bfa33054c0577a3e\"" Mar 17 17:57:59.633210 containerd[1802]: time="2025-03-17T17:57:59.633186414Z" level=info msg="StartContainer for \"0f3c302f142fb844f02a9d405db47b77adb88e8153a7e9e4bfa33054c0577a3e\"" Mar 17 17:57:59.692027 containerd[1802]: time="2025-03-17T17:57:59.691964913Z" level=info msg="StartContainer for \"0f3c302f142fb844f02a9d405db47b77adb88e8153a7e9e4bfa33054c0577a3e\" returns successfully" Mar 17 17:58:00.054198 kubelet[2750]: E0317 17:58:00.054162 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:00.140078 kubelet[2750]: I0317 17:58:00.140000 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vv8bj" podStartSLOduration=3.906335274 podStartE2EDuration="7.13998409s" podCreationTimestamp="2025-03-17 17:57:53 +0000 UTC" firstStartedPulling="2025-03-17 17:57:56.340744399 +0000 UTC m=+4.099962428" lastFinishedPulling="2025-03-17 17:57:59.574393215 +0000 UTC m=+7.333611244" observedRunningTime="2025-03-17 17:58:00.139858988 +0000 UTC m=+7.899077117" watchObservedRunningTime="2025-03-17 17:58:00.13998409 +0000 UTC m=+7.899202119" Mar 17 17:58:01.054643 kubelet[2750]: E0317 17:58:01.054595 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:01.098691 kubelet[2750]: E0317 17:58:01.098132 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:58:02.055811 kubelet[2750]: E0317 17:58:02.055743 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:03.056723 kubelet[2750]: E0317 17:58:03.056676 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:03.099006 kubelet[2750]: E0317 17:58:03.098944 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:58:04.057944 kubelet[2750]: E0317 17:58:04.057824 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:04.248622 containerd[1802]: time="2025-03-17T17:58:04.248556111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:04.250873 containerd[1802]: time="2025-03-17T17:58:04.250822353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 17 17:58:04.254409 containerd[1802]: time="2025-03-17T17:58:04.254360719Z" level=info msg="ImageCreate event 
name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:04.258627 containerd[1802]: time="2025-03-17T17:58:04.258596399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:04.259940 containerd[1802]: time="2025-03-17T17:58:04.259290112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 4.684340086s" Mar 17 17:58:04.259940 containerd[1802]: time="2025-03-17T17:58:04.259327012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 17 17:58:04.261885 containerd[1802]: time="2025-03-17T17:58:04.261851459Z" level=info msg="CreateContainer within sandbox \"c10093cad22dbb4c04b4c7188a5ce22bbbc202014901022215c1630bfeed5564\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:58:04.295117 containerd[1802]: time="2025-03-17T17:58:04.295070881Z" level=info msg="CreateContainer within sandbox \"c10093cad22dbb4c04b4c7188a5ce22bbbc202014901022215c1630bfeed5564\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d2d3c1d96c6b1ca13344439441cb28692bbf8240df9b15d68b8b83adf72acc5c\"" Mar 17 17:58:04.295768 containerd[1802]: time="2025-03-17T17:58:04.295633791Z" level=info msg="StartContainer for \"d2d3c1d96c6b1ca13344439441cb28692bbf8240df9b15d68b8b83adf72acc5c\"" Mar 17 17:58:04.359342 containerd[1802]: time="2025-03-17T17:58:04.359157379Z" level=info msg="StartContainer for \"d2d3c1d96c6b1ca13344439441cb28692bbf8240df9b15d68b8b83adf72acc5c\" returns successfully" Mar 17 17:58:05.058177 kubelet[2750]: E0317 17:58:05.058120 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:05.099162 kubelet[2750]: E0317 17:58:05.098623 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:58:05.755598 kubelet[2750]: I0317 17:58:05.754315 2750 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:58:05.767879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2d3c1d96c6b1ca13344439441cb28692bbf8240df9b15d68b8b83adf72acc5c-rootfs.mount: Deactivated successfully. 
Mar 17 17:58:06.058736 kubelet[2750]: E0317 17:58:06.058555 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:07.059456 kubelet[2750]: E0317 17:58:07.059393 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:07.102247 containerd[1802]: time="2025-03-17T17:58:07.101722218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:0,}" Mar 17 17:58:08.111729 kubelet[2750]: E0317 17:58:08.060189 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:08.166087 containerd[1802]: time="2025-03-17T17:58:08.166005723Z" level=info msg="shim disconnected" id=d2d3c1d96c6b1ca13344439441cb28692bbf8240df9b15d68b8b83adf72acc5c namespace=k8s.io Mar 17 17:58:08.166087 containerd[1802]: time="2025-03-17T17:58:08.166079825Z" level=warning msg="cleaning up after shim disconnected" id=d2d3c1d96c6b1ca13344439441cb28692bbf8240df9b15d68b8b83adf72acc5c namespace=k8s.io Mar 17 17:58:08.166087 containerd[1802]: time="2025-03-17T17:58:08.166092625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:58:08.233209 containerd[1802]: time="2025-03-17T17:58:08.233156019Z" level=error msg="Failed to destroy network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:08.235680 containerd[1802]: time="2025-03-17T17:58:08.233520527Z" level=error msg="encountered an error cleaning up failed sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:08.235680 containerd[1802]: time="2025-03-17T17:58:08.233613529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:08.235818 kubelet[2750]: E0317 17:58:08.233903 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:08.235818 kubelet[2750]: E0317 17:58:08.234000 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:08.235818 kubelet[2750]: E0317 17:58:08.234030 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:08.236053 kubelet[2750]: E0317 17:58:08.234083 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:58:08.236547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b-shm.mount: Deactivated successfully. Mar 17 17:58:08.328629 kubelet[2750]: I0317 17:58:08.328555 2750 topology_manager.go:215] "Topology Admit Handler" podUID="7ad6f385-e692-43a7-9885-d0ad267f32c1" podNamespace="default" podName="nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:08.377058 kubelet[2750]: I0317 17:58:08.376880 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xg8\" (UniqueName: \"kubernetes.io/projected/7ad6f385-e692-43a7-9885-d0ad267f32c1-kube-api-access-95xg8\") pod \"nginx-deployment-85f456d6dd-sdm9p\" (UID: \"7ad6f385-e692-43a7-9885-d0ad267f32c1\") " pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:08.633765 containerd[1802]: time="2025-03-17T17:58:08.633619839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:0,}" Mar 17 17:58:08.710977 containerd[1802]: time="2025-03-17T17:58:08.710920261Z" level=error msg="Failed to destroy network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:08.711293 containerd[1802]: time="2025-03-17T17:58:08.711260668Z" level=error msg="encountered an error cleaning up failed sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:08.711385 containerd[1802]: time="2025-03-17T17:58:08.711337270Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:08.711668 kubelet[2750]: E0317 17:58:08.711622 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:08.711779 kubelet[2750]: E0317 17:58:08.711692 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:08.711779 kubelet[2750]: E0317 17:58:08.711725 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:08.711892 kubelet[2750]: E0317 17:58:08.711794 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-sdm9p" podUID="7ad6f385-e692-43a7-9885-d0ad267f32c1" Mar 17 17:58:09.060450 kubelet[2750]: E0317 17:58:09.060388 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:09.147949 kubelet[2750]: I0317 17:58:09.147901 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137" Mar 17 17:58:09.149152 containerd[1802]: time="2025-03-17T17:58:09.148700812Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" Mar 17 17:58:09.149152 containerd[1802]: time="2025-03-17T17:58:09.149037619Z" level=info msg="Ensure that sandbox 2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137 in task-service has been cleanup successfully" Mar 17 17:58:09.149584 containerd[1802]: 
time="2025-03-17T17:58:09.149331526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:58:09.149584 containerd[1802]: time="2025-03-17T17:58:09.149358026Z" level=info msg="TearDown network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" successfully" Mar 17 17:58:09.149584 containerd[1802]: time="2025-03-17T17:58:09.149375127Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" returns successfully" Mar 17 17:58:09.151223 containerd[1802]: time="2025-03-17T17:58:09.150136344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:1,}" Mar 17 17:58:09.151320 kubelet[2750]: I0317 17:58:09.150655 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b" Mar 17 17:58:09.151379 containerd[1802]: time="2025-03-17T17:58:09.151292969Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" Mar 17 17:58:09.151562 containerd[1802]: time="2025-03-17T17:58:09.151534275Z" level=info msg="Ensure that sandbox c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b in task-service has been cleanup successfully" Mar 17 17:58:09.151729 containerd[1802]: time="2025-03-17T17:58:09.151708279Z" level=info msg="TearDown network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" successfully" Mar 17 17:58:09.151729 containerd[1802]: time="2025-03-17T17:58:09.151725379Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" returns successfully" Mar 17 17:58:09.152179 containerd[1802]: time="2025-03-17T17:58:09.152155689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:1,}" Mar 17 17:58:09.171733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137-shm.mount: Deactivated successfully. Mar 17 17:58:09.171924 systemd[1]: run-netns-cni\x2d70987f68\x2dfce9\x2d46f3\x2dd98a\x2d3208671fe9cd.mount: Deactivated successfully. 
Mar 17 17:58:09.283139 containerd[1802]: time="2025-03-17T17:58:09.282128183Z" level=error msg="Failed to destroy network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:09.285000 containerd[1802]: time="2025-03-17T17:58:09.284887845Z" level=error msg="encountered an error cleaning up failed sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:09.285157 containerd[1802]: time="2025-03-17T17:58:09.284980847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:09.286276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830-shm.mount: Deactivated successfully. Mar 17 17:58:09.288387 kubelet[2750]: E0317 17:58:09.287950 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:09.288387 kubelet[2750]: E0317 17:58:09.288018 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:09.288387 kubelet[2750]: E0317 17:58:09.288051 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:09.288652 kubelet[2750]: E0317 17:58:09.288103 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-sdm9p" podUID="7ad6f385-e692-43a7-9885-d0ad267f32c1" Mar 17 17:58:09.295014 containerd[1802]: time="2025-03-17T17:58:09.294979070Z" level=error msg="Failed to destroy network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:09.295465 containerd[1802]: time="2025-03-17T17:58:09.295431880Z" level=error msg="encountered an error cleaning up failed sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:09.295547 containerd[1802]: time="2025-03-17T17:58:09.295496281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:09.295753 kubelet[2750]: E0317 17:58:09.295717 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:09.295838 kubelet[2750]: E0317 17:58:09.295779 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:09.295838 kubelet[2750]: E0317 17:58:09.295806 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:09.295929 kubelet[2750]: E0317 17:58:09.295858 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:58:10.061046 kubelet[2750]: E0317 17:58:10.060975 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:10.155093 kubelet[2750]: I0317 17:58:10.155056 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31" Mar 17 17:58:10.156611 containerd[1802]: time="2025-03-17T17:58:10.156093350Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\"" Mar 17 17:58:10.156611 containerd[1802]: time="2025-03-17T17:58:10.156424457Z" level=info msg="Ensure that sandbox c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31 in task-service has been cleanup successfully" Mar 17 17:58:10.156978 containerd[1802]: time="2025-03-17T17:58:10.156724564Z" level=info msg="TearDown network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" successfully" Mar 17 17:58:10.156978 containerd[1802]: time="2025-03-17T17:58:10.156750164Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" returns successfully" Mar 17 17:58:10.157427 containerd[1802]: time="2025-03-17T17:58:10.157219675Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" Mar 17 17:58:10.157427 containerd[1802]: time="2025-03-17T17:58:10.157327777Z" level=info msg="TearDown network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" successfully" Mar 17 17:58:10.157427 containerd[1802]: time="2025-03-17T17:58:10.157346578Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" returns successfully" Mar 17 17:58:10.157661 kubelet[2750]: I0317 17:58:10.157419 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830" Mar 17 17:58:10.158058 containerd[1802]: time="2025-03-17T17:58:10.158018393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:2,}" Mar 17 17:58:10.158457 containerd[1802]: time="2025-03-17T17:58:10.158274398Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\"" Mar 17 17:58:10.158557 containerd[1802]: time="2025-03-17T17:58:10.158510504Z" level=info msg="Ensure that sandbox 72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830 in task-service has been cleanup successfully" Mar 17 17:58:10.158805 containerd[1802]: time="2025-03-17T17:58:10.158740809Z" level=info msg="TearDown network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" successfully" Mar 17 17:58:10.158805 containerd[1802]: time="2025-03-17T17:58:10.158796510Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" returns successfully" Mar 17 17:58:10.159151 containerd[1802]: time="2025-03-17T17:58:10.159130417Z" level=info msg="StopPodSandbox for 
\"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" Mar 17 17:58:10.159338 containerd[1802]: time="2025-03-17T17:58:10.159260520Z" level=info msg="TearDown network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" successfully" Mar 17 17:58:10.159338 containerd[1802]: time="2025-03-17T17:58:10.159283321Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" returns successfully" Mar 17 17:58:10.159854 containerd[1802]: time="2025-03-17T17:58:10.159820433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:2,}" Mar 17 17:58:10.171371 systemd[1]: run-netns-cni\x2d7c22bf90\x2dd758\x2dbd89\x2df77e\x2db3ad8f2eff2d.mount: Deactivated successfully. Mar 17 17:58:10.171561 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31-shm.mount: Deactivated successfully. Mar 17 17:58:10.171805 systemd[1]: run-netns-cni\x2d46100064\x2d93a6\x2db5c0\x2d3823\x2ded5755395352.mount: Deactivated successfully. Mar 17 17:58:10.293515 containerd[1802]: time="2025-03-17T17:58:10.293279405Z" level=error msg="Failed to destroy network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:10.294803 containerd[1802]: time="2025-03-17T17:58:10.294541834Z" level=error msg="encountered an error cleaning up failed sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:10.294803 containerd[1802]: time="2025-03-17T17:58:10.294646036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:10.294974 kubelet[2750]: E0317 17:58:10.294936 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:10.295032 kubelet[2750]: E0317 17:58:10.295003 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:10.295073 kubelet[2750]: E0317 
17:58:10.295033 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:10.295118 kubelet[2750]: E0317 17:58:10.295088 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:58:10.317621 containerd[1802]: time="2025-03-17T17:58:10.317113536Z" level=error msg="Failed to destroy network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:10.317621 containerd[1802]: time="2025-03-17T17:58:10.317451144Z" level=error msg="encountered an error cleaning up failed sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:10.317621 containerd[1802]: time="2025-03-17T17:58:10.317524145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:10.318199 kubelet[2750]: E0317 17:58:10.317786 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:10.318199 kubelet[2750]: E0317 17:58:10.317854 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:10.318199 kubelet[2750]: E0317 17:58:10.317886 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:10.318482 kubelet[2750]: E0317 17:58:10.317947 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-sdm9p" podUID="7ad6f385-e692-43a7-9885-d0ad267f32c1" Mar 17 17:58:11.061219 kubelet[2750]: E0317 17:58:11.061153 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:11.160904 kubelet[2750]: I0317 17:58:11.160866 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b" Mar 17 17:58:11.162615 containerd[1802]: time="2025-03-17T17:58:11.161888452Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\"" Mar 17 17:58:11.162615 containerd[1802]: time="2025-03-17T17:58:11.162187459Z" level=info msg="Ensure that sandbox 7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b in task-service has been cleanup successfully" Mar 17 17:58:11.162615 containerd[1802]: time="2025-03-17T17:58:11.162468465Z" level=info msg="TearDown network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" successfully" Mar 17 17:58:11.162615 containerd[1802]: time="2025-03-17T17:58:11.162516166Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" returns successfully" Mar 17 17:58:11.162985 containerd[1802]: time="2025-03-17T17:58:11.162877474Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\"" Mar 17 17:58:11.163040 containerd[1802]: time="2025-03-17T17:58:11.162987277Z" level=info msg="TearDown network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" successfully" Mar 17 17:58:11.163040 containerd[1802]: time="2025-03-17T17:58:11.163004377Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" returns successfully" Mar 17 17:58:11.163688 containerd[1802]: time="2025-03-17T17:58:11.163458587Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" Mar 17 17:58:11.163688 containerd[1802]: time="2025-03-17T17:58:11.163568090Z" level=info msg="TearDown network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" 
successfully" Mar 17 17:58:11.163688 containerd[1802]: time="2025-03-17T17:58:11.163614691Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" returns successfully" Mar 17 17:58:11.164234 kubelet[2750]: I0317 17:58:11.164116 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7" Mar 17 17:58:11.164318 containerd[1802]: time="2025-03-17T17:58:11.164259505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:3,}" Mar 17 17:58:11.164855 containerd[1802]: time="2025-03-17T17:58:11.164816018Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\"" Mar 17 17:58:11.165063 containerd[1802]: time="2025-03-17T17:58:11.165041423Z" level=info msg="Ensure that sandbox 253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7 in task-service has been cleanup successfully" Mar 17 17:58:11.165206 containerd[1802]: time="2025-03-17T17:58:11.165186026Z" level=info msg="TearDown network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" successfully" Mar 17 17:58:11.165286 containerd[1802]: time="2025-03-17T17:58:11.165205226Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" returns successfully" Mar 17 17:58:11.165470 containerd[1802]: time="2025-03-17T17:58:11.165452432Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\"" Mar 17 17:58:11.165615 containerd[1802]: time="2025-03-17T17:58:11.165537734Z" level=info msg="TearDown network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" successfully" Mar 17 17:58:11.165615 containerd[1802]: time="2025-03-17T17:58:11.165554134Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" returns successfully" Mar 17 17:58:11.165921 containerd[1802]: time="2025-03-17T17:58:11.165874341Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" Mar 17 17:58:11.166006 containerd[1802]: time="2025-03-17T17:58:11.165970943Z" level=info msg="TearDown network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" successfully" Mar 17 17:58:11.166006 containerd[1802]: time="2025-03-17T17:58:11.165986544Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" returns successfully" Mar 17 17:58:11.166784 containerd[1802]: time="2025-03-17T17:58:11.166407953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:3,}" Mar 17 17:58:11.171107 systemd[1]: run-netns-cni\x2d93be5809\x2df9b0\x2d039b\x2d45e7\x2d2f712a6311ef.mount: Deactivated successfully. Mar 17 17:58:11.171295 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b-shm.mount: Deactivated successfully. Mar 17 17:58:11.171437 systemd[1]: run-netns-cni\x2d8ae2cb12\x2d5c67\x2d839d\x2d7710\x2d053ba151c1d8.mount: Deactivated successfully. 
Mar 17 17:58:11.171598 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7-shm.mount: Deactivated successfully. Mar 17 17:58:12.061871 kubelet[2750]: E0317 17:58:12.061799 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:12.256537 containerd[1802]: time="2025-03-17T17:58:12.256438732Z" level=error msg="Failed to destroy network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:12.261241 containerd[1802]: time="2025-03-17T17:58:12.259811207Z" level=error msg="encountered an error cleaning up failed sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:12.261241 containerd[1802]: time="2025-03-17T17:58:12.259905309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:12.261507 kubelet[2750]: E0317 17:58:12.260769 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:12.261507 kubelet[2750]: E0317 17:58:12.260840 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:12.261507 kubelet[2750]: E0317 17:58:12.260877 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:12.261337 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562-shm.mount: Deactivated successfully. 
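[Editor's note] The mount units cleaned up alongside each failed attempt look mangled because systemd escapes unit names: '/' becomes '-', and a literal '-' or other special byte becomes a \xNN escape, so run-netns-cni\x2d8ae2cb12\x2d... refers to /run/netns/cni-8ae2cb12-.... On a host, systemd-escape --unescape --path reverses this; below is a small standalone decoder for reading dumps offline (an illustrative helper, not part of any tooling that appears in the log):

import re

def unescape_systemd_mount_unit(unit: str) -> str:
    # systemd unit names encode '/' as '-' and other bytes as \xNN, so the
    # journal shows run-netns-cni\x2d... for /run/netns/cni-...; this reverses it.
    name = unit.removesuffix(".mount")
    parts = name.split("-")  # remaining '-' are path separators
    decoded = [
        re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), part)
        for part in parts    # \x2d inside a part decodes back to a literal '-'
    ]
    return "/" + "/".join(decoded)

print(unescape_systemd_mount_unit(
    r"run-netns-cni\x2d8ae2cb12\x2d5c67\x2d839d\x2d7710\x2d053ba151c1d8.mount"))
# -> /run/netns/cni-8ae2cb12-5c67-839d-7710-053ba151c1d8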
Mar 17 17:58:12.262251 kubelet[2750]: E0317 17:58:12.260930 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-sdm9p" podUID="7ad6f385-e692-43a7-9885-d0ad267f32c1" Mar 17 17:58:12.268811 containerd[1802]: time="2025-03-17T17:58:12.268033890Z" level=error msg="Failed to destroy network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:12.268811 containerd[1802]: time="2025-03-17T17:58:12.268400498Z" level=error msg="encountered an error cleaning up failed sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:12.268811 containerd[1802]: time="2025-03-17T17:58:12.268506801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:12.270446 kubelet[2750]: E0317 17:58:12.270267 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:12.270446 kubelet[2750]: E0317 17:58:12.270341 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:12.270446 kubelet[2750]: E0317 17:58:12.270368 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:12.271084 kubelet[2750]: E0317 17:58:12.270658 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:58:12.273871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf-shm.mount: Deactivated successfully. Mar 17 17:58:13.049498 kubelet[2750]: E0317 17:58:13.049418 2750 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:13.062878 kubelet[2750]: E0317 17:58:13.062812 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:13.173685 kubelet[2750]: I0317 17:58:13.173541 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562" Mar 17 17:58:13.174605 containerd[1802]: time="2025-03-17T17:58:13.174419955Z" level=info msg="StopPodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\"" Mar 17 17:58:13.175151 containerd[1802]: time="2025-03-17T17:58:13.174907264Z" level=info msg="Ensure that sandbox f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562 in task-service has been cleanup successfully" Mar 17 17:58:13.175292 containerd[1802]: time="2025-03-17T17:58:13.175261171Z" level=info msg="TearDown network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" successfully" Mar 17 17:58:13.175382 containerd[1802]: time="2025-03-17T17:58:13.175367873Z" level=info msg="StopPodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" returns successfully" Mar 17 17:58:13.175817 containerd[1802]: time="2025-03-17T17:58:13.175790282Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\"" Mar 17 17:58:13.176034 containerd[1802]: time="2025-03-17T17:58:13.176018186Z" level=info msg="TearDown network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" successfully" Mar 17 17:58:13.176127 containerd[1802]: time="2025-03-17T17:58:13.176115688Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" returns successfully" Mar 17 17:58:13.176510 containerd[1802]: time="2025-03-17T17:58:13.176486795Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\"" Mar 17 17:58:13.176744 containerd[1802]: time="2025-03-17T17:58:13.176714599Z" level=info msg="TearDown network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" successfully" Mar 17 17:58:13.177007 containerd[1802]: time="2025-03-17T17:58:13.176809801Z" level=info msg="StopPodSandbox for 
\"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" returns successfully" Mar 17 17:58:13.177243 containerd[1802]: time="2025-03-17T17:58:13.177215209Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" Mar 17 17:58:13.177594 containerd[1802]: time="2025-03-17T17:58:13.177378612Z" level=info msg="TearDown network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" successfully" Mar 17 17:58:13.177594 containerd[1802]: time="2025-03-17T17:58:13.177406413Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" returns successfully" Mar 17 17:58:13.178536 containerd[1802]: time="2025-03-17T17:58:13.178131327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:4,}" Mar 17 17:58:13.181329 systemd[1]: run-netns-cni\x2dc5107820\x2dc3d7\x2d81cb\x2dc047\x2de4a5e10026aa.mount: Deactivated successfully. Mar 17 17:58:13.184618 kubelet[2750]: I0317 17:58:13.184479 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf" Mar 17 17:58:13.185451 containerd[1802]: time="2025-03-17T17:58:13.185340866Z" level=info msg="StopPodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\"" Mar 17 17:58:13.185946 containerd[1802]: time="2025-03-17T17:58:13.185738874Z" level=info msg="Ensure that sandbox 936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf in task-service has been cleanup successfully" Mar 17 17:58:13.186637 containerd[1802]: time="2025-03-17T17:58:13.186106881Z" level=info msg="TearDown network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" successfully" Mar 17 17:58:13.186637 containerd[1802]: time="2025-03-17T17:58:13.186129382Z" level=info msg="StopPodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" returns successfully" Mar 17 17:58:13.187034 containerd[1802]: time="2025-03-17T17:58:13.187004099Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\"" Mar 17 17:58:13.187215 containerd[1802]: time="2025-03-17T17:58:13.187197902Z" level=info msg="TearDown network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" successfully" Mar 17 17:58:13.187324 containerd[1802]: time="2025-03-17T17:58:13.187309005Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" returns successfully" Mar 17 17:58:13.187684 containerd[1802]: time="2025-03-17T17:58:13.187663511Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\"" Mar 17 17:58:13.187857 containerd[1802]: time="2025-03-17T17:58:13.187839815Z" level=info msg="TearDown network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" successfully" Mar 17 17:58:13.187941 containerd[1802]: time="2025-03-17T17:58:13.187925616Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" returns successfully" Mar 17 17:58:13.188931 containerd[1802]: time="2025-03-17T17:58:13.188515328Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" Mar 17 17:58:13.188931 containerd[1802]: 
time="2025-03-17T17:58:13.188620030Z" level=info msg="TearDown network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" successfully" Mar 17 17:58:13.188931 containerd[1802]: time="2025-03-17T17:58:13.188635130Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" returns successfully" Mar 17 17:58:13.189294 containerd[1802]: time="2025-03-17T17:58:13.189267642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:4,}" Mar 17 17:58:13.191478 systemd[1]: run-netns-cni\x2d59a0592e\x2d87de\x2d4768\x2db3e9\x2daeb7879bcd7e.mount: Deactivated successfully. Mar 17 17:58:13.346464 containerd[1802]: time="2025-03-17T17:58:13.346325383Z" level=error msg="Failed to destroy network for sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:13.348567 containerd[1802]: time="2025-03-17T17:58:13.346731991Z" level=error msg="encountered an error cleaning up failed sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:13.348567 containerd[1802]: time="2025-03-17T17:58:13.346811792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:13.349014 kubelet[2750]: E0317 17:58:13.347096 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:13.349014 kubelet[2750]: E0317 17:58:13.347167 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:13.349014 kubelet[2750]: E0317 17:58:13.347201 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:13.349475 kubelet[2750]: E0317 17:58:13.347257 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-sdm9p" podUID="7ad6f385-e692-43a7-9885-d0ad267f32c1" Mar 17 17:58:13.356448 containerd[1802]: time="2025-03-17T17:58:13.356087172Z" level=error msg="Failed to destroy network for sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:13.356448 containerd[1802]: time="2025-03-17T17:58:13.356419078Z" level=error msg="encountered an error cleaning up failed sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:13.356791 containerd[1802]: time="2025-03-17T17:58:13.356484880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:13.356868 kubelet[2750]: E0317 17:58:13.356715 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:13.356868 kubelet[2750]: E0317 17:58:13.356774 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:13.356868 kubelet[2750]: E0317 17:58:13.356801 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:13.357033 kubelet[2750]: E0317 17:58:13.356854 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:58:14.063261 kubelet[2750]: E0317 17:58:14.063212 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:14.179418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506-shm.mount: Deactivated successfully. Mar 17 17:58:14.179653 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b-shm.mount: Deactivated successfully. Mar 17 17:58:14.194245 kubelet[2750]: I0317 17:58:14.194212 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b" Mar 17 17:58:14.195487 containerd[1802]: time="2025-03-17T17:58:14.195236616Z" level=info msg="StopPodSandbox for \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\"" Mar 17 17:58:14.195694 containerd[1802]: time="2025-03-17T17:58:14.195490221Z" level=info msg="Ensure that sandbox a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b in task-service has been cleanup successfully" Mar 17 17:58:14.195694 containerd[1802]: time="2025-03-17T17:58:14.195680525Z" level=info msg="TearDown network for sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\" successfully" Mar 17 17:58:14.195801 containerd[1802]: time="2025-03-17T17:58:14.195701325Z" level=info msg="StopPodSandbox for \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\" returns successfully" Mar 17 17:58:14.198638 containerd[1802]: time="2025-03-17T17:58:14.198417578Z" level=info msg="StopPodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\"" Mar 17 17:58:14.198638 containerd[1802]: time="2025-03-17T17:58:14.198515880Z" level=info msg="TearDown network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" successfully" Mar 17 17:58:14.198638 containerd[1802]: time="2025-03-17T17:58:14.198532480Z" level=info msg="StopPodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" returns successfully" Mar 17 17:58:14.200324 systemd[1]: run-netns-cni\x2de0b71efc\x2de9a6\x2d28e9\x2d583f\x2d8efe43985eb0.mount: Deactivated successfully. 
Mar 17 17:58:14.205882 containerd[1802]: time="2025-03-17T17:58:14.203697880Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\"" Mar 17 17:58:14.205882 containerd[1802]: time="2025-03-17T17:58:14.203787882Z" level=info msg="TearDown network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" successfully" Mar 17 17:58:14.205882 containerd[1802]: time="2025-03-17T17:58:14.203800682Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" returns successfully" Mar 17 17:58:14.207723 containerd[1802]: time="2025-03-17T17:58:14.207696758Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\"" Mar 17 17:58:14.207812 containerd[1802]: time="2025-03-17T17:58:14.207792960Z" level=info msg="TearDown network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" successfully" Mar 17 17:58:14.207859 containerd[1802]: time="2025-03-17T17:58:14.207814160Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" returns successfully" Mar 17 17:58:14.209323 containerd[1802]: time="2025-03-17T17:58:14.209298289Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" Mar 17 17:58:14.209411 containerd[1802]: time="2025-03-17T17:58:14.209392891Z" level=info msg="TearDown network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" successfully" Mar 17 17:58:14.209488 containerd[1802]: time="2025-03-17T17:58:14.209412791Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" returns successfully" Mar 17 17:58:14.210059 containerd[1802]: time="2025-03-17T17:58:14.210031303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:5,}" Mar 17 17:58:14.216663 kubelet[2750]: I0317 17:58:14.216632 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506" Mar 17 17:58:14.217300 containerd[1802]: time="2025-03-17T17:58:14.217273643Z" level=info msg="StopPodSandbox for \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\"" Mar 17 17:58:14.217564 containerd[1802]: time="2025-03-17T17:58:14.217539948Z" level=info msg="Ensure that sandbox e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506 in task-service has been cleanup successfully" Mar 17 17:58:14.220614 containerd[1802]: time="2025-03-17T17:58:14.219676290Z" level=info msg="TearDown network for sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\" successfully" Mar 17 17:58:14.220614 containerd[1802]: time="2025-03-17T17:58:14.219701590Z" level=info msg="StopPodSandbox for \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\" returns successfully" Mar 17 17:58:14.220985 containerd[1802]: time="2025-03-17T17:58:14.220959414Z" level=info msg="StopPodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\"" Mar 17 17:58:14.221190 containerd[1802]: time="2025-03-17T17:58:14.221065316Z" level=info msg="TearDown network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" successfully" Mar 17 17:58:14.221190 containerd[1802]: time="2025-03-17T17:58:14.221135818Z" level=info 
msg="StopPodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" returns successfully" Mar 17 17:58:14.221800 containerd[1802]: time="2025-03-17T17:58:14.221706529Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\"" Mar 17 17:58:14.221800 containerd[1802]: time="2025-03-17T17:58:14.221792131Z" level=info msg="TearDown network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" successfully" Mar 17 17:58:14.221920 containerd[1802]: time="2025-03-17T17:58:14.221805531Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" returns successfully" Mar 17 17:58:14.221900 systemd[1]: run-netns-cni\x2d14df0768\x2de47c\x2d2a18\x2d75ba\x2da4afb9dd45b3.mount: Deactivated successfully. Mar 17 17:58:14.223499 containerd[1802]: time="2025-03-17T17:58:14.222925252Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\"" Mar 17 17:58:14.223499 containerd[1802]: time="2025-03-17T17:58:14.223496964Z" level=info msg="TearDown network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" successfully" Mar 17 17:58:14.223656 containerd[1802]: time="2025-03-17T17:58:14.223511364Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" returns successfully" Mar 17 17:58:14.227826 containerd[1802]: time="2025-03-17T17:58:14.224390981Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" Mar 17 17:58:14.227826 containerd[1802]: time="2025-03-17T17:58:14.224473082Z" level=info msg="TearDown network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" successfully" Mar 17 17:58:14.227826 containerd[1802]: time="2025-03-17T17:58:14.224487883Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" returns successfully" Mar 17 17:58:14.229588 containerd[1802]: time="2025-03-17T17:58:14.229337977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:5,}" Mar 17 17:58:14.381177 containerd[1802]: time="2025-03-17T17:58:14.380909311Z" level=error msg="Failed to destroy network for sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:14.383889 containerd[1802]: time="2025-03-17T17:58:14.383847468Z" level=error msg="encountered an error cleaning up failed sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:14.384606 containerd[1802]: time="2025-03-17T17:58:14.383937569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:14.384738 kubelet[2750]: E0317 17:58:14.384216 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:14.384738 kubelet[2750]: E0317 17:58:14.384296 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:14.384738 kubelet[2750]: E0317 17:58:14.384326 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:14.385937 kubelet[2750]: E0317 17:58:14.384380 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-sdm9p" podUID="7ad6f385-e692-43a7-9885-d0ad267f32c1" Mar 17 17:58:14.402441 containerd[1802]: time="2025-03-17T17:58:14.402400627Z" level=error msg="Failed to destroy network for sandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:14.402997 containerd[1802]: time="2025-03-17T17:58:14.402964238Z" level=error msg="encountered an error cleaning up failed sandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:14.403197 containerd[1802]: time="2025-03-17T17:58:14.403157541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox 
\"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:14.403769 kubelet[2750]: E0317 17:58:14.403733 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:14.404187 kubelet[2750]: E0317 17:58:14.404122 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:14.404478 kubelet[2750]: E0317 17:58:14.404304 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:14.404478 kubelet[2750]: E0317 17:58:14.404388 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:58:15.064241 kubelet[2750]: E0317 17:58:15.064188 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:15.160467 containerd[1802]: time="2025-03-17T17:58:15.160406201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:15.162493 containerd[1802]: time="2025-03-17T17:58:15.162432840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 17 17:58:15.165678 containerd[1802]: time="2025-03-17T17:58:15.165619402Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:15.169395 containerd[1802]: time="2025-03-17T17:58:15.169344674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:15.170091 containerd[1802]: time="2025-03-17T17:58:15.169932385Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 6.020562759s" Mar 17 17:58:15.170091 containerd[1802]: time="2025-03-17T17:58:15.169975086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 17:58:15.181761 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff-shm.mount: Deactivated successfully. Mar 17 17:58:15.182122 containerd[1802]: time="2025-03-17T17:58:15.182063720Z" level=info msg="CreateContainer within sandbox \"c10093cad22dbb4c04b4c7188a5ce22bbbc202014901022215c1630bfeed5564\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:58:15.182665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2337898377.mount: Deactivated successfully. Mar 17 17:58:15.220142 kubelet[2750]: I0317 17:58:15.220043 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff" Mar 17 17:58:15.221151 containerd[1802]: time="2025-03-17T17:58:15.220816970Z" level=info msg="StopPodSandbox for \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\"" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.221415682Z" level=info msg="Ensure that sandbox d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff in task-service has been cleanup successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.223176016Z" level=info msg="StopPodSandbox for \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\"" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.223387320Z" level=info msg="Ensure that sandbox a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81 in task-service has been cleanup successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.223605224Z" level=info msg="TearDown network for sandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\" successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.223630925Z" level=info msg="StopPodSandbox for \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\" returns successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.223935930Z" level=info msg="StopPodSandbox for \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\"" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.224029732Z" level=info msg="TearDown network for sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\" successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.224046133Z" level=info msg="StopPodSandbox for \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\" returns successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.224282937Z" level=info msg="StopPodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\"" Mar 
17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.224371139Z" level=info msg="TearDown network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.224385139Z" level=info msg="StopPodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" returns successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.224690745Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\"" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.224773447Z" level=info msg="TearDown network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.224787647Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" returns successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.225160354Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\"" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.225259356Z" level=info msg="TearDown network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.225274356Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" returns successfully" Mar 17 17:58:15.225602 containerd[1802]: time="2025-03-17T17:58:15.225538162Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" Mar 17 17:58:15.226288 kubelet[2750]: I0317 17:58:15.222670 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81" Mar 17 17:58:15.226348 containerd[1802]: time="2025-03-17T17:58:15.225640163Z" level=info msg="TearDown network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" successfully" Mar 17 17:58:15.226348 containerd[1802]: time="2025-03-17T17:58:15.225655164Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" returns successfully" Mar 17 17:58:15.226348 containerd[1802]: time="2025-03-17T17:58:15.226046471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:6,}" Mar 17 17:58:15.226495 containerd[1802]: time="2025-03-17T17:58:15.226477980Z" level=info msg="TearDown network for sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\" successfully" Mar 17 17:58:15.226540 containerd[1802]: time="2025-03-17T17:58:15.226499480Z" level=info msg="StopPodSandbox for \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\" returns successfully" Mar 17 17:58:15.226980 containerd[1802]: time="2025-03-17T17:58:15.226868287Z" level=info msg="CreateContainer within sandbox \"c10093cad22dbb4c04b4c7188a5ce22bbbc202014901022215c1630bfeed5564\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0a4cc343da25706ebf5faa9c091e3883e3ebaf70f975232bda6df88e86310b96\"" Mar 17 17:58:15.227218 containerd[1802]: time="2025-03-17T17:58:15.227196294Z" level=info msg="StartContainer for 
\"0a4cc343da25706ebf5faa9c091e3883e3ebaf70f975232bda6df88e86310b96\"" Mar 17 17:58:15.228781 systemd[1]: run-netns-cni\x2d99092366\x2d7a44\x2dd452\x2db48b\x2d36a22938b85c.mount: Deactivated successfully. Mar 17 17:58:15.231223 containerd[1802]: time="2025-03-17T17:58:15.231015468Z" level=info msg="StopPodSandbox for \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\"" Mar 17 17:58:15.231223 containerd[1802]: time="2025-03-17T17:58:15.231114069Z" level=info msg="TearDown network for sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\" successfully" Mar 17 17:58:15.231223 containerd[1802]: time="2025-03-17T17:58:15.231129270Z" level=info msg="StopPodSandbox for \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\" returns successfully" Mar 17 17:58:15.234463 containerd[1802]: time="2025-03-17T17:58:15.234324732Z" level=info msg="StopPodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\"" Mar 17 17:58:15.234463 containerd[1802]: time="2025-03-17T17:58:15.234428434Z" level=info msg="TearDown network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" successfully" Mar 17 17:58:15.234463 containerd[1802]: time="2025-03-17T17:58:15.234443734Z" level=info msg="StopPodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" returns successfully" Mar 17 17:58:15.234802 systemd[1]: run-netns-cni\x2d707289b5\x2da757\x2dfc61\x2df9f0\x2def05f7729994.mount: Deactivated successfully. Mar 17 17:58:15.237087 containerd[1802]: time="2025-03-17T17:58:15.237061585Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\"" Mar 17 17:58:15.237172 containerd[1802]: time="2025-03-17T17:58:15.237153086Z" level=info msg="TearDown network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" successfully" Mar 17 17:58:15.237227 containerd[1802]: time="2025-03-17T17:58:15.237167987Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" returns successfully" Mar 17 17:58:15.237516 containerd[1802]: time="2025-03-17T17:58:15.237495193Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\"" Mar 17 17:58:15.237707 containerd[1802]: time="2025-03-17T17:58:15.237689897Z" level=info msg="TearDown network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" successfully" Mar 17 17:58:15.237797 containerd[1802]: time="2025-03-17T17:58:15.237783599Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" returns successfully" Mar 17 17:58:15.238397 containerd[1802]: time="2025-03-17T17:58:15.238367010Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" Mar 17 17:58:15.238667 containerd[1802]: time="2025-03-17T17:58:15.238608815Z" level=info msg="TearDown network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" successfully" Mar 17 17:58:15.238667 containerd[1802]: time="2025-03-17T17:58:15.238642215Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" returns successfully" Mar 17 17:58:15.239716 containerd[1802]: time="2025-03-17T17:58:15.239690035Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:6,}" Mar 17 17:58:15.317674 containerd[1802]: time="2025-03-17T17:58:15.317227336Z" level=info msg="StartContainer for \"0a4cc343da25706ebf5faa9c091e3883e3ebaf70f975232bda6df88e86310b96\" returns successfully" Mar 17 17:58:15.384321 containerd[1802]: time="2025-03-17T17:58:15.383954928Z" level=error msg="Failed to destroy network for sandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:15.385144 containerd[1802]: time="2025-03-17T17:58:15.384938447Z" level=error msg="encountered an error cleaning up failed sandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:15.385144 containerd[1802]: time="2025-03-17T17:58:15.385022949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:15.386036 kubelet[2750]: E0317 17:58:15.385598 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:15.386036 kubelet[2750]: E0317 17:58:15.385674 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:15.386036 kubelet[2750]: E0317 17:58:15.385702 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6kcj" Mar 17 17:58:15.386531 kubelet[2750]: E0317 17:58:15.385765 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l6kcj_calico-system(a943f23c-759b-4919-8091-067e8ba38e73)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l6kcj" podUID="a943f23c-759b-4919-8091-067e8ba38e73" Mar 17 17:58:15.392500 containerd[1802]: time="2025-03-17T17:58:15.392473193Z" level=error msg="Failed to destroy network for sandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:15.392831 containerd[1802]: time="2025-03-17T17:58:15.392799799Z" level=error msg="encountered an error cleaning up failed sandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:15.392923 containerd[1802]: time="2025-03-17T17:58:15.392885201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:15.393563 kubelet[2750]: E0317 17:58:15.393140 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:58:15.393563 kubelet[2750]: E0317 17:58:15.393206 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:15.393563 kubelet[2750]: E0317 17:58:15.393246 2750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sdm9p" Mar 17 17:58:15.393752 kubelet[2750]: E0317 17:58:15.393307 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-85f456d6dd-sdm9p_default(7ad6f385-e692-43a7-9885-d0ad267f32c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-sdm9p" podUID="7ad6f385-e692-43a7-9885-d0ad267f32c1" Mar 17 17:58:15.609831 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:58:15.609985 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:58:16.065505 kubelet[2750]: E0317 17:58:16.065448 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:16.227655 kubelet[2750]: I0317 17:58:16.227622 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64" Mar 17 17:58:16.228310 containerd[1802]: time="2025-03-17T17:58:16.228270173Z" level=info msg="StopPodSandbox for \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\"" Mar 17 17:58:16.228540 containerd[1802]: time="2025-03-17T17:58:16.228511778Z" level=info msg="Ensure that sandbox 72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64 in task-service has been cleanup successfully" Mar 17 17:58:16.231279 containerd[1802]: time="2025-03-17T17:58:16.231250231Z" level=info msg="TearDown network for sandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\" successfully" Mar 17 17:58:16.231883 containerd[1802]: time="2025-03-17T17:58:16.231790741Z" level=info msg="StopPodSandbox for \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\" returns successfully" Mar 17 17:58:16.232748 containerd[1802]: time="2025-03-17T17:58:16.232430553Z" level=info msg="StopPodSandbox for \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\"" Mar 17 17:58:16.232748 containerd[1802]: time="2025-03-17T17:58:16.232530555Z" level=info msg="TearDown network for sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\" successfully" Mar 17 17:58:16.232748 containerd[1802]: time="2025-03-17T17:58:16.232546456Z" level=info msg="StopPodSandbox for \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\" returns successfully" Mar 17 17:58:16.233341 containerd[1802]: time="2025-03-17T17:58:16.233231769Z" level=info msg="StopPodSandbox for \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\"" Mar 17 17:58:16.233433 systemd[1]: run-netns-cni\x2da677aceb\x2d5797\x2d8e44\x2d453d\x2db583f7ef7c33.mount: Deactivated successfully. 
Mar 17 17:58:16.233770 containerd[1802]: time="2025-03-17T17:58:16.233743079Z" level=info msg="TearDown network for sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\" successfully" Mar 17 17:58:16.233770 containerd[1802]: time="2025-03-17T17:58:16.233761179Z" level=info msg="StopPodSandbox for \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\" returns successfully" Mar 17 17:58:16.234394 kubelet[2750]: I0317 17:58:16.234303 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c" Mar 17 17:58:16.235107 containerd[1802]: time="2025-03-17T17:58:16.235016903Z" level=info msg="StopPodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\"" Mar 17 17:58:16.235183 containerd[1802]: time="2025-03-17T17:58:16.235114305Z" level=info msg="TearDown network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" successfully" Mar 17 17:58:16.235183 containerd[1802]: time="2025-03-17T17:58:16.235129406Z" level=info msg="StopPodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" returns successfully" Mar 17 17:58:16.236371 containerd[1802]: time="2025-03-17T17:58:16.235847820Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\"" Mar 17 17:58:16.236371 containerd[1802]: time="2025-03-17T17:58:16.235904121Z" level=info msg="StopPodSandbox for \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\"" Mar 17 17:58:16.236371 containerd[1802]: time="2025-03-17T17:58:16.236029923Z" level=info msg="TearDown network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" successfully" Mar 17 17:58:16.236371 containerd[1802]: time="2025-03-17T17:58:16.236254327Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" returns successfully" Mar 17 17:58:16.236832 containerd[1802]: time="2025-03-17T17:58:16.236813338Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\"" Mar 17 17:58:16.236990 containerd[1802]: time="2025-03-17T17:58:16.236972841Z" level=info msg="TearDown network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" successfully" Mar 17 17:58:16.241634 containerd[1802]: time="2025-03-17T17:58:16.237070743Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" returns successfully" Mar 17 17:58:16.241634 containerd[1802]: time="2025-03-17T17:58:16.236848939Z" level=info msg="Ensure that sandbox 912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c in task-service has been cleanup successfully" Mar 17 17:58:16.241634 containerd[1802]: time="2025-03-17T17:58:16.238318667Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" Mar 17 17:58:16.241634 containerd[1802]: time="2025-03-17T17:58:16.238463870Z" level=info msg="TearDown network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" successfully" Mar 17 17:58:16.241634 containerd[1802]: time="2025-03-17T17:58:16.238479970Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" returns successfully" Mar 17 17:58:16.241634 containerd[1802]: time="2025-03-17T17:58:16.239287786Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:7,}" Mar 17 17:58:16.242059 containerd[1802]: time="2025-03-17T17:58:16.241987738Z" level=info msg="TearDown network for sandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\" successfully" Mar 17 17:58:16.242059 containerd[1802]: time="2025-03-17T17:58:16.242010239Z" level=info msg="StopPodSandbox for \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\" returns successfully" Mar 17 17:58:16.242821 systemd[1]: run-netns-cni\x2dd9cc6136\x2d75d3\x2d6095\x2def31\x2dc767a2b96aa5.mount: Deactivated successfully. Mar 17 17:58:16.243917 containerd[1802]: time="2025-03-17T17:58:16.243715072Z" level=info msg="StopPodSandbox for \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\"" Mar 17 17:58:16.243917 containerd[1802]: time="2025-03-17T17:58:16.243828774Z" level=info msg="TearDown network for sandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\" successfully" Mar 17 17:58:16.243917 containerd[1802]: time="2025-03-17T17:58:16.243843874Z" level=info msg="StopPodSandbox for \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\" returns successfully" Mar 17 17:58:16.244253 containerd[1802]: time="2025-03-17T17:58:16.244233382Z" level=info msg="StopPodSandbox for \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\"" Mar 17 17:58:16.244618 containerd[1802]: time="2025-03-17T17:58:16.244593089Z" level=info msg="TearDown network for sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\" successfully" Mar 17 17:58:16.244694 containerd[1802]: time="2025-03-17T17:58:16.244618589Z" level=info msg="StopPodSandbox for \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\" returns successfully" Mar 17 17:58:16.247167 containerd[1802]: time="2025-03-17T17:58:16.246845232Z" level=info msg="StopPodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\"" Mar 17 17:58:16.247167 containerd[1802]: time="2025-03-17T17:58:16.247003035Z" level=info msg="TearDown network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" successfully" Mar 17 17:58:16.247167 containerd[1802]: time="2025-03-17T17:58:16.247018136Z" level=info msg="StopPodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" returns successfully" Mar 17 17:58:16.247586 containerd[1802]: time="2025-03-17T17:58:16.247409643Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\"" Mar 17 17:58:16.247586 containerd[1802]: time="2025-03-17T17:58:16.247497345Z" level=info msg="TearDown network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" successfully" Mar 17 17:58:16.247586 containerd[1802]: time="2025-03-17T17:58:16.247512945Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" returns successfully" Mar 17 17:58:16.248696 containerd[1802]: time="2025-03-17T17:58:16.248420363Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\"" Mar 17 17:58:16.248990 containerd[1802]: time="2025-03-17T17:58:16.248861871Z" level=info msg="TearDown network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" successfully" Mar 17 17:58:16.249090 containerd[1802]: time="2025-03-17T17:58:16.249072576Z" level=info 
msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" returns successfully" Mar 17 17:58:16.250716 containerd[1802]: time="2025-03-17T17:58:16.250691807Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" Mar 17 17:58:16.250809 containerd[1802]: time="2025-03-17T17:58:16.250777909Z" level=info msg="TearDown network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" successfully" Mar 17 17:58:16.250809 containerd[1802]: time="2025-03-17T17:58:16.250793109Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" returns successfully" Mar 17 17:58:16.251490 containerd[1802]: time="2025-03-17T17:58:16.251282518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:7,}" Mar 17 17:58:16.263361 kubelet[2750]: I0317 17:58:16.263300 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gbq5z" podStartSLOduration=4.431426915 podStartE2EDuration="23.263277151s" podCreationTimestamp="2025-03-17 17:57:53 +0000 UTC" firstStartedPulling="2025-03-17 17:57:56.339098269 +0000 UTC m=+4.098316298" lastFinishedPulling="2025-03-17 17:58:15.170948405 +0000 UTC m=+22.930166534" observedRunningTime="2025-03-17 17:58:16.262541736 +0000 UTC m=+24.021759765" watchObservedRunningTime="2025-03-17 17:58:16.263277151 +0000 UTC m=+24.022495180" Mar 17 17:58:16.451772 systemd-networkd[1363]: cali386cda18846: Link UP Mar 17 17:58:16.452457 systemd-networkd[1363]: cali386cda18846: Gained carrier Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.349 [INFO][3705] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.362 [INFO][3705] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0 nginx-deployment-85f456d6dd- default 7ad6f385-e692-43a7-9885-d0ad267f32c1 1184 0 2025-03-17 17:58:08 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.34 nginx-deployment-85f456d6dd-sdm9p eth0 default [] [] [kns.default ksa.default.default] cali386cda18846 [] []}} ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Namespace="default" Pod="nginx-deployment-85f456d6dd-sdm9p" WorkloadEndpoint="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.362 [INFO][3705] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Namespace="default" Pod="nginx-deployment-85f456d6dd-sdm9p" WorkloadEndpoint="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.399 [INFO][3732] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" HandleID="k8s-pod-network.126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Workload="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.410 [INFO][3732] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" HandleID="k8s-pod-network.126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Workload="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002927c0), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.34", "pod":"nginx-deployment-85f456d6dd-sdm9p", "timestamp":"2025-03-17 17:58:16.399252583 +0000 UTC"}, Hostname:"10.200.8.34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.410 [INFO][3732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.410 [INFO][3732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.410 [INFO][3732] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.34' Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.412 [INFO][3732] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" host="10.200.8.34" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.415 [INFO][3732] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.34" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.418 [INFO][3732] ipam/ipam.go 489: Trying affinity for 192.168.45.128/26 host="10.200.8.34" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.420 [INFO][3732] ipam/ipam.go 155: Attempting to load block cidr=192.168.45.128/26 host="10.200.8.34" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.422 [INFO][3732] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="10.200.8.34" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.422 [INFO][3732] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" host="10.200.8.34" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.423 [INFO][3732] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2 Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.428 [INFO][3732] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" host="10.200.8.34" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.433 [INFO][3732] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.45.129/26] block=192.168.45.128/26 handle="k8s-pod-network.126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" host="10.200.8.34" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.433 [INFO][3732] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.129/26] handle="k8s-pod-network.126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" host="10.200.8.34" Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.433 [INFO][3732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
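The IPAM trace above follows the same sequence for every pod on this node: acquire the host-wide IPAM lock, confirm the host's affinity for the 192.168.45.128/26 block, claim the next free address from that block (192.168.45.129 here for nginx-deployment-85f456d6dd-sdm9p, 192.168.45.130 for csi-node-driver-l6kcj just below), then release the lock. The following is a minimal, self-contained Go sketch of only the "next free address in an affine block" step; it is not Calico's allocator, and the block CIDR and the already-claimed address are copied from this log.

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeAddr returns the first address in block that is not already
// allocated. This mirrors only the final step of the trace above
// (ipam.go "Attempting to assign 1 addresses from block"); the real
// allocator also holds the host-wide lock and checks block affinity.
func nextFreeAddr(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if a == block.Addr() { // skip the network base address itself
			continue
		}
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.45.128/26") // block affine to host 10.200.8.34
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.45.129"): true, // nginx-deployment-85f456d6dd-sdm9p
	}
	if a, ok := nextFreeAddr(block, allocated); ok {
		fmt.Println("next address:", a) // 192.168.45.130, as claimed for csi-node-driver-l6kcj below
	}
}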
Mar 17 17:58:16.463692 containerd[1802]: 2025-03-17 17:58:16.433 [INFO][3732] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.129/26] IPv6=[] ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" HandleID="k8s-pod-network.126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Workload="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0" Mar 17 17:58:16.465927 containerd[1802]: 2025-03-17 17:58:16.436 [INFO][3705] cni-plugin/k8s.go 386: Populated endpoint ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Namespace="default" Pod="nginx-deployment-85f456d6dd-sdm9p" WorkloadEndpoint="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"7ad6f385-e692-43a7-9885-d0ad267f32c1", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.34", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-sdm9p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali386cda18846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:16.465927 containerd[1802]: 2025-03-17 17:58:16.436 [INFO][3705] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.45.129/32] ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Namespace="default" Pod="nginx-deployment-85f456d6dd-sdm9p" WorkloadEndpoint="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0" Mar 17 17:58:16.465927 containerd[1802]: 2025-03-17 17:58:16.436 [INFO][3705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali386cda18846 ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Namespace="default" Pod="nginx-deployment-85f456d6dd-sdm9p" WorkloadEndpoint="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0" Mar 17 17:58:16.465927 containerd[1802]: 2025-03-17 17:58:16.450 [INFO][3705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Namespace="default" Pod="nginx-deployment-85f456d6dd-sdm9p" WorkloadEndpoint="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0" Mar 17 17:58:16.465927 containerd[1802]: 2025-03-17 17:58:16.450 [INFO][3705] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Namespace="default" Pod="nginx-deployment-85f456d6dd-sdm9p" WorkloadEndpoint="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"7ad6f385-e692-43a7-9885-d0ad267f32c1", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.34", ContainerID:"126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2", Pod:"nginx-deployment-85f456d6dd-sdm9p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali386cda18846", MAC:"d2:48:06:3d:cf:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:16.465927 containerd[1802]: 2025-03-17 17:58:16.461 [INFO][3705] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2" Namespace="default" Pod="nginx-deployment-85f456d6dd-sdm9p" WorkloadEndpoint="10.200.8.34-k8s-nginx--deployment--85f456d6dd--sdm9p-eth0" Mar 17 17:58:16.470410 systemd-networkd[1363]: caliaded8736776: Link UP Mar 17 17:58:16.471337 systemd-networkd[1363]: caliaded8736776: Gained carrier Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.355 [INFO][3711] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.367 [INFO][3711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.34-k8s-csi--node--driver--l6kcj-eth0 csi-node-driver- calico-system a943f23c-759b-4919-8091-067e8ba38e73 1114 0 2025-03-17 17:57:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.8.34 csi-node-driver-l6kcj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaded8736776 [] []}} ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Namespace="calico-system" Pod="csi-node-driver-l6kcj" WorkloadEndpoint="10.200.8.34-k8s-csi--node--driver--l6kcj-" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.367 [INFO][3711] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Namespace="calico-system" Pod="csi-node-driver-l6kcj" WorkloadEndpoint="10.200.8.34-k8s-csi--node--driver--l6kcj-eth0" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.402 [INFO][3737] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" HandleID="k8s-pod-network.c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Workload="10.200.8.34-k8s-csi--node--driver--l6kcj-eth0" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.412 [INFO][3737] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" HandleID="k8s-pod-network.c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Workload="10.200.8.34-k8s-csi--node--driver--l6kcj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293380), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.8.34", "pod":"csi-node-driver-l6kcj", "timestamp":"2025-03-17 17:58:16.402781751 +0000 UTC"}, Hostname:"10.200.8.34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.412 [INFO][3737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.433 [INFO][3737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.433 [INFO][3737] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.34' Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.435 [INFO][3737] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" host="10.200.8.34" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.438 [INFO][3737] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.34" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.442 [INFO][3737] ipam/ipam.go 489: Trying affinity for 192.168.45.128/26 host="10.200.8.34" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.444 [INFO][3737] ipam/ipam.go 155: Attempting to load block cidr=192.168.45.128/26 host="10.200.8.34" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.445 [INFO][3737] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="10.200.8.34" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.445 [INFO][3737] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" host="10.200.8.34" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.447 [INFO][3737] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.454 [INFO][3737] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" host="10.200.8.34" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.461 [INFO][3737] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.45.130/26] block=192.168.45.128/26 handle="k8s-pod-network.c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" host="10.200.8.34" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.461 [INFO][3737] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.130/26] 
handle="k8s-pod-network.c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" host="10.200.8.34" Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.461 [INFO][3737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:58:16.486062 containerd[1802]: 2025-03-17 17:58:16.462 [INFO][3737] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.130/26] IPv6=[] ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" HandleID="k8s-pod-network.c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Workload="10.200.8.34-k8s-csi--node--driver--l6kcj-eth0" Mar 17 17:58:16.487067 containerd[1802]: 2025-03-17 17:58:16.467 [INFO][3711] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Namespace="calico-system" Pod="csi-node-driver-l6kcj" WorkloadEndpoint="10.200.8.34-k8s-csi--node--driver--l6kcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.34-k8s-csi--node--driver--l6kcj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a943f23c-759b-4919-8091-067e8ba38e73", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.34", ContainerID:"", Pod:"csi-node-driver-l6kcj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaded8736776", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:16.487067 containerd[1802]: 2025-03-17 17:58:16.467 [INFO][3711] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.45.130/32] ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Namespace="calico-system" Pod="csi-node-driver-l6kcj" WorkloadEndpoint="10.200.8.34-k8s-csi--node--driver--l6kcj-eth0" Mar 17 17:58:16.487067 containerd[1802]: 2025-03-17 17:58:16.467 [INFO][3711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaded8736776 ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Namespace="calico-system" Pod="csi-node-driver-l6kcj" WorkloadEndpoint="10.200.8.34-k8s-csi--node--driver--l6kcj-eth0" Mar 17 17:58:16.487067 containerd[1802]: 2025-03-17 17:58:16.470 [INFO][3711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Namespace="calico-system" Pod="csi-node-driver-l6kcj" WorkloadEndpoint="10.200.8.34-k8s-csi--node--driver--l6kcj-eth0" Mar 17 17:58:16.487067 containerd[1802]: 2025-03-17 17:58:16.470 [INFO][3711] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Namespace="calico-system" Pod="csi-node-driver-l6kcj" WorkloadEndpoint="10.200.8.34-k8s-csi--node--driver--l6kcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.34-k8s-csi--node--driver--l6kcj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a943f23c-759b-4919-8091-067e8ba38e73", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.34", ContainerID:"c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be", Pod:"csi-node-driver-l6kcj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaded8736776", MAC:"52:d8:47:d8:bd:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:16.487067 containerd[1802]: 2025-03-17 17:58:16.482 [INFO][3711] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be" Namespace="calico-system" Pod="csi-node-driver-l6kcj" WorkloadEndpoint="10.200.8.34-k8s-csi--node--driver--l6kcj-eth0" Mar 17 17:58:16.496075 containerd[1802]: time="2025-03-17T17:58:16.495731450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:58:16.496075 containerd[1802]: time="2025-03-17T17:58:16.495821452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:58:16.496075 containerd[1802]: time="2025-03-17T17:58:16.495838753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:16.496075 containerd[1802]: time="2025-03-17T17:58:16.495947555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:16.520360 containerd[1802]: time="2025-03-17T17:58:16.520076922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:58:16.520360 containerd[1802]: time="2025-03-17T17:58:16.520143523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:58:16.520360 containerd[1802]: time="2025-03-17T17:58:16.520165823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:16.520360 containerd[1802]: time="2025-03-17T17:58:16.520257125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:16.571029 containerd[1802]: time="2025-03-17T17:58:16.570949707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6kcj,Uid:a943f23c-759b-4919-8091-067e8ba38e73,Namespace:calico-system,Attempt:7,} returns sandbox id \"c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be\"" Mar 17 17:58:16.571887 containerd[1802]: time="2025-03-17T17:58:16.571674521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sdm9p,Uid:7ad6f385-e692-43a7-9885-d0ad267f32c1,Namespace:default,Attempt:7,} returns sandbox id \"126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2\"" Mar 17 17:58:16.573271 containerd[1802]: time="2025-03-17T17:58:16.573102248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:58:17.066622 kubelet[2750]: E0317 17:58:17.066534 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:17.196686 kernel: bpftool[3963]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:58:17.472256 systemd-networkd[1363]: vxlan.calico: Link UP Mar 17 17:58:17.472783 systemd-networkd[1363]: vxlan.calico: Gained carrier Mar 17 17:58:17.682223 systemd-networkd[1363]: cali386cda18846: Gained IPv6LL Mar 17 17:58:18.057698 containerd[1802]: time="2025-03-17T17:58:18.057544085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:18.060447 containerd[1802]: time="2025-03-17T17:58:18.060383840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 17 17:58:18.065716 containerd[1802]: time="2025-03-17T17:58:18.065667242Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:18.067601 kubelet[2750]: E0317 17:58:18.066706 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:18.075174 containerd[1802]: time="2025-03-17T17:58:18.074614615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:18.075369 containerd[1802]: time="2025-03-17T17:58:18.075344729Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.50220128s" Mar 17 17:58:18.075491 containerd[1802]: time="2025-03-17T17:58:18.075471532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 17 17:58:18.077053 containerd[1802]: time="2025-03-17T17:58:18.076980161Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:58:18.079915 
containerd[1802]: time="2025-03-17T17:58:18.079737614Z" level=info msg="CreateContainer within sandbox \"c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:58:18.125454 containerd[1802]: time="2025-03-17T17:58:18.125409399Z" level=info msg="CreateContainer within sandbox \"c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dd2993a42120e465b1573a456d7a92440646cec61ca5ebfbfc39dd996fc74bcd\"" Mar 17 17:58:18.126079 containerd[1802]: time="2025-03-17T17:58:18.125999810Z" level=info msg="StartContainer for \"dd2993a42120e465b1573a456d7a92440646cec61ca5ebfbfc39dd996fc74bcd\"" Mar 17 17:58:18.184502 containerd[1802]: time="2025-03-17T17:58:18.184451541Z" level=info msg="StartContainer for \"dd2993a42120e465b1573a456d7a92440646cec61ca5ebfbfc39dd996fc74bcd\" returns successfully" Mar 17 17:58:18.193750 systemd-networkd[1363]: caliaded8736776: Gained IPv6LL Mar 17 17:58:19.067362 kubelet[2750]: E0317 17:58:19.067309 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:19.347727 systemd-networkd[1363]: vxlan.calico: Gained IPv6LL Mar 17 17:58:20.067593 kubelet[2750]: E0317 17:58:20.067525 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:20.616425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131378506.mount: Deactivated successfully. Mar 17 17:58:21.068725 kubelet[2750]: E0317 17:58:21.068565 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:21.826133 containerd[1802]: time="2025-03-17T17:58:21.826081053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:21.828751 containerd[1802]: time="2025-03-17T17:58:21.828689397Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131" Mar 17 17:58:21.831150 containerd[1802]: time="2025-03-17T17:58:21.831094037Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:21.835754 containerd[1802]: time="2025-03-17T17:58:21.835701414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:21.836753 containerd[1802]: time="2025-03-17T17:58:21.836559228Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 3.759543066s" Mar 17 17:58:21.836753 containerd[1802]: time="2025-03-17T17:58:21.836610329Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 17:58:21.838354 containerd[1802]: time="2025-03-17T17:58:21.838326858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:58:21.839073 
containerd[1802]: time="2025-03-17T17:58:21.839047870Z" level=info msg="CreateContainer within sandbox \"126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 17:58:21.868722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623366057.mount: Deactivated successfully. Mar 17 17:58:21.872852 containerd[1802]: time="2025-03-17T17:58:21.872818534Z" level=info msg="CreateContainer within sandbox \"126f8678a87354db52ac200c2f902f35f1aa9a4002cf89d6d61fe609d29208e2\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e0f1bd4ee8b4a6e1e208aa39c262f1989bdf524d2795c0290c47b318e5176a67\"" Mar 17 17:58:21.873422 containerd[1802]: time="2025-03-17T17:58:21.873307542Z" level=info msg="StartContainer for \"e0f1bd4ee8b4a6e1e208aa39c262f1989bdf524d2795c0290c47b318e5176a67\"" Mar 17 17:58:21.927248 containerd[1802]: time="2025-03-17T17:58:21.927132942Z" level=info msg="StartContainer for \"e0f1bd4ee8b4a6e1e208aa39c262f1989bdf524d2795c0290c47b318e5176a67\" returns successfully" Mar 17 17:58:22.069418 kubelet[2750]: E0317 17:58:22.069349 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:22.278976 kubelet[2750]: I0317 17:58:22.278922 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-sdm9p" podStartSLOduration=9.01429522 podStartE2EDuration="14.278903819s" podCreationTimestamp="2025-03-17 17:58:08 +0000 UTC" firstStartedPulling="2025-03-17 17:58:16.57320085 +0000 UTC m=+24.332418979" lastFinishedPulling="2025-03-17 17:58:21.837809449 +0000 UTC m=+29.597027578" observedRunningTime="2025-03-17 17:58:22.278834818 +0000 UTC m=+30.038052847" watchObservedRunningTime="2025-03-17 17:58:22.278903819 +0000 UTC m=+30.038121848" Mar 17 17:58:23.070562 kubelet[2750]: E0317 17:58:23.070480 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:23.600546 containerd[1802]: time="2025-03-17T17:58:23.600491901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:23.602413 containerd[1802]: time="2025-03-17T17:58:23.602350432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 17 17:58:23.605256 containerd[1802]: time="2025-03-17T17:58:23.605204779Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:23.609533 containerd[1802]: time="2025-03-17T17:58:23.609483251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:23.610519 containerd[1802]: time="2025-03-17T17:58:23.610086161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.771668902s" Mar 17 
17:58:23.610519 containerd[1802]: time="2025-03-17T17:58:23.610124861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 17 17:58:23.612359 containerd[1802]: time="2025-03-17T17:58:23.612322698Z" level=info msg="CreateContainer within sandbox \"c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:58:23.645988 containerd[1802]: time="2025-03-17T17:58:23.645947660Z" level=info msg="CreateContainer within sandbox \"c4243931c21041d1d1d01af445db3b83cf6e5b1a8846a9a81378747eb2e736be\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7fa54625c0d2645e916a8a3ae451aa517abca468a538042e6ced648984fcdcec\"" Mar 17 17:58:23.646464 containerd[1802]: time="2025-03-17T17:58:23.646439068Z" level=info msg="StartContainer for \"7fa54625c0d2645e916a8a3ae451aa517abca468a538042e6ced648984fcdcec\"" Mar 17 17:58:23.705880 containerd[1802]: time="2025-03-17T17:58:23.705826760Z" level=info msg="StartContainer for \"7fa54625c0d2645e916a8a3ae451aa517abca468a538042e6ced648984fcdcec\" returns successfully" Mar 17 17:58:24.071729 kubelet[2750]: E0317 17:58:24.071664 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:24.149255 kubelet[2750]: I0317 17:58:24.149219 2750 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:58:24.149255 kubelet[2750]: I0317 17:58:24.149256 2750 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:58:24.295798 kubelet[2750]: I0317 17:58:24.295749 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l6kcj" podStartSLOduration=24.257442482 podStartE2EDuration="31.295731217s" podCreationTimestamp="2025-03-17 17:57:53 +0000 UTC" firstStartedPulling="2025-03-17 17:58:16.57266344 +0000 UTC m=+24.331881569" lastFinishedPulling="2025-03-17 17:58:23.610952275 +0000 UTC m=+31.370170304" observedRunningTime="2025-03-17 17:58:24.295693316 +0000 UTC m=+32.054911345" watchObservedRunningTime="2025-03-17 17:58:24.295731217 +0000 UTC m=+32.054949346" Mar 17 17:58:25.072001 kubelet[2750]: E0317 17:58:25.071929 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:26.072757 kubelet[2750]: E0317 17:58:26.072683 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:27.073947 kubelet[2750]: E0317 17:58:27.073882 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:28.074944 kubelet[2750]: E0317 17:58:28.074876 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:29.075494 kubelet[2750]: E0317 17:58:29.075429 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:30.075923 kubelet[2750]: E0317 17:58:30.075852 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:31.076357 kubelet[2750]: E0317 17:58:31.076293 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:32.076775 kubelet[2750]: E0317 17:58:32.076705 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:32.692081 kubelet[2750]: I0317 17:58:32.692039 2750 topology_manager.go:215] "Topology Admit Handler" podUID="4becbfa9-1c09-4d0f-82b1-36bb4b3d809b" podNamespace="default" podName="nfs-server-provisioner-0" Mar 17 17:58:32.727372 kubelet[2750]: I0317 17:58:32.727280 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4becbfa9-1c09-4d0f-82b1-36bb4b3d809b-data\") pod \"nfs-server-provisioner-0\" (UID: \"4becbfa9-1c09-4d0f-82b1-36bb4b3d809b\") " pod="default/nfs-server-provisioner-0" Mar 17 17:58:32.727372 kubelet[2750]: I0317 17:58:32.727348 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b89lt\" (UniqueName: \"kubernetes.io/projected/4becbfa9-1c09-4d0f-82b1-36bb4b3d809b-kube-api-access-b89lt\") pod \"nfs-server-provisioner-0\" (UID: \"4becbfa9-1c09-4d0f-82b1-36bb4b3d809b\") " pod="default/nfs-server-provisioner-0" Mar 17 17:58:32.996353 containerd[1802]: time="2025-03-17T17:58:32.996195027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4becbfa9-1c09-4d0f-82b1-36bb4b3d809b,Namespace:default,Attempt:0,}" Mar 17 17:58:33.050040 kubelet[2750]: E0317 17:58:33.049994 2750 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:33.077508 kubelet[2750]: E0317 17:58:33.077430 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:33.132143 systemd-networkd[1363]: cali60e51b789ff: Link UP Mar 17 17:58:33.132391 systemd-networkd[1363]: cali60e51b789ff: Gained carrier Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.064 [INFO][4247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.34-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 4becbfa9-1c09-4d0f-82b1-36bb4b3d809b 1316 0 2025-03-17 17:58:32 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.8.34 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.34-k8s-nfs--server--provisioner--0-" Mar 17 17:58:33.144939 containerd[1802]: 
2025-03-17 17:58:33.064 [INFO][4247] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.34-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.088 [INFO][4259] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" HandleID="k8s-pod-network.258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Workload="10.200.8.34-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.100 [INFO][4259] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" HandleID="k8s-pod-network.258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Workload="10.200.8.34-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000313210), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.34", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 17:58:33.088208578 +0000 UTC"}, Hostname:"10.200.8.34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.100 [INFO][4259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.100 [INFO][4259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.101 [INFO][4259] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.34' Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.102 [INFO][4259] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" host="10.200.8.34" Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.105 [INFO][4259] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.34" Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.109 [INFO][4259] ipam/ipam.go 489: Trying affinity for 192.168.45.128/26 host="10.200.8.34" Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.110 [INFO][4259] ipam/ipam.go 155: Attempting to load block cidr=192.168.45.128/26 host="10.200.8.34" Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.112 [INFO][4259] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="10.200.8.34" Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.112 [INFO][4259] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" host="10.200.8.34" Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.113 [INFO][4259] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7 Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.118 [INFO][4259] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" host="10.200.8.34" 
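The nfs-server-provisioner endpoint above lists its service ports in decimal (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662); in the WorkloadEndpointPort structs dumped just below, the same ports are printed as Go hex literals. A small self-contained Go check that the two notations agree:

package main

import "fmt"

// Maps the hex port literals from the WorkloadEndpointPort dump below
// back to the decimal values listed in the endpoint definition above.
func main() {
	ports := []struct {
		name string
		hex  uint16
	}{
		{"nfs", 0x801},       // 2049
		{"nlockmgr", 0x8023}, // 32803
		{"mountd", 0x4e50},   // 20048
		{"rquotad", 0x36b},   // 875
		{"rpcbind", 0x6f},    // 111
		{"statd", 0x296},     // 662
	}
	for _, p := range ports {
		fmt.Printf("%-9s %#06x = %d\n", p.name, p.hex, p.hex)
	}
}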
Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.127 [INFO][4259] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.45.131/26] block=192.168.45.128/26 handle="k8s-pod-network.258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" host="10.200.8.34" Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.127 [INFO][4259] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.131/26] handle="k8s-pod-network.258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" host="10.200.8.34" Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.127 [INFO][4259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:58:33.144939 containerd[1802]: 2025-03-17 17:58:33.127 [INFO][4259] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.131/26] IPv6=[] ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" HandleID="k8s-pod-network.258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Workload="10.200.8.34-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:33.146023 containerd[1802]: 2025-03-17 17:58:33.128 [INFO][4247] cni-plugin/k8s.go 386: Populated endpoint ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.34-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.34-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"4becbfa9-1c09-4d0f-82b1-36bb4b3d809b", ResourceVersion:"1316", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.34", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.45.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:33.146023 containerd[1802]: 2025-03-17 17:58:33.128 [INFO][4247] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.45.131/32] ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.34-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:33.146023 containerd[1802]: 2025-03-17 17:58:33.128 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.34-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:33.146023 containerd[1802]: 2025-03-17 17:58:33.132 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.34-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:33.146411 containerd[1802]: 2025-03-17 17:58:33.133 [INFO][4247] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.34-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.34-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"4becbfa9-1c09-4d0f-82b1-36bb4b3d809b", ResourceVersion:"1316", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.34", ContainerID:"258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.45.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"c6:20:55:0c:65:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:58:33.146411 containerd[1802]: 2025-03-17 17:58:33.143 [INFO][4247] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.34-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:58:33.172543 containerd[1802]: time="2025-03-17T17:58:33.172226876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:58:33.172543 containerd[1802]: time="2025-03-17T17:58:33.172300578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:58:33.172543 containerd[1802]: time="2025-03-17T17:58:33.172331478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:33.173234 containerd[1802]: time="2025-03-17T17:58:33.172449780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:33.244343 containerd[1802]: time="2025-03-17T17:58:33.244292047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4becbfa9-1c09-4d0f-82b1-36bb4b3d809b,Namespace:default,Attempt:0,} returns sandbox id \"258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7\"" Mar 17 17:58:33.246010 containerd[1802]: time="2025-03-17T17:58:33.245944179Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 17:58:33.844096 systemd[1]: run-containerd-runc-k8s.io-258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7-runc.eeTd9o.mount: Deactivated successfully. Mar 17 17:58:34.077697 kubelet[2750]: E0317 17:58:34.077606 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:35.078503 kubelet[2750]: E0317 17:58:35.078451 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:35.153778 systemd-networkd[1363]: cali60e51b789ff: Gained IPv6LL Mar 17 17:58:35.755051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812874563.mount: Deactivated successfully. Mar 17 17:58:36.079550 kubelet[2750]: E0317 17:58:36.079292 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:37.080392 kubelet[2750]: E0317 17:58:37.080331 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:38.080921 kubelet[2750]: E0317 17:58:38.080478 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:38.339175 containerd[1802]: time="2025-03-17T17:58:38.339027974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:38.341914 containerd[1802]: time="2025-03-17T17:58:38.341832227Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Mar 17 17:58:38.345387 containerd[1802]: time="2025-03-17T17:58:38.345325994Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:38.351245 containerd[1802]: time="2025-03-17T17:58:38.351185905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:38.352502 containerd[1802]: time="2025-03-17T17:58:38.352119322Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.106139543s" Mar 17 17:58:38.352502 containerd[1802]: time="2025-03-17T17:58:38.352157323Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 17 17:58:38.354680 containerd[1802]: time="2025-03-17T17:58:38.354655270Z" level=info msg="CreateContainer within sandbox \"258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 17:58:38.389777 containerd[1802]: time="2025-03-17T17:58:38.389739535Z" level=info msg="CreateContainer within sandbox \"258b0e9a221dcfcf56afa4a5a964402a2d4a2f4361f0e1b348336cc989a9e8b7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1b7341ac3d2ead96bf663f078b3325729e81af0dd3b2c1f78155076fee679551\"" Mar 17 17:58:38.390215 containerd[1802]: time="2025-03-17T17:58:38.390155043Z" level=info msg="StartContainer for \"1b7341ac3d2ead96bf663f078b3325729e81af0dd3b2c1f78155076fee679551\"" Mar 17 17:58:38.442384 containerd[1802]: time="2025-03-17T17:58:38.442196729Z" level=info msg="StartContainer for \"1b7341ac3d2ead96bf663f078b3325729e81af0dd3b2c1f78155076fee679551\" returns successfully" Mar 17 17:58:39.081289 kubelet[2750]: E0317 17:58:39.081223 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:39.331014 kubelet[2750]: I0317 17:58:39.330957 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.223324398 podStartE2EDuration="7.330940569s" podCreationTimestamp="2025-03-17 17:58:32 +0000 UTC" firstStartedPulling="2025-03-17 17:58:33.245529171 +0000 UTC m=+41.004747300" lastFinishedPulling="2025-03-17 17:58:38.353145442 +0000 UTC m=+46.112363471" observedRunningTime="2025-03-17 17:58:39.328796628 +0000 UTC m=+47.088014657" watchObservedRunningTime="2025-03-17 17:58:39.330940569 +0000 UTC m=+47.090158598" Mar 17 17:58:40.081445 kubelet[2750]: E0317 17:58:40.081377 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:41.082229 kubelet[2750]: E0317 17:58:41.082165 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:42.083356 kubelet[2750]: E0317 17:58:42.083289 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:43.083588 kubelet[2750]: E0317 17:58:43.083531 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:44.084022 kubelet[2750]: E0317 17:58:44.083969 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:45.084870 kubelet[2750]: E0317 17:58:45.084816 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:46.085925 kubelet[2750]: E0317 17:58:46.085861 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:47.086480 kubelet[2750]: E0317 17:58:47.086414 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:48.087221 kubelet[2750]: E0317 17:58:48.087152 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:49.088071 kubelet[2750]: E0317 
17:58:49.088003 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:50.088839 kubelet[2750]: E0317 17:58:50.088781 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:51.089533 kubelet[2750]: E0317 17:58:51.089467 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:52.090406 kubelet[2750]: E0317 17:58:52.090336 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:53.049655 kubelet[2750]: E0317 17:58:53.049517 2750 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:53.073681 containerd[1802]: time="2025-03-17T17:58:53.073630353Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" Mar 17 17:58:53.074472 containerd[1802]: time="2025-03-17T17:58:53.073776355Z" level=info msg="TearDown network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" successfully" Mar 17 17:58:53.074472 containerd[1802]: time="2025-03-17T17:58:53.073795356Z" level=info msg="StopPodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" returns successfully" Mar 17 17:58:53.074472 containerd[1802]: time="2025-03-17T17:58:53.074319766Z" level=info msg="RemovePodSandbox for \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" Mar 17 17:58:53.074472 containerd[1802]: time="2025-03-17T17:58:53.074356267Z" level=info msg="Forcibly stopping sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\"" Mar 17 17:58:53.074858 containerd[1802]: time="2025-03-17T17:58:53.074450468Z" level=info msg="TearDown network for sandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" successfully" Mar 17 17:58:53.080100 containerd[1802]: time="2025-03-17T17:58:53.080067875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
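The "Observed pod startup duration" entry above for default/nfs-server-provisioner-0 reports two figures: an end-to-end duration of 7.330940569s and an SLO duration of 2.223324398s. The two are consistent once the image-pull window is taken out; the numbers reproduce exactly if the pull interval is taken from the monotonic m=+ offsets quoted in the same entry. A minimal sketch of that arithmetic, using only values copied from the log (the variable names are illustrative, not kubelet's):

```python
# Startup-latency figures for default/nfs-server-provisioner-0, taken from the
# "Observed pod startup duration" log entry above. All values are in seconds.
pod_created          = 32.000000000   # podCreationTimestamp 17:58:32, seconds past 17:58
watch_observed_run   = 39.330940569   # watchObservedRunningTime, seconds past 17:58
first_started_pull_m = 41.004747300   # firstStartedPulling, monotonic m=+ offset
last_finished_pull_m = 46.112363471   # lastFinishedPulling, monotonic m=+ offset

e2e  = watch_observed_run - pod_created                 # end-to-end startup time
pull = last_finished_pull_m - first_started_pull_m      # time spent pulling the image
slo  = e2e - pull                                       # startup time excluding the pull

print(f"podStartE2EDuration ~ {e2e:.9f}s")    # 7.330940569s, as logged
print(f"image pull          ~ {pull:.9f}s")   # 5.107616171s
print(f"podStartSLOduration ~ {slo:.9f}s")    # 2.223324398s, as logged
```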
Mar 17 17:58:53.080216 containerd[1802]: time="2025-03-17T17:58:53.080116276Z" level=info msg="RemovePodSandbox \"c59516de32cdb4da569c36b2fa2fecb8793673ccd9976a70283ef1855507bb9b\" returns successfully" Mar 17 17:58:53.080514 containerd[1802]: time="2025-03-17T17:58:53.080480483Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\"" Mar 17 17:58:53.080612 containerd[1802]: time="2025-03-17T17:58:53.080593585Z" level=info msg="TearDown network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" successfully" Mar 17 17:58:53.080670 containerd[1802]: time="2025-03-17T17:58:53.080613386Z" level=info msg="StopPodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" returns successfully" Mar 17 17:58:53.080931 containerd[1802]: time="2025-03-17T17:58:53.080900991Z" level=info msg="RemovePodSandbox for \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\"" Mar 17 17:58:53.080931 containerd[1802]: time="2025-03-17T17:58:53.080925292Z" level=info msg="Forcibly stopping sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\"" Mar 17 17:58:53.081052 containerd[1802]: time="2025-03-17T17:58:53.080996893Z" level=info msg="TearDown network for sandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" successfully" Mar 17 17:58:53.086713 containerd[1802]: time="2025-03-17T17:58:53.086686298Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:58:53.086802 containerd[1802]: time="2025-03-17T17:58:53.086725799Z" level=info msg="RemovePodSandbox \"c2aef4b2ed80018364343b9bced4f2c9c86edff8d5beaefb9d0a2bfd0762ce31\" returns successfully" Mar 17 17:58:53.087105 containerd[1802]: time="2025-03-17T17:58:53.087053905Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\"" Mar 17 17:58:53.087196 containerd[1802]: time="2025-03-17T17:58:53.087146507Z" level=info msg="TearDown network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" successfully" Mar 17 17:58:53.087196 containerd[1802]: time="2025-03-17T17:58:53.087161907Z" level=info msg="StopPodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" returns successfully" Mar 17 17:58:53.087523 containerd[1802]: time="2025-03-17T17:58:53.087494513Z" level=info msg="RemovePodSandbox for \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\"" Mar 17 17:58:53.087624 containerd[1802]: time="2025-03-17T17:58:53.087528614Z" level=info msg="Forcibly stopping sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\"" Mar 17 17:58:53.087695 containerd[1802]: time="2025-03-17T17:58:53.087628716Z" level=info msg="TearDown network for sandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" successfully" Mar 17 17:58:53.090783 kubelet[2750]: E0317 17:58:53.090758 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:53.093657 containerd[1802]: time="2025-03-17T17:58:53.093632526Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\": an error occurred when try to find 
sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:58:53.093775 containerd[1802]: time="2025-03-17T17:58:53.093668826Z" level=info msg="RemovePodSandbox \"253511c535d0190821e6e80ec3cb9d08f58b3ae2296f80ebdbc4c349db7df6a7\" returns successfully" Mar 17 17:58:53.093989 containerd[1802]: time="2025-03-17T17:58:53.093955132Z" level=info msg="StopPodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\"" Mar 17 17:58:53.094109 containerd[1802]: time="2025-03-17T17:58:53.094047333Z" level=info msg="TearDown network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" successfully" Mar 17 17:58:53.094176 containerd[1802]: time="2025-03-17T17:58:53.094118435Z" level=info msg="StopPodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" returns successfully" Mar 17 17:58:53.094393 containerd[1802]: time="2025-03-17T17:58:53.094368539Z" level=info msg="RemovePodSandbox for \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\"" Mar 17 17:58:53.094454 containerd[1802]: time="2025-03-17T17:58:53.094399140Z" level=info msg="Forcibly stopping sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\"" Mar 17 17:58:53.094512 containerd[1802]: time="2025-03-17T17:58:53.094468841Z" level=info msg="TearDown network for sandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" successfully" Mar 17 17:58:53.105523 containerd[1802]: time="2025-03-17T17:58:53.105464743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:58:53.105523 containerd[1802]: time="2025-03-17T17:58:53.105508944Z" level=info msg="RemovePodSandbox \"936646193290a3005e77086b8c6acb1680f45935a9a3a98ed0d6cbff8fd087bf\" returns successfully" Mar 17 17:58:53.107869 containerd[1802]: time="2025-03-17T17:58:53.107843086Z" level=info msg="StopPodSandbox for \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\"" Mar 17 17:58:53.107961 containerd[1802]: time="2025-03-17T17:58:53.107940588Z" level=info msg="TearDown network for sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\" successfully" Mar 17 17:58:53.107961 containerd[1802]: time="2025-03-17T17:58:53.107955388Z" level=info msg="StopPodSandbox for \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\" returns successfully" Mar 17 17:58:53.109067 containerd[1802]: time="2025-03-17T17:58:53.108480998Z" level=info msg="RemovePodSandbox for \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\"" Mar 17 17:58:53.109067 containerd[1802]: time="2025-03-17T17:58:53.108510999Z" level=info msg="Forcibly stopping sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\"" Mar 17 17:58:53.109067 containerd[1802]: time="2025-03-17T17:58:53.108604500Z" level=info msg="TearDown network for sandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\" successfully" Mar 17 17:58:53.116495 containerd[1802]: time="2025-03-17T17:58:53.116466545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:58:53.116596 containerd[1802]: time="2025-03-17T17:58:53.116512145Z" level=info msg="RemovePodSandbox \"e6a70ed1aa81f94017dbaaa2ebb7b003605cbc6b361218f140545387da15b506\" returns successfully" Mar 17 17:58:53.116864 containerd[1802]: time="2025-03-17T17:58:53.116843452Z" level=info msg="StopPodSandbox for \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\"" Mar 17 17:58:53.116991 containerd[1802]: time="2025-03-17T17:58:53.116968554Z" level=info msg="TearDown network for sandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\" successfully" Mar 17 17:58:53.116991 containerd[1802]: time="2025-03-17T17:58:53.116986354Z" level=info msg="StopPodSandbox for \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\" returns successfully" Mar 17 17:58:53.117321 containerd[1802]: time="2025-03-17T17:58:53.117245659Z" level=info msg="RemovePodSandbox for \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\"" Mar 17 17:58:53.117321 containerd[1802]: time="2025-03-17T17:58:53.117271459Z" level=info msg="Forcibly stopping sandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\"" Mar 17 17:58:53.117477 containerd[1802]: time="2025-03-17T17:58:53.117343761Z" level=info msg="TearDown network for sandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\" successfully" Mar 17 17:58:53.124710 containerd[1802]: time="2025-03-17T17:58:53.124684795Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:58:53.124805 containerd[1802]: time="2025-03-17T17:58:53.124720296Z" level=info msg="RemovePodSandbox \"a61fffefda39938a9c556c635981d9546894f80dfce3c1037f8daa8c069d5d81\" returns successfully" Mar 17 17:58:53.125136 containerd[1802]: time="2025-03-17T17:58:53.125060502Z" level=info msg="StopPodSandbox for \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\"" Mar 17 17:58:53.125214 containerd[1802]: time="2025-03-17T17:58:53.125155304Z" level=info msg="TearDown network for sandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\" successfully" Mar 17 17:58:53.125214 containerd[1802]: time="2025-03-17T17:58:53.125169804Z" level=info msg="StopPodSandbox for \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\" returns successfully" Mar 17 17:58:53.125504 containerd[1802]: time="2025-03-17T17:58:53.125418009Z" level=info msg="RemovePodSandbox for \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\"" Mar 17 17:58:53.125504 containerd[1802]: time="2025-03-17T17:58:53.125443709Z" level=info msg="Forcibly stopping sandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\"" Mar 17 17:58:53.125671 containerd[1802]: time="2025-03-17T17:58:53.125516011Z" level=info msg="TearDown network for sandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\" successfully" Mar 17 17:58:53.130973 containerd[1802]: time="2025-03-17T17:58:53.130947210Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:58:53.131061 containerd[1802]: time="2025-03-17T17:58:53.130988411Z" level=info msg="RemovePodSandbox \"912cf17ba11bd6045947d496fb09d74161ac13ce283b50a1d70f8365e282aa4c\" returns successfully" Mar 17 17:58:53.131333 containerd[1802]: time="2025-03-17T17:58:53.131300317Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" Mar 17 17:58:53.131422 containerd[1802]: time="2025-03-17T17:58:53.131393818Z" level=info msg="TearDown network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" successfully" Mar 17 17:58:53.131422 containerd[1802]: time="2025-03-17T17:58:53.131412719Z" level=info msg="StopPodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" returns successfully" Mar 17 17:58:53.131750 containerd[1802]: time="2025-03-17T17:58:53.131655823Z" level=info msg="RemovePodSandbox for \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" Mar 17 17:58:53.131750 containerd[1802]: time="2025-03-17T17:58:53.131682224Z" level=info msg="Forcibly stopping sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\"" Mar 17 17:58:53.131889 containerd[1802]: time="2025-03-17T17:58:53.131761525Z" level=info msg="TearDown network for sandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" successfully" Mar 17 17:58:53.138681 containerd[1802]: time="2025-03-17T17:58:53.138637151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:58:53.138789 containerd[1802]: time="2025-03-17T17:58:53.138685452Z" level=info msg="RemovePodSandbox \"2ed12f033f8e7fffd0681354f2c830b129161b61b99a2f919be63a2554532137\" returns successfully" Mar 17 17:58:53.139190 containerd[1802]: time="2025-03-17T17:58:53.138980758Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\"" Mar 17 17:58:53.139190 containerd[1802]: time="2025-03-17T17:58:53.139127760Z" level=info msg="TearDown network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" successfully" Mar 17 17:58:53.139190 containerd[1802]: time="2025-03-17T17:58:53.139142361Z" level=info msg="StopPodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" returns successfully" Mar 17 17:58:53.139463 containerd[1802]: time="2025-03-17T17:58:53.139438966Z" level=info msg="RemovePodSandbox for \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\"" Mar 17 17:58:53.139524 containerd[1802]: time="2025-03-17T17:58:53.139470767Z" level=info msg="Forcibly stopping sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\"" Mar 17 17:58:53.139675 containerd[1802]: time="2025-03-17T17:58:53.139539868Z" level=info msg="TearDown network for sandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" successfully" Mar 17 17:58:53.144277 containerd[1802]: time="2025-03-17T17:58:53.144252654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:58:53.144392 containerd[1802]: time="2025-03-17T17:58:53.144289955Z" level=info msg="RemovePodSandbox \"72046f0e36d03417193b51b113b10097e72f3c98da4f426664b50ab6ab76b830\" returns successfully" Mar 17 17:58:53.144656 containerd[1802]: time="2025-03-17T17:58:53.144624161Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\"" Mar 17 17:58:53.144742 containerd[1802]: time="2025-03-17T17:58:53.144712463Z" level=info msg="TearDown network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" successfully" Mar 17 17:58:53.144742 containerd[1802]: time="2025-03-17T17:58:53.144726563Z" level=info msg="StopPodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" returns successfully" Mar 17 17:58:53.145019 containerd[1802]: time="2025-03-17T17:58:53.144951167Z" level=info msg="RemovePodSandbox for \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\"" Mar 17 17:58:53.145019 containerd[1802]: time="2025-03-17T17:58:53.144982968Z" level=info msg="Forcibly stopping sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\"" Mar 17 17:58:53.145138 containerd[1802]: time="2025-03-17T17:58:53.145084970Z" level=info msg="TearDown network for sandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" successfully" Mar 17 17:58:53.157269 containerd[1802]: time="2025-03-17T17:58:53.157233192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:58:53.157394 containerd[1802]: time="2025-03-17T17:58:53.157286193Z" level=info msg="RemovePodSandbox \"7be83d0e4f570f407136b16785c8767470a019f8fcef09ad709dacff818da73b\" returns successfully" Mar 17 17:58:53.157691 containerd[1802]: time="2025-03-17T17:58:53.157666400Z" level=info msg="StopPodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\"" Mar 17 17:58:53.157793 containerd[1802]: time="2025-03-17T17:58:53.157771102Z" level=info msg="TearDown network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" successfully" Mar 17 17:58:53.157793 containerd[1802]: time="2025-03-17T17:58:53.157786903Z" level=info msg="StopPodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" returns successfully" Mar 17 17:58:53.158601 containerd[1802]: time="2025-03-17T17:58:53.158099408Z" level=info msg="RemovePodSandbox for \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\"" Mar 17 17:58:53.158601 containerd[1802]: time="2025-03-17T17:58:53.158127409Z" level=info msg="Forcibly stopping sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\"" Mar 17 17:58:53.158601 containerd[1802]: time="2025-03-17T17:58:53.158179310Z" level=info msg="TearDown network for sandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" successfully" Mar 17 17:58:53.166738 containerd[1802]: time="2025-03-17T17:58:53.166710166Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:58:53.166828 containerd[1802]: time="2025-03-17T17:58:53.166757767Z" level=info msg="RemovePodSandbox \"f0da53ab0f88801e3bae6b4db09f0e4ff30179272698310a76ef887e74623562\" returns successfully" Mar 17 17:58:53.167264 containerd[1802]: time="2025-03-17T17:58:53.167104674Z" level=info msg="StopPodSandbox for \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\"" Mar 17 17:58:53.167264 containerd[1802]: time="2025-03-17T17:58:53.167196175Z" level=info msg="TearDown network for sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\" successfully" Mar 17 17:58:53.167264 containerd[1802]: time="2025-03-17T17:58:53.167208475Z" level=info msg="StopPodSandbox for \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\" returns successfully" Mar 17 17:58:53.167518 containerd[1802]: time="2025-03-17T17:58:53.167495381Z" level=info msg="RemovePodSandbox for \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\"" Mar 17 17:58:53.167606 containerd[1802]: time="2025-03-17T17:58:53.167522881Z" level=info msg="Forcibly stopping sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\"" Mar 17 17:58:53.167656 containerd[1802]: time="2025-03-17T17:58:53.167613483Z" level=info msg="TearDown network for sandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\" successfully" Mar 17 17:58:53.175487 containerd[1802]: time="2025-03-17T17:58:53.175460727Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:58:53.175563 containerd[1802]: time="2025-03-17T17:58:53.175501928Z" level=info msg="RemovePodSandbox \"a9a856961f1ab94189953ba041d265c94c8790fd541fb5121fc156a3fda2332b\" returns successfully" Mar 17 17:58:53.175900 containerd[1802]: time="2025-03-17T17:58:53.175855034Z" level=info msg="StopPodSandbox for \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\"" Mar 17 17:58:53.176254 containerd[1802]: time="2025-03-17T17:58:53.175948436Z" level=info msg="TearDown network for sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\" successfully" Mar 17 17:58:53.176254 containerd[1802]: time="2025-03-17T17:58:53.176014637Z" level=info msg="StopPodSandbox for \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\" returns successfully" Mar 17 17:58:53.177688 containerd[1802]: time="2025-03-17T17:58:53.176615648Z" level=info msg="RemovePodSandbox for \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\"" Mar 17 17:58:53.177688 containerd[1802]: time="2025-03-17T17:58:53.176640649Z" level=info msg="Forcibly stopping sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\"" Mar 17 17:58:53.177688 containerd[1802]: time="2025-03-17T17:58:53.176718250Z" level=info msg="TearDown network for sandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\" successfully" Mar 17 17:58:53.184921 containerd[1802]: time="2025-03-17T17:58:53.184726497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
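The 17:58:53 burst of containerd entries above looks like kubelet's periodic sandbox garbage collection: each stale pod sandbox gets a StopPodSandbox followed by a forced RemovePodSandbox, and the "an error occurred when try to find sandbox: not found" warnings are benign here, since the log shows every removal still returning successfully. Below is a small, purely illustrative parser (not part of any Kubernetes or Flatcar tooling) that pairs those entries up from raw journal text; note that inside containerd's msg="..." field the quotes around sandbox IDs appear escaped as \", so the patterns match a backslash-quote on either side of the 64-hex-character ID.

```python
import re
import sys

# Pair up containerd's sandbox-cleanup entries from raw journal text on stdin,
# e.g.:  journalctl -u containerd --no-pager | python3 pair_sandboxes.py
# (the script name is hypothetical; any filename works).
STOPPED = re.compile(r'Forcibly stopping sandbox \\"([0-9a-f]{64})\\"')
REMOVED = re.compile(r'RemovePodSandbox \\"([0-9a-f]{64})\\" returns successfully')

stops, removals = set(), set()
for line in sys.stdin:
    stops.update(STOPPED.findall(line))
    removals.update(REMOVED.findall(line))

print(f"forcibly stopped: {len(stops)}  removed: {len(removals)}")
for sandbox_id in sorted(stops - removals):
    print("stopped but never removed:", sandbox_id)
```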
Mar 17 17:58:53.184921 containerd[1802]: time="2025-03-17T17:58:53.184785198Z" level=info msg="RemovePodSandbox \"d8f2052c2e3c73f61db629184943f3e4817d72a3fc14f20ff39919ee538fb6ff\" returns successfully" Mar 17 17:58:53.186732 containerd[1802]: time="2025-03-17T17:58:53.185507911Z" level=info msg="StopPodSandbox for \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\"" Mar 17 17:58:53.187295 containerd[1802]: time="2025-03-17T17:58:53.187243943Z" level=info msg="TearDown network for sandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\" successfully" Mar 17 17:58:53.187295 containerd[1802]: time="2025-03-17T17:58:53.187290344Z" level=info msg="StopPodSandbox for \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\" returns successfully" Mar 17 17:58:53.190368 containerd[1802]: time="2025-03-17T17:58:53.190324800Z" level=info msg="RemovePodSandbox for \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\"" Mar 17 17:58:53.190436 containerd[1802]: time="2025-03-17T17:58:53.190379901Z" level=info msg="Forcibly stopping sandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\"" Mar 17 17:58:53.190510 containerd[1802]: time="2025-03-17T17:58:53.190466602Z" level=info msg="TearDown network for sandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\" successfully" Mar 17 17:58:53.200741 containerd[1802]: time="2025-03-17T17:58:53.200708990Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:58:53.200853 containerd[1802]: time="2025-03-17T17:58:53.200755891Z" level=info msg="RemovePodSandbox \"72f482c4e5963be043e4038c6e21cc075b04272ee059d1f1d890ed9ed8616f64\" returns successfully" Mar 17 17:58:54.091017 kubelet[2750]: E0317 17:58:54.090958 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:55.091707 kubelet[2750]: E0317 17:58:55.091649 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:56.092431 kubelet[2750]: E0317 17:58:56.092355 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:57.093001 kubelet[2750]: E0317 17:58:57.092931 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:58.093758 kubelet[2750]: E0317 17:58:58.093692 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:58:59.094306 kubelet[2750]: E0317 17:58:59.094247 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:00.094924 kubelet[2750]: E0317 17:59:00.094864 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:01.095195 kubelet[2750]: E0317 17:59:01.095089 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:02.095405 kubelet[2750]: E0317 17:59:02.095354 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 
17 17:59:03.095991 kubelet[2750]: E0317 17:59:03.095914 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:03.664236 kubelet[2750]: I0317 17:59:03.664176 2750 topology_manager.go:215] "Topology Admit Handler" podUID="865d2af9-adc0-4cc5-85ee-9561cabc73f1" podNamespace="default" podName="test-pod-1" Mar 17 17:59:03.716349 kubelet[2750]: I0317 17:59:03.716311 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36a18508-1966-4b39-ae7b-636dc5569468\" (UniqueName: \"kubernetes.io/nfs/865d2af9-adc0-4cc5-85ee-9561cabc73f1-pvc-36a18508-1966-4b39-ae7b-636dc5569468\") pod \"test-pod-1\" (UID: \"865d2af9-adc0-4cc5-85ee-9561cabc73f1\") " pod="default/test-pod-1" Mar 17 17:59:03.716349 kubelet[2750]: I0317 17:59:03.716356 2750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d92h\" (UniqueName: \"kubernetes.io/projected/865d2af9-adc0-4cc5-85ee-9561cabc73f1-kube-api-access-2d92h\") pod \"test-pod-1\" (UID: \"865d2af9-adc0-4cc5-85ee-9561cabc73f1\") " pod="default/test-pod-1" Mar 17 17:59:03.865603 kernel: FS-Cache: Loaded Mar 17 17:59:03.941403 kernel: RPC: Registered named UNIX socket transport module. Mar 17 17:59:03.941542 kernel: RPC: Registered udp transport module. Mar 17 17:59:03.941588 kernel: RPC: Registered tcp transport module. Mar 17 17:59:03.944851 kernel: RPC: Registered tcp-with-tls transport module. Mar 17 17:59:03.944936 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Mar 17 17:59:04.097222 kubelet[2750]: E0317 17:59:04.097136 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:04.251372 kernel: NFS: Registering the id_resolver key type Mar 17 17:59:04.251515 kernel: Key type id_resolver registered Mar 17 17:59:04.251537 kernel: Key type id_legacy registered Mar 17 17:59:04.324092 nfsidmap[4479]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.2-a-99edcdcd5a' Mar 17 17:59:04.345412 nfsidmap[4480]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.2-a-99edcdcd5a' Mar 17 17:59:04.569002 containerd[1802]: time="2025-03-17T17:59:04.568950776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:865d2af9-adc0-4cc5-85ee-9561cabc73f1,Namespace:default,Attempt:0,}" Mar 17 17:59:04.697198 systemd-networkd[1363]: cali5ec59c6bf6e: Link UP Mar 17 17:59:04.697420 systemd-networkd[1363]: cali5ec59c6bf6e: Gained carrier Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.630 [INFO][4481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.34-k8s-test--pod--1-eth0 default 865d2af9-adc0-4cc5-85ee-9561cabc73f1 1417 0 2025-03-17 17:58:34 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.34 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.34-k8s-test--pod--1-" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.630 [INFO][4481] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.34-k8s-test--pod--1-eth0" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.654 [INFO][4493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" HandleID="k8s-pod-network.78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Workload="10.200.8.34-k8s-test--pod--1-eth0" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.664 [INFO][4493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" HandleID="k8s-pod-network.78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Workload="10.200.8.34-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b10), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.34", "pod":"test-pod-1", "timestamp":"2025-03-17 17:59:04.65430231 +0000 UTC"}, Hostname:"10.200.8.34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.664 [INFO][4493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.664 [INFO][4493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.664 [INFO][4493] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.34' Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.665 [INFO][4493] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" host="10.200.8.34" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.669 [INFO][4493] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.34" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.672 [INFO][4493] ipam/ipam.go 489: Trying affinity for 192.168.45.128/26 host="10.200.8.34" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.674 [INFO][4493] ipam/ipam.go 155: Attempting to load block cidr=192.168.45.128/26 host="10.200.8.34" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.675 [INFO][4493] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="10.200.8.34" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.676 [INFO][4493] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" host="10.200.8.34" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.679 [INFO][4493] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67 Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.683 [INFO][4493] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" host="10.200.8.34" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.691 [INFO][4493] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.45.132/26] block=192.168.45.128/26 
handle="k8s-pod-network.78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" host="10.200.8.34" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.691 [INFO][4493] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.132/26] handle="k8s-pod-network.78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" host="10.200.8.34" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.691 [INFO][4493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.691 [INFO][4493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.132/26] IPv6=[] ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" HandleID="k8s-pod-network.78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Workload="10.200.8.34-k8s-test--pod--1-eth0" Mar 17 17:59:04.709008 containerd[1802]: 2025-03-17 17:59:04.693 [INFO][4481] cni-plugin/k8s.go 386: Populated endpoint ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.34-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.34-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"865d2af9-adc0-4cc5-85ee-9561cabc73f1", ResourceVersion:"1417", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.34", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:59:04.710050 containerd[1802]: 2025-03-17 17:59:04.693 [INFO][4481] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.45.132/32] ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.34-k8s-test--pod--1-eth0" Mar 17 17:59:04.710050 containerd[1802]: 2025-03-17 17:59:04.693 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.34-k8s-test--pod--1-eth0" Mar 17 17:59:04.710050 containerd[1802]: 2025-03-17 17:59:04.695 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.34-k8s-test--pod--1-eth0" Mar 17 17:59:04.710050 containerd[1802]: 2025-03-17 17:59:04.695 [INFO][4481] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.34-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.34-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"865d2af9-adc0-4cc5-85ee-9561cabc73f1", ResourceVersion:"1417", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.34", ContainerID:"78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"a2:fb:ea:d9:b1:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:59:04.710050 containerd[1802]: 2025-03-17 17:59:04.705 [INFO][4481] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.34-k8s-test--pod--1-eth0" Mar 17 17:59:04.734859 containerd[1802]: time="2025-03-17T17:59:04.734718061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:59:04.735158 containerd[1802]: time="2025-03-17T17:59:04.734926565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:59:04.735158 containerd[1802]: time="2025-03-17T17:59:04.734971265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:59:04.735158 containerd[1802]: time="2025-03-17T17:59:04.735117668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:59:04.784900 containerd[1802]: time="2025-03-17T17:59:04.784866604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:865d2af9-adc0-4cc5-85ee-9561cabc73f1,Namespace:default,Attempt:0,} returns sandbox id \"78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67\"" Mar 17 17:59:04.786676 containerd[1802]: time="2025-03-17T17:59:04.786643033Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:59:05.098483 kubelet[2750]: E0317 17:59:05.098315 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:05.273351 containerd[1802]: time="2025-03-17T17:59:05.273297409Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:05.275841 containerd[1802]: time="2025-03-17T17:59:05.275776151Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 17 17:59:05.280589 containerd[1802]: time="2025-03-17T17:59:05.278998805Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 492.318971ms" Mar 17 17:59:05.280589 containerd[1802]: time="2025-03-17T17:59:05.279036306Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 17:59:05.285441 containerd[1802]: time="2025-03-17T17:59:05.285412913Z" level=info msg="CreateContainer within sandbox \"78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 17:59:05.314354 containerd[1802]: time="2025-03-17T17:59:05.314302098Z" level=info msg="CreateContainer within sandbox \"78d138d58730fa58b5bb2b40745c4088328b5b5c01be1e28492d02bd1641cc67\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"708eea9d8f6f387d38e1873908bc4de6a810aa8fe25e81db22c27934e2c31fae\"" Mar 17 17:59:05.315053 containerd[1802]: time="2025-03-17T17:59:05.314942809Z" level=info msg="StartContainer for \"708eea9d8f6f387d38e1873908bc4de6a810aa8fe25e81db22c27934e2c31fae\"" Mar 17 17:59:05.371987 containerd[1802]: time="2025-03-17T17:59:05.371037851Z" level=info msg="StartContainer for \"708eea9d8f6f387d38e1873908bc4de6a810aa8fe25e81db22c27934e2c31fae\" returns successfully" Mar 17 17:59:05.745853 systemd-networkd[1363]: cali5ec59c6bf6e: Gained IPv6LL Mar 17 17:59:06.099154 kubelet[2750]: E0317 17:59:06.099002 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:06.387403 kubelet[2750]: I0317 17:59:06.387245 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=31.89121089 podStartE2EDuration="32.387224523s" podCreationTimestamp="2025-03-17 17:58:34 +0000 UTC" firstStartedPulling="2025-03-17 17:59:04.786359829 +0000 UTC m=+72.545577858" lastFinishedPulling="2025-03-17 17:59:05.282373462 +0000 UTC m=+73.041591491" observedRunningTime="2025-03-17 17:59:06.387209723 +0000 UTC m=+74.146427752" watchObservedRunningTime="2025-03-17 17:59:06.387224523 +0000 UTC 
m=+74.146442552" Mar 17 17:59:07.099374 kubelet[2750]: E0317 17:59:07.099287 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:08.100003 kubelet[2750]: E0317 17:59:08.099935 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:09.100285 kubelet[2750]: E0317 17:59:09.100149 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:59:10.101143 kubelet[2750]: E0317 17:59:10.101081 2750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
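For the test-pod-1 sandbox created at 17:59:04 above, Calico's IPAM plugin confirms that host 10.200.8.34 holds an affinity for the block 192.168.45.128/26, claims 192.168.45.132/26 out of it, and the resulting WorkloadEndpoint records 192.168.45.132/32 as the pod address on cali5ec59c6bf6e. A quick sanity check of those figures with Python's ipaddress module (purely illustrative; all values are copied from the entries above):

```python
import ipaddress

# Figures from the Calico ipam/ipam.go entries above (17:59:04).
affinity_block = ipaddress.ip_network("192.168.45.128/26")   # block affine to host 10.200.8.34
assigned_ip    = ipaddress.ip_address("192.168.45.132")      # address claimed for test-pod-1

print(affinity_block.num_addresses)               # 64 addresses in a /26 block
print(assigned_ip in affinity_block)              # True: the claim stays inside the affine block
print(ipaddress.ip_network("192.168.45.132/32"))  # what the WorkloadEndpoint stores in IPNetworks
```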