Feb 13 19:56:48.113328 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:56:48.113359 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:56:48.113369 kernel: BIOS-provided physical RAM map:
Feb 13 19:56:48.113378 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:56:48.113384 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 13 19:56:48.113390 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 13 19:56:48.113400 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 13 19:56:48.113407 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 13 19:56:48.115705 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 13 19:56:48.115716 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 13 19:56:48.115722 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 13 19:56:48.115729 kernel: printk: bootconsole [earlyser0] enabled
Feb 13 19:56:48.115738 kernel: NX (Execute Disable) protection: active
Feb 13 19:56:48.115745 kernel: APIC: Static calls initialized
Feb 13 19:56:48.115760 kernel: efi: EFI v2.7 by Microsoft
Feb 13 19:56:48.115771 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018
Feb 13 19:56:48.115778 kernel: random: crng init done
Feb 13 19:56:48.115788 kernel: secureboot: Secure boot disabled
Feb 13 19:56:48.115796 kernel: SMBIOS 3.1.0 present.
Feb 13 19:56:48.115805 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Feb 13 19:56:48.115814 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 13 19:56:48.115823 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 13 19:56:48.115830 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0
Feb 13 19:56:48.115840 kernel: Hyper-V: Nested features: 0x1e0101
Feb 13 19:56:48.115850 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 13 19:56:48.115858 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 13 19:56:48.115867 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 19:56:48.115874 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 19:56:48.115885 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 13 19:56:48.115893 kernel: tsc: Detected 2593.907 MHz processor
Feb 13 19:56:48.115901 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:56:48.115912 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:56:48.115919 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 13 19:56:48.115930 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 19:56:48.115939 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:56:48.115947 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 13 19:56:48.115954 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 13 19:56:48.115964 kernel: Using GB pages for direct mapping
Feb 13 19:56:48.115971 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:56:48.115980 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 13 19:56:48.115993 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:56:48.116006 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:56:48.116014 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 13 19:56:48.116022 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 13 19:56:48.116032 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:56:48.116040 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:56:48.116050 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:56:48.116061 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:56:48.116070 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:56:48.116080 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:56:48.116087 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:56:48.116095 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 13 19:56:48.116102 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 13 19:56:48.116113 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 13 19:56:48.116120 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 13 19:56:48.116129 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 13 19:56:48.116141 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 13 19:56:48.116150 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 13 19:56:48.116159 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 13 19:56:48.116167 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 13 19:56:48.116176 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 13 19:56:48.116185 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:56:48.116193 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:56:48.116203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 13 19:56:48.116211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 13 19:56:48.116224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 13 19:56:48.116233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 13 19:56:48.116241 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 13 19:56:48.116251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 13 19:56:48.116258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 13 19:56:48.116268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 13 19:56:48.116277 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 13 19:56:48.116284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 13 19:56:48.116298 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 13 19:56:48.116305 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 13 19:56:48.116315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 13 19:56:48.116323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 13 19:56:48.116331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 13 19:56:48.116341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 13 19:56:48.116352 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 13 19:56:48.116362 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 13 19:56:48.116370 kernel: Zone ranges:
Feb 13 19:56:48.116383 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:56:48.116392 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 19:56:48.116399 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 19:56:48.116418 kernel: Movable zone start for each node
Feb 13 19:56:48.116427 kernel: Early memory node ranges
Feb 13 19:56:48.116437 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 19:56:48.116444 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 13 19:56:48.116454 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 13 19:56:48.116462 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 19:56:48.116475 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 13 19:56:48.116484 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:56:48.116491 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 19:56:48.116502 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 13 19:56:48.116509 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 13 19:56:48.116520 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 13 19:56:48.116528 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:56:48.116535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:56:48.116545 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:56:48.116556 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 13 19:56:48.116566 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 19:56:48.116575 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 13 19:56:48.116582 kernel: Booting paravirtualized kernel on Hyper-V
Feb 13 19:56:48.116593 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:56:48.116600 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 19:56:48.116608 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 19:56:48.116618 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 19:56:48.116626 kernel: pcpu-alloc: [0] 0 1
Feb 13 19:56:48.116638 kernel: Hyper-V: PV spinlocks enabled
Feb 13 19:56:48.116646 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:56:48.116656 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:56:48.116666 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:56:48.116673 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 19:56:48.116683 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:56:48.116691 kernel: Fallback order for Node 0: 0
Feb 13 19:56:48.116699 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 13 19:56:48.116712 kernel: Policy zone: Normal
Feb 13 19:56:48.116732 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:56:48.116740 kernel: software IO TLB: area num 2.
Feb 13 19:56:48.116754 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 312164K reserved, 0K cma-reserved)
Feb 13 19:56:48.116763 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:56:48.116771 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:56:48.116781 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:56:48.116789 kernel: Dynamic Preempt: voluntary
Feb 13 19:56:48.116798 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:56:48.116808 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:56:48.116818 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:56:48.116833 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:56:48.116841 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:56:48.116851 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:56:48.116860 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:56:48.116868 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:56:48.116878 kernel: Using NULL legacy PIC
Feb 13 19:56:48.116890 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 13 19:56:48.116899 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:56:48.116909 kernel: Console: colour dummy device 80x25
Feb 13 19:56:48.116920 kernel: printk: console [tty1] enabled
Feb 13 19:56:48.116928 kernel: printk: console [ttyS0] enabled
Feb 13 19:56:48.116937 kernel: printk: bootconsole [earlyser0] disabled
Feb 13 19:56:48.116947 kernel: ACPI: Core revision 20230628
Feb 13 19:56:48.116955 kernel: Failed to register legacy timer interrupt
Feb 13 19:56:48.116966 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:56:48.116977 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 19:56:48.116987 kernel: Hyper-V: Using IPI hypercalls
Feb 13 19:56:48.116996 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Feb 13 19:56:48.117004 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Feb 13 19:56:48.117015 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Feb 13 19:56:48.117023 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Feb 13 19:56:48.117033 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Feb 13 19:56:48.117042 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Feb 13 19:56:48.117050 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Feb 13 19:56:48.117064 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 19:56:48.117072 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 19:56:48.117083 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:56:48.117091 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:56:48.117099 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:56:48.117110 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:56:48.117118 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 19:56:48.117129 kernel: RETBleed: Vulnerable
Feb 13 19:56:48.117137 kernel: Speculative Store Bypass: Vulnerable
Feb 13 19:56:48.117145 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:56:48.117158 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:56:48.117166 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:56:48.117176 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:56:48.117184 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:56:48.117194 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 19:56:48.117203 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 19:56:48.117211 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 19:56:48.117222 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:56:48.117230 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 13 19:56:48.117238 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 13 19:56:48.117248 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 13 19:56:48.117259 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 13 19:56:48.117270 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:56:48.117277 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:56:48.117285 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:56:48.117293 kernel: landlock: Up and running.
Feb 13 19:56:48.117301 kernel: SELinux: Initializing.
Feb 13 19:56:48.117309 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:56:48.117316 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:56:48.117324 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 19:56:48.117333 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:56:48.117341 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:56:48.117352 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:56:48.117360 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 19:56:48.117369 kernel: signal: max sigframe size: 3632
Feb 13 19:56:48.117379 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:56:48.117388 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:56:48.117398 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:56:48.117407 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:56:48.119622 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:56:48.119635 kernel: .... node #0, CPUs: #1
Feb 13 19:56:48.119652 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 13 19:56:48.119662 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:56:48.119671 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:56:48.119682 kernel: smpboot: Max logical packages: 1
Feb 13 19:56:48.119691 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 13 19:56:48.119701 kernel: devtmpfs: initialized
Feb 13 19:56:48.119710 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:56:48.119718 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 13 19:56:48.119732 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:56:48.119740 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:56:48.119751 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:56:48.119760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:56:48.119768 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:56:48.119779 kernel: audit: type=2000 audit(1739476607.028:1): state=initialized audit_enabled=0 res=1
Feb 13 19:56:48.119787 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:56:48.119797 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:56:48.119806 kernel: cpuidle: using governor menu
Feb 13 19:56:48.119817 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:56:48.119828 kernel: dca service started, version 1.12.1
Feb 13 19:56:48.119836 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Feb 13 19:56:48.119845 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:56:48.119853 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:56:48.119861 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:56:48.119869 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:56:48.119878 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:56:48.119888 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:56:48.119900 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:56:48.119908 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:56:48.119920 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:56:48.119928 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:56:48.119938 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:56:48.119948 kernel: ACPI: Interpreter enabled
Feb 13 19:56:48.119957 kernel: ACPI: PM: (supports S0 S5)
Feb 13 19:56:48.119968 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:56:48.119977 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:56:48.119990 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 19:56:48.120000 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 13 19:56:48.120009 kernel: iommu: Default domain type: Translated
Feb 13 19:56:48.120018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:56:48.120028 kernel: efivars: Registered efivars operations
Feb 13 19:56:48.120037 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:56:48.120045 kernel: PCI: System does not support PCI
Feb 13 19:56:48.120055 kernel: vgaarb: loaded
Feb 13 19:56:48.120064 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 13 19:56:48.120078 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:56:48.120086 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:56:48.120094 kernel: pnp: PnP ACPI init
Feb 13 19:56:48.120105 kernel: pnp: PnP ACPI: found 3 devices
Feb 13 19:56:48.120114 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:56:48.120123 kernel: NET: Registered PF_INET protocol family
Feb 13 19:56:48.120133 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:56:48.120141 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 19:56:48.120152 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:56:48.120164 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:56:48.120175 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 19:56:48.120183 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 19:56:48.120193 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 19:56:48.120203 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 19:56:48.120211 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:56:48.120222 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:56:48.120230 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:56:48.120241 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 19:56:48.120253 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Feb 13 19:56:48.120263 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:56:48.120272 kernel: Initialise system trusted keyrings
Feb 13 19:56:48.120280 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 19:56:48.120291 kernel: Key type asymmetric registered
Feb 13 19:56:48.120299 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:56:48.120308 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:56:48.120318 kernel: io scheduler mq-deadline registered
Feb 13 19:56:48.120326 kernel: io scheduler kyber registered
Feb 13 19:56:48.120339 kernel: io scheduler bfq registered
Feb 13 19:56:48.120347 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:56:48.120358 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:56:48.120367 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:56:48.120378 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 19:56:48.120387 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 19:56:48.120548 kernel: rtc_cmos 00:02: registered as rtc0
Feb 13 19:56:48.120642 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T19:56:47 UTC (1739476607)
Feb 13 19:56:48.120730 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 13 19:56:48.120744 kernel: intel_pstate: CPU model not supported
Feb 13 19:56:48.120752 kernel: efifb: probing for efifb
Feb 13 19:56:48.120764 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 19:56:48.120772 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 19:56:48.120781 kernel: efifb: scrolling: redraw
Feb 13 19:56:48.120792 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 19:56:48.120802 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 19:56:48.120811 kernel: fb0: EFI VGA frame buffer device
Feb 13 19:56:48.120826 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:56:48.120837 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:56:48.120845 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:56:48.120855 kernel: Segment Routing with IPv6
Feb 13 19:56:48.120866 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:56:48.120875 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:56:48.120885 kernel: Key type dns_resolver registered
Feb 13 19:56:48.120894 kernel: IPI shorthand broadcast: enabled
Feb 13 19:56:48.120903 kernel: sched_clock: Marking stable (986029400, 48870400)->(1260575000, -225675200)
Feb 13 19:56:48.120916 kernel: registered taskstats version 1
Feb 13 19:56:48.120925 kernel: Loading compiled-in X.509 certificates
Feb 13 19:56:48.120936 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d'
Feb 13 19:56:48.120944 kernel: Key type .fscrypt registered
Feb 13 19:56:48.120952 kernel: Key type fscrypt-provisioning registered
Feb 13 19:56:48.120963 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:56:48.120971 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:56:48.120981 kernel: ima: No architecture policies found
Feb 13 19:56:48.120993 kernel: clk: Disabling unused clocks
Feb 13 19:56:48.121003 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 19:56:48.121013 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:56:48.121021 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 19:56:48.121032 kernel: Run /init as init process
Feb 13 19:56:48.121040 kernel: with arguments:
Feb 13 19:56:48.121050 kernel: /init
Feb 13 19:56:48.121059 kernel: with environment:
Feb 13 19:56:48.121067 kernel: HOME=/
Feb 13 19:56:48.121078 kernel: TERM=linux
Feb 13 19:56:48.121088 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:56:48.121102 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:56:48.121115 systemd[1]: Detected virtualization microsoft.
Feb 13 19:56:48.121125 systemd[1]: Detected architecture x86-64.
Feb 13 19:56:48.121137 systemd[1]: Running in initrd.
Feb 13 19:56:48.121147 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:56:48.121158 systemd[1]: Hostname set to .
Feb 13 19:56:48.123668 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:56:48.123679 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:56:48.123691 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:56:48.123701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:56:48.123713 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:56:48.123723 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:56:48.123733 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:56:48.123744 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:56:48.123760 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:56:48.123769 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:56:48.123780 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:56:48.123790 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:56:48.123799 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:56:48.123811 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:56:48.123819 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:56:48.123834 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:56:48.123842 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:56:48.123854 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:56:48.123863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:56:48.123874 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:56:48.123884 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:56:48.123893 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:56:48.123904 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:56:48.123915 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:56:48.123927 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:56:48.123939 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:56:48.123948 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:56:48.123959 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:56:48.123968 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:56:48.123977 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:56:48.124016 systemd-journald[177]: Collecting audit messages is disabled.
Feb 13 19:56:48.124045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:56:48.124054 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:56:48.124065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:56:48.124075 systemd-journald[177]: Journal started
Feb 13 19:56:48.124103 systemd-journald[177]: Runtime Journal (/run/log/journal/4a5bf81e3ece4bee96ca67651111d907) is 8.0M, max 158.8M, 150.8M free.
Feb 13 19:56:48.132090 systemd-modules-load[178]: Inserted module 'overlay'
Feb 13 19:56:48.143442 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:56:48.148827 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:56:48.153782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:56:48.172823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:56:48.178363 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:56:48.182588 kernel: Bridge firewalling registered
Feb 13 19:56:48.182757 systemd-modules-load[178]: Inserted module 'br_netfilter'
Feb 13 19:56:48.191627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:56:48.194565 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:56:48.195041 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:56:48.200655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:56:48.218967 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:56:48.231539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:56:48.233947 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:56:48.243333 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:56:48.248262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:56:48.254793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:56:48.268712 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:56:48.275657 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:56:48.289443 dracut-cmdline[211]: dracut-dracut-053 Feb 13 19:56:48.293139 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:56:48.342815 systemd-resolved[212]: Positive Trust Anchors: Feb 13 19:56:48.342837 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:56:48.342903 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:56:48.360851 systemd-resolved[212]: Defaulting to hostname 'linux'. Feb 13 19:56:48.382304 kernel: SCSI subsystem initialized Feb 13 19:56:48.374713 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:56:48.377932 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:56:48.394437 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 19:56:48.405441 kernel: iscsi: registered transport (tcp) Feb 13 19:56:48.428516 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:56:48.428622 kernel: QLogic iSCSI HBA Driver Feb 13 19:56:48.466067 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:56:48.474740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:56:48.503439 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:56:48.503530 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:56:48.508506 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:56:48.549440 kernel: raid6: avx512x4 gen() 27577 MB/s Feb 13 19:56:48.568428 kernel: raid6: avx512x2 gen() 27687 MB/s Feb 13 19:56:48.587424 kernel: raid6: avx512x1 gen() 27645 MB/s Feb 13 19:56:48.606424 kernel: raid6: avx2x4 gen() 22936 MB/s Feb 13 19:56:48.625425 kernel: raid6: avx2x2 gen() 25044 MB/s Feb 13 19:56:48.645377 kernel: raid6: avx2x1 gen() 22044 MB/s Feb 13 19:56:48.645429 kernel: raid6: using algorithm avx512x2 gen() 27687 MB/s Feb 13 19:56:48.666509 kernel: raid6: .... xor() 30161 MB/s, rmw enabled Feb 13 19:56:48.666609 kernel: raid6: using avx512x2 recovery algorithm Feb 13 19:56:48.690448 kernel: xor: automatically using best checksumming function avx Feb 13 19:56:48.833443 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:56:48.844350 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:56:48.855592 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:56:48.881509 systemd-udevd[395]: Using default interface naming scheme 'v255'. Feb 13 19:56:48.886124 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:56:48.899594 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Feb 13 19:56:48.913960 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Feb 13 19:56:48.942928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:56:48.956576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:56:48.999171 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:56:49.010645 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:56:49.028434 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:56:49.039698 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:56:49.046062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:56:49.051467 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:56:49.065603 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:56:49.092995 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:56:49.110519 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:56:49.123472 kernel: hv_vmbus: Vmbus version:5.2 Feb 13 19:56:49.139444 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 19:56:49.146437 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 19:56:49.151944 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 19:56:49.152000 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 19:56:49.170080 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:56:49.192267 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 19:56:49.192301 kernel: AES CTR mode by8 optimization enabled Feb 13 19:56:49.192319 kernel: PTP clock support registered Feb 13 19:56:49.170320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:56:49.181079 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:56:49.184490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:56:49.184789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:56:49.199083 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:56:49.215662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:56:50.025542 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 19:56:50.025587 kernel: hv_vmbus: registering driver hv_utils Feb 13 19:56:50.025607 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 19:56:50.025625 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 19:56:50.025643 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 19:56:50.025661 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 19:56:50.025679 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:56:50.025697 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 19:56:50.025716 kernel: scsi host0: storvsc_host_t Feb 13 19:56:50.025988 kernel: scsi host1: storvsc_host_t Feb 13 19:56:50.026158 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 19:56:50.026196 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 19:56:49.990847 systemd-resolved[212]: Clock change detected. Flushing caches. Feb 13 19:56:50.029641 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:56:50.030812 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 19:56:50.043937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:56:50.058431 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 19:56:50.070570 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 19:56:50.077454 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 19:56:50.086295 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 19:56:50.093969 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:56:50.093997 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 19:56:50.090304 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:56:50.101677 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:56:50.120598 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 19:56:50.141654 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 19:56:50.141881 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 19:56:50.142064 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 19:56:50.142242 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 19:56:50.142626 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:56:50.142656 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 19:56:50.144560 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:56:50.175435 kernel: hv_netvsc 7c1e5236-d83d-7c1e-5236-d83d7c1e5236 eth0: VF slot 1 added Feb 13 19:56:50.188427 kernel: hv_vmbus: registering driver hv_pci Feb 13 19:56:50.188506 kernel: hv_pci 94101dd7-8e46-4c66-89b3-f723565a3a3d: PCI VMBus probing: Using version 0x10004 Feb 13 19:56:50.239332 kernel: hv_pci 94101dd7-8e46-4c66-89b3-f723565a3a3d: PCI host bridge to bus 8e46:00 Feb 13 19:56:50.239887 kernel: pci_bus 8e46:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 13 19:56:50.240089 kernel: pci_bus 8e46:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 19:56:50.240253 kernel: pci 8e46:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 13 19:56:50.240491 kernel: pci 8e46:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 19:56:50.240691 kernel: pci 8e46:00:02.0: enabling Extended Tags Feb 13 19:56:50.240865 kernel: pci 8e46:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8e46:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 13 19:56:50.241038 kernel: pci_bus 8e46:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 19:56:50.241191 kernel: pci 8e46:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 19:56:50.327428 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (446) Feb 13 19:56:50.331252 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 19:56:50.346407 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (442) Feb 13 19:56:50.394483 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 19:56:50.418562 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 19:56:50.432850 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
Feb 13 19:56:50.436395 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 19:56:50.499412 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:56:50.516103 kernel: mlx5_core 8e46:00:02.0: enabling device (0000 -> 0002) Feb 13 19:56:50.761634 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:56:50.761679 kernel: mlx5_core 8e46:00:02.0: firmware version: 14.30.5000 Feb 13 19:56:50.761904 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:56:50.761923 kernel: hv_netvsc 7c1e5236-d83d-7c1e-5236-d83d7c1e5236 eth0: VF registering: eth1 Feb 13 19:56:50.762086 kernel: mlx5_core 8e46:00:02.0 eth1: joined to eth0 Feb 13 19:56:50.762270 kernel: mlx5_core 8e46:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 19:56:50.773449 kernel: mlx5_core 8e46:00:02.0 enP36422s1: renamed from eth1 Feb 13 19:56:51.539592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:56:51.540475 disk-uuid[596]: The operation has completed successfully. Feb 13 19:56:51.630668 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:56:51.630822 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:56:51.655586 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:56:51.661872 sh[688]: Success Feb 13 19:56:51.681849 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 19:56:51.758043 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:56:51.769728 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:56:51.775982 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:56:51.798407 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 19:56:51.798475 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:56:51.804511 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:56:51.807464 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:56:51.810064 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:56:51.874972 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:56:51.878567 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:56:51.890668 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:56:51.895801 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:56:51.918319 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:56:51.918408 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:56:51.921271 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:56:51.932036 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:56:51.946733 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:56:51.946187 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:56:51.957285 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:56:51.969656 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:56:51.992414 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:56:52.003658 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 19:56:52.024855 systemd-networkd[872]: lo: Link UP Feb 13 19:56:52.024864 systemd-networkd[872]: lo: Gained carrier Feb 13 19:56:52.029319 systemd-networkd[872]: Enumeration completed Feb 13 19:56:52.030275 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:56:52.034674 systemd[1]: Reached target network.target - Network. Feb 13 19:56:52.034850 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:56:52.034855 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:56:52.105631 kernel: mlx5_core 8e46:00:02.0 enP36422s1: Link up Feb 13 19:56:52.148807 kernel: hv_netvsc 7c1e5236-d83d-7c1e-5236-d83d7c1e5236 eth0: Data path switched to VF: enP36422s1 Feb 13 19:56:52.144704 systemd-networkd[872]: enP36422s1: Link UP Feb 13 19:56:52.144838 systemd-networkd[872]: eth0: Link UP Feb 13 19:56:52.151218 systemd-networkd[872]: eth0: Gained carrier Feb 13 19:56:52.151238 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:56:52.164437 systemd-networkd[872]: enP36422s1: Gained carrier Feb 13 19:56:52.196479 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 19:56:52.214929 ignition[836]: Ignition 2.20.0 Feb 13 19:56:52.214942 ignition[836]: Stage: fetch-offline Feb 13 19:56:52.216623 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 19:56:52.214986 ignition[836]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:52.214997 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:52.215113 ignition[836]: parsed url from cmdline: "" Feb 13 19:56:52.215118 ignition[836]: no config URL provided Feb 13 19:56:52.215125 ignition[836]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:56:52.215135 ignition[836]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:56:52.215142 ignition[836]: failed to fetch config: resource requires networking Feb 13 19:56:52.215499 ignition[836]: Ignition finished successfully Feb 13 19:56:52.239730 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:56:52.256977 ignition[883]: Ignition 2.20.0 Feb 13 19:56:52.256989 ignition[883]: Stage: fetch Feb 13 19:56:52.257225 ignition[883]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:52.257239 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:52.257347 ignition[883]: parsed url from cmdline: "" Feb 13 19:56:52.257350 ignition[883]: no config URL provided Feb 13 19:56:52.257355 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:56:52.257361 ignition[883]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:56:52.257414 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 19:56:52.340866 ignition[883]: GET result: OK Feb 13 19:56:52.340953 ignition[883]: config has been read from IMDS userdata Feb 13 19:56:52.340975 ignition[883]: parsing config with SHA512: 059ea373429eeb1730519509b43638489a6d7517f07a4003a25cbff0f74c7966fea688908c27a22e178a4eb2901830009fd93c1e39ebe1f0f4265747bdf93ae7 Feb 13 19:56:52.347600 unknown[883]: fetched base config from "system" Feb 13 19:56:52.347772 unknown[883]: fetched base config from "system" Feb 13 19:56:52.348085 ignition[883]: fetch: fetch complete Feb 13 
19:56:52.347780 unknown[883]: fetched user config from "azure" Feb 13 19:56:52.348091 ignition[883]: fetch: fetch passed Feb 13 19:56:52.349728 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:56:52.348141 ignition[883]: Ignition finished successfully Feb 13 19:56:52.360891 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:56:52.376615 ignition[889]: Ignition 2.20.0 Feb 13 19:56:52.376626 ignition[889]: Stage: kargs Feb 13 19:56:52.378761 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:56:52.376867 ignition[889]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:52.376880 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:52.377647 ignition[889]: kargs: kargs passed Feb 13 19:56:52.377697 ignition[889]: Ignition finished successfully Feb 13 19:56:52.392580 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:56:52.407248 ignition[895]: Ignition 2.20.0 Feb 13 19:56:52.407260 ignition[895]: Stage: disks Feb 13 19:56:52.409221 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:56:52.407520 ignition[895]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:52.407533 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:52.408257 ignition[895]: disks: disks passed Feb 13 19:56:52.408300 ignition[895]: Ignition finished successfully Feb 13 19:56:52.424336 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:56:52.427293 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:56:52.433465 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:56:52.439123 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:56:52.446838 systemd[1]: Reached target basic.target - Basic System. 
Feb 13 19:56:52.456561 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:56:52.481729 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 19:56:52.486161 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:56:52.500547 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:56:52.593646 kernel: EXT4-fs (sda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 19:56:52.594363 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:56:52.599165 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:56:52.615538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:56:52.621060 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:56:52.631518 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (914) Feb 13 19:56:52.639853 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:56:52.639935 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:56:52.640165 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 19:56:52.646557 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:56:52.649787 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:56:52.663550 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:56:52.649830 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:56:52.655337 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:56:52.668399 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 19:56:52.669721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:56:52.826902 coreos-metadata[916]: Feb 13 19:56:52.826 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 19:56:52.833235 coreos-metadata[916]: Feb 13 19:56:52.833 INFO Fetch successful Feb 13 19:56:52.836395 coreos-metadata[916]: Feb 13 19:56:52.833 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 19:56:52.844000 coreos-metadata[916]: Feb 13 19:56:52.843 INFO Fetch successful Feb 13 19:56:52.848890 coreos-metadata[916]: Feb 13 19:56:52.847 INFO wrote hostname ci-4186.1.1-a-5a2e75f9ad to /sysroot/etc/hostname Feb 13 19:56:52.851170 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 19:56:52.864076 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:56:52.882262 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:56:52.887597 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:56:52.898190 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:56:53.132448 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:56:53.144542 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:56:53.151597 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:56:53.161476 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:56:53.160634 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:56:53.195608 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 19:56:53.199499 ignition[1033]: INFO : Ignition 2.20.0 Feb 13 19:56:53.203372 ignition[1033]: INFO : Stage: mount Feb 13 19:56:53.203372 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:53.203372 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:53.203372 ignition[1033]: INFO : mount: mount passed Feb 13 19:56:53.203372 ignition[1033]: INFO : Ignition finished successfully Feb 13 19:56:53.213760 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:56:53.226498 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:56:53.235050 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:56:53.258411 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045) Feb 13 19:56:53.262401 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:56:53.262449 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:56:53.267446 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:56:53.273404 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:56:53.274812 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:56:53.299712 ignition[1062]: INFO : Ignition 2.20.0 Feb 13 19:56:53.299712 ignition[1062]: INFO : Stage: files Feb 13 19:56:53.304000 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:53.304000 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:53.310100 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:56:53.319472 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:56:53.319472 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:56:53.337745 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:56:53.341760 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:56:53.341760 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:56:53.338238 unknown[1062]: wrote ssh authorized keys file for user: core Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 19:56:53.519717 systemd-networkd[872]: enP36422s1: Gained IPv6LL Feb 13 19:56:53.902034 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 19:56:53.967585 systemd-networkd[872]: eth0: Gained IPv6LL Feb 13 19:56:54.220692 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:56:54.226611 ignition[1062]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:56:54.231285 ignition[1062]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:56:54.231285 ignition[1062]: INFO : files: files passed Feb 13 19:56:54.231285 ignition[1062]: INFO : Ignition finished successfully Feb 13 19:56:54.241469 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:56:54.250585 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:56:54.257416 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:56:54.260640 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:56:54.262437 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 19:56:54.277795 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:56:54.277795 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:56:54.289305 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:56:54.280246 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:56:54.285930 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:56:54.307677 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:56:54.345903 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:56:54.346026 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:56:54.352096 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:56:54.360393 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:56:54.365452 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:56:54.378691 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:56:54.393673 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:56:54.406584 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:56:54.423117 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:56:54.424270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:56:54.425149 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:56:54.425598 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Feb 13 19:56:54.425745 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:56:54.426438 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:56:54.426854 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:56:54.427354 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:56:54.428189 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:56:54.428633 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:56:54.429033 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:56:54.429648 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:56:54.430066 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:56:54.430479 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:56:54.430890 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:56:54.431332 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:56:54.431527 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:56:54.432242 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:56:54.433391 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:56:54.433829 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:56:54.471628 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:56:54.524523 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:56:54.524762 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:56:54.530444 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Feb 13 19:56:54.530623 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:56:54.538365 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:56:54.541246 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:56:54.553966 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 19:56:54.557141 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 19:56:54.569637 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:56:54.588082 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:56:54.590681 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:56:54.590892 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:56:54.594476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:56:54.594643 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:56:54.610933 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:56:54.611046 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:56:54.621544 ignition[1114]: INFO : Ignition 2.20.0 Feb 13 19:56:54.621544 ignition[1114]: INFO : Stage: umount Feb 13 19:56:54.621544 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:54.621544 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:54.621544 ignition[1114]: INFO : umount: umount passed Feb 13 19:56:54.621544 ignition[1114]: INFO : Ignition finished successfully Feb 13 19:56:54.624628 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:56:54.624736 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:56:54.628988 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 13 19:56:54.629098 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:56:54.633549 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:56:54.633606 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:56:54.636804 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:56:54.636852 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:56:54.637154 systemd[1]: Stopped target network.target - Network. Feb 13 19:56:54.637570 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:56:54.637615 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:56:54.638429 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:56:54.638926 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:56:54.686273 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:56:54.693652 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:56:54.696002 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:56:54.700551 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:56:54.702868 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:56:54.709756 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:56:54.709820 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:56:54.714576 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:56:54.714649 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:56:54.719691 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:56:54.719750 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:56:54.725063 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Feb 13 19:56:54.729761 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:56:54.735940 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:56:54.743917 systemd-networkd[872]: eth0: DHCPv6 lease lost Feb 13 19:56:54.745887 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:56:54.746035 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:56:54.750793 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:56:54.750882 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:56:54.768565 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:56:54.773306 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:56:54.773402 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:56:54.777130 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:56:54.784088 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:56:54.784201 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:56:54.795110 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:56:54.795297 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:56:54.803953 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:56:54.804046 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:56:54.813237 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:56:54.815905 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:56:54.818795 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:56:54.818850 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 19:56:54.831120 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:56:54.831191 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:56:54.836125 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:56:54.836180 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:56:54.852579 kernel: hv_netvsc 7c1e5236-d83d-7c1e-5236-d83d7c1e5236 eth0: Data path switched from VF: enP36422s1 Feb 13 19:56:54.851947 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:56:54.852911 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:56:54.852973 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:56:54.853361 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:56:54.853408 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:56:54.853764 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:56:54.853799 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:56:54.854186 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:56:54.854222 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:56:54.890742 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:56:54.890843 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:56:54.894288 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:56:54.894344 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:56:54.899918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 19:56:54.899992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:56:54.918065 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:56:54.918228 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:56:54.927835 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:56:54.927958 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:56:55.005953 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:56:55.006155 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:56:55.012104 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:56:55.019571 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:56:55.019663 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:56:55.032682 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:56:55.053278 systemd[1]: Switching root. 
Feb 13 19:56:55.094783 systemd-journald[177]: Journal stopped Feb 13 19:56:48.113328 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025 Feb 13 19:56:48.113359 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:56:48.113369 kernel: BIOS-provided physical RAM map: Feb 13 19:56:48.113378 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 19:56:48.113384 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 13 19:56:48.113390 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 13 19:56:48.113400 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 13 19:56:48.113407 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 13 19:56:48.115705 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 13 19:56:48.115716 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 13 19:56:48.115722 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 13 19:56:48.115729 kernel: printk: bootconsole [earlyser0] enabled Feb 13 19:56:48.115738 kernel: NX (Execute Disable) protection: active Feb 13 19:56:48.115745 kernel: APIC: Static calls initialized Feb 13 19:56:48.115760 kernel: efi: EFI v2.7 by Microsoft Feb 13 19:56:48.115771 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 Feb 13 19:56:48.115778 
kernel: random: crng init done Feb 13 19:56:48.115788 kernel: secureboot: Secure boot disabled Feb 13 19:56:48.115796 kernel: SMBIOS 3.1.0 present. Feb 13 19:56:48.115805 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Feb 13 19:56:48.115814 kernel: Hypervisor detected: Microsoft Hyper-V Feb 13 19:56:48.115823 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 13 19:56:48.115830 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Feb 13 19:56:48.115840 kernel: Hyper-V: Nested features: 0x1e0101 Feb 13 19:56:48.115850 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 13 19:56:48.115858 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 13 19:56:48.115867 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 19:56:48.115874 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 19:56:48.115885 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 13 19:56:48.115893 kernel: tsc: Detected 2593.907 MHz processor Feb 13 19:56:48.115901 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 19:56:48.115912 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 19:56:48.115919 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 13 19:56:48.115930 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 19:56:48.115939 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 19:56:48.115947 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 13 19:56:48.115954 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 13 19:56:48.115964 kernel: Using GB pages for direct mapping Feb 13 19:56:48.115971 kernel: ACPI: Early table checksum verification disabled Feb 13 19:56:48.115980 kernel: ACPI: 
RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 13 19:56:48.115993 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 19:56:48.116006 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 19:56:48.116014 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 13 19:56:48.116022 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 13 19:56:48.116032 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 19:56:48.116040 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 19:56:48.116050 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 19:56:48.116061 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 19:56:48.116070 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 19:56:48.116080 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 19:56:48.116087 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 19:56:48.116095 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 13 19:56:48.116102 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 13 19:56:48.116113 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 13 19:56:48.116120 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 13 19:56:48.116129 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 13 19:56:48.116141 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 13 19:56:48.116150 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 13 19:56:48.116159 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Feb 13 19:56:48.116167 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 13 19:56:48.116176 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 13 19:56:48.116185 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 19:56:48.116193 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 19:56:48.116203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 13 19:56:48.116211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 13 19:56:48.116224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 13 19:56:48.116233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 13 19:56:48.116241 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 13 19:56:48.116251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 13 19:56:48.116258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 13 19:56:48.116268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 13 19:56:48.116277 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 13 19:56:48.116284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 13 19:56:48.116298 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 13 19:56:48.116305 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 13 19:56:48.116315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 13 19:56:48.116323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 13 19:56:48.116331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 13 19:56:48.116341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 13 19:56:48.116352 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 13 19:56:48.116362 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 13 19:56:48.116370 kernel: Zone ranges: Feb 13 19:56:48.116383 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 19:56:48.116392 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 19:56:48.116399 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 19:56:48.116418 kernel: Movable zone start for each node Feb 13 19:56:48.116427 kernel: Early memory node ranges Feb 13 19:56:48.116437 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 13 19:56:48.116444 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 13 19:56:48.116454 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 13 19:56:48.116462 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 19:56:48.116475 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 13 19:56:48.116484 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:56:48.116491 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 19:56:48.116502 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 13 19:56:48.116509 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 13 19:56:48.116520 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 13 19:56:48.116528 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 13 19:56:48.116535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 19:56:48.116545 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 19:56:48.116556 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 13 19:56:48.116566 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 19:56:48.116575 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 13 19:56:48.116582 kernel: Booting paravirtualized kernel on Hyper-V Feb 13 19:56:48.116593 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 19:56:48.116600 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 19:56:48.116608 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 19:56:48.116618 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 19:56:48.116626 kernel: pcpu-alloc: [0] 0 1 Feb 13 19:56:48.116638 kernel: Hyper-V: PV spinlocks enabled Feb 13 19:56:48.116646 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 19:56:48.116656 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:56:48.116666 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:56:48.116673 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 19:56:48.116683 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:56:48.116691 kernel: Fallback order for Node 0: 0 Feb 13 19:56:48.116699 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 13 19:56:48.116712 kernel: Policy zone: Normal Feb 13 19:56:48.116732 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:56:48.116740 kernel: software IO TLB: area num 2. 
Feb 13 19:56:48.116754 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 312164K reserved, 0K cma-reserved) Feb 13 19:56:48.116763 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 19:56:48.116771 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 19:56:48.116781 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 19:56:48.116789 kernel: Dynamic Preempt: voluntary Feb 13 19:56:48.116798 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:56:48.116808 kernel: rcu: RCU event tracing is enabled. Feb 13 19:56:48.116818 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 19:56:48.116833 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:56:48.116841 kernel: Rude variant of Tasks RCU enabled. Feb 13 19:56:48.116851 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:56:48.116860 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:56:48.116868 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 19:56:48.116878 kernel: Using NULL legacy PIC Feb 13 19:56:48.116890 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 13 19:56:48.116899 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 19:56:48.116909 kernel: Console: colour dummy device 80x25 Feb 13 19:56:48.116920 kernel: printk: console [tty1] enabled Feb 13 19:56:48.116928 kernel: printk: console [ttyS0] enabled Feb 13 19:56:48.116937 kernel: printk: bootconsole [earlyser0] disabled Feb 13 19:56:48.116947 kernel: ACPI: Core revision 20230628 Feb 13 19:56:48.116955 kernel: Failed to register legacy timer interrupt Feb 13 19:56:48.116966 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 19:56:48.116977 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 19:56:48.116987 kernel: Hyper-V: Using IPI hypercalls Feb 13 19:56:48.116996 kernel: APIC: send_IPI() replaced with hv_send_ipi() Feb 13 19:56:48.117004 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Feb 13 19:56:48.117015 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Feb 13 19:56:48.117023 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Feb 13 19:56:48.117033 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Feb 13 19:56:48.117042 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Feb 13 19:56:48.117050 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Feb 13 19:56:48.117064 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 19:56:48.117072 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 19:56:48.117083 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 19:56:48.117091 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 19:56:48.117099 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 19:56:48.117110 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 19:56:48.117118 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Feb 13 19:56:48.117129 kernel: RETBleed: Vulnerable Feb 13 19:56:48.117137 kernel: Speculative Store Bypass: Vulnerable Feb 13 19:56:48.117145 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 19:56:48.117158 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 19:56:48.117166 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 19:56:48.117176 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 19:56:48.117184 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 19:56:48.117194 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 19:56:48.117203 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 19:56:48.117211 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 19:56:48.117222 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 19:56:48.117230 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 13 19:56:48.117238 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 13 19:56:48.117248 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 13 19:56:48.117259 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 13 19:56:48.117270 kernel: Freeing SMP alternatives memory: 32K Feb 13 19:56:48.117277 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:56:48.117285 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:56:48.117293 kernel: landlock: Up and running. Feb 13 19:56:48.117301 kernel: SELinux: Initializing. 
Feb 13 19:56:48.117309 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 19:56:48.117316 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 19:56:48.117324 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 19:56:48.117333 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:56:48.117341 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:56:48.117352 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:56:48.117360 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 19:56:48.117369 kernel: signal: max sigframe size: 3632 Feb 13 19:56:48.117379 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:56:48.117388 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:56:48.117398 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 19:56:48.117407 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:56:48.119622 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:56:48.119635 kernel: .... node #0, CPUs: #1 Feb 13 19:56:48.119652 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 13 19:56:48.119662 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 13 19:56:48.119671 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 19:56:48.119682 kernel: smpboot: Max logical packages: 1 Feb 13 19:56:48.119691 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 13 19:56:48.119701 kernel: devtmpfs: initialized Feb 13 19:56:48.119710 kernel: x86/mm: Memory block size: 128MB Feb 13 19:56:48.119718 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 13 19:56:48.119732 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:56:48.119740 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 19:56:48.119751 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:56:48.119760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:56:48.119768 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:56:48.119779 kernel: audit: type=2000 audit(1739476607.028:1): state=initialized audit_enabled=0 res=1 Feb 13 19:56:48.119787 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:56:48.119797 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:56:48.119806 kernel: cpuidle: using governor menu Feb 13 19:56:48.119817 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:56:48.119828 kernel: dca service started, version 1.12.1 Feb 13 19:56:48.119836 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Feb 13 19:56:48.119845 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 19:56:48.119853 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:56:48.119861 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:56:48.119869 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:56:48.119878 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:56:48.119888 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:56:48.119900 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:56:48.119908 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:56:48.119920 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:56:48.119928 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:56:48.119938 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 19:56:48.119948 kernel: ACPI: Interpreter enabled Feb 13 19:56:48.119957 kernel: ACPI: PM: (supports S0 S5) Feb 13 19:56:48.119968 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 19:56:48.119977 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:56:48.119990 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 19:56:48.120000 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 13 19:56:48.120009 kernel: iommu: Default domain type: Translated Feb 13 19:56:48.120018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:56:48.120028 kernel: efivars: Registered efivars operations Feb 13 19:56:48.120037 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:56:48.120045 kernel: PCI: System does not support PCI Feb 13 19:56:48.120055 kernel: vgaarb: loaded Feb 13 19:56:48.120064 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 13 19:56:48.120078 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:56:48.120086 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:56:48.120094 kernel: pnp: PnP ACPI init Feb 13 19:56:48.120105 
kernel: pnp: PnP ACPI: found 3 devices Feb 13 19:56:48.120114 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:56:48.120123 kernel: NET: Registered PF_INET protocol family Feb 13 19:56:48.120133 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 19:56:48.120141 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 19:56:48.120152 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:56:48.120164 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:56:48.120175 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 19:56:48.120183 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 19:56:48.120193 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 19:56:48.120203 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 19:56:48.120211 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:56:48.120222 kernel: NET: Registered PF_XDP protocol family Feb 13 19:56:48.120230 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:56:48.120241 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 19:56:48.120253 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Feb 13 19:56:48.120263 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 19:56:48.120272 kernel: Initialise system trusted keyrings Feb 13 19:56:48.120280 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 19:56:48.120291 kernel: Key type asymmetric registered Feb 13 19:56:48.120299 kernel: Asymmetric key parser 'x509' registered Feb 13 19:56:48.120308 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:56:48.120318 kernel: io scheduler mq-deadline 
registered Feb 13 19:56:48.120326 kernel: io scheduler kyber registered Feb 13 19:56:48.120339 kernel: io scheduler bfq registered Feb 13 19:56:48.120347 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 19:56:48.120358 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:56:48.120367 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:56:48.120378 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 19:56:48.120387 kernel: i8042: PNP: No PS/2 controller found. Feb 13 19:56:48.120548 kernel: rtc_cmos 00:02: registered as rtc0 Feb 13 19:56:48.120642 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T19:56:47 UTC (1739476607) Feb 13 19:56:48.120730 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 13 19:56:48.120744 kernel: intel_pstate: CPU model not supported Feb 13 19:56:48.120752 kernel: efifb: probing for efifb Feb 13 19:56:48.120764 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 19:56:48.120772 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 19:56:48.120781 kernel: efifb: scrolling: redraw Feb 13 19:56:48.120792 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 19:56:48.120802 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 19:56:48.120811 kernel: fb0: EFI VGA frame buffer device Feb 13 19:56:48.120826 kernel: pstore: Using crash dump compression: deflate Feb 13 19:56:48.120837 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 19:56:48.120845 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:56:48.120855 kernel: Segment Routing with IPv6 Feb 13 19:56:48.120866 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:56:48.120875 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:56:48.120885 kernel: Key type dns_resolver registered Feb 13 19:56:48.120894 kernel: IPI shorthand broadcast: enabled Feb 13 19:56:48.120903 kernel: 
sched_clock: Marking stable (986029400, 48870400)->(1260575000, -225675200) Feb 13 19:56:48.120916 kernel: registered taskstats version 1 Feb 13 19:56:48.120925 kernel: Loading compiled-in X.509 certificates Feb 13 19:56:48.120936 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 19:56:48.120944 kernel: Key type .fscrypt registered Feb 13 19:56:48.120952 kernel: Key type fscrypt-provisioning registered Feb 13 19:56:48.120963 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 19:56:48.120971 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:56:48.120981 kernel: ima: No architecture policies found Feb 13 19:56:48.120993 kernel: clk: Disabling unused clocks Feb 13 19:56:48.121003 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 19:56:48.121013 kernel: Write protecting the kernel read-only data: 38912k Feb 13 19:56:48.121021 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 19:56:48.121032 kernel: Run /init as init process Feb 13 19:56:48.121040 kernel: with arguments: Feb 13 19:56:48.121050 kernel: /init Feb 13 19:56:48.121059 kernel: with environment: Feb 13 19:56:48.121067 kernel: HOME=/ Feb 13 19:56:48.121078 kernel: TERM=linux Feb 13 19:56:48.121088 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:56:48.121102 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:56:48.121115 systemd[1]: Detected virtualization microsoft. Feb 13 19:56:48.121125 systemd[1]: Detected architecture x86-64. Feb 13 19:56:48.121137 systemd[1]: Running in initrd. Feb 13 19:56:48.121147 systemd[1]: No hostname configured, using default hostname. 
Feb 13 19:56:48.121158 systemd[1]: Hostname set to . Feb 13 19:56:48.123668 systemd[1]: Initializing machine ID from random generator. Feb 13 19:56:48.123679 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:56:48.123691 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:56:48.123701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:56:48.123713 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:56:48.123723 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:56:48.123733 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:56:48.123744 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:56:48.123760 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:56:48.123769 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:56:48.123780 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:56:48.123790 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:56:48.123799 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:56:48.123811 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:56:48.123819 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:56:48.123834 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:56:48.123842 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:56:48.123854 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Feb 13 19:56:48.123863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:56:48.123874 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:56:48.123884 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:56:48.123893 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:56:48.123904 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:56:48.123915 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:56:48.123927 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:56:48.123939 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:56:48.123948 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:56:48.123959 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:56:48.123968 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:56:48.123977 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:56:48.124016 systemd-journald[177]: Collecting audit messages is disabled. Feb 13 19:56:48.124045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:56:48.124054 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:56:48.124065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:56:48.124075 systemd-journald[177]: Journal started Feb 13 19:56:48.124103 systemd-journald[177]: Runtime Journal (/run/log/journal/4a5bf81e3ece4bee96ca67651111d907) is 8.0M, max 158.8M, 150.8M free. Feb 13 19:56:48.132090 systemd-modules-load[178]: Inserted module 'overlay' Feb 13 19:56:48.143442 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:56:48.148827 systemd[1]: Finished systemd-fsck-usr.service. 
Feb 13 19:56:48.153782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:56:48.172823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:56:48.178363 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:56:48.182588 kernel: Bridge firewalling registered Feb 13 19:56:48.182757 systemd-modules-load[178]: Inserted module 'br_netfilter' Feb 13 19:56:48.191627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:56:48.194565 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:56:48.195041 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:56:48.200655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:56:48.218967 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:56:48.231539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:56:48.233947 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:56:48.243333 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:56:48.248262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:56:48.254793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:56:48.268712 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:56:48.275657 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 19:56:48.289443 dracut-cmdline[211]: dracut-dracut-053 Feb 13 19:56:48.293139 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:56:48.342815 systemd-resolved[212]: Positive Trust Anchors: Feb 13 19:56:48.342837 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:56:48.342903 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:56:48.360851 systemd-resolved[212]: Defaulting to hostname 'linux'. Feb 13 19:56:48.382304 kernel: SCSI subsystem initialized Feb 13 19:56:48.374713 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:56:48.377932 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:56:48.394437 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 19:56:48.405441 kernel: iscsi: registered transport (tcp) Feb 13 19:56:48.428516 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:56:48.428622 kernel: QLogic iSCSI HBA Driver Feb 13 19:56:48.466067 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:56:48.474740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:56:48.503439 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:56:48.503530 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:56:48.508506 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:56:48.549440 kernel: raid6: avx512x4 gen() 27577 MB/s Feb 13 19:56:48.568428 kernel: raid6: avx512x2 gen() 27687 MB/s Feb 13 19:56:48.587424 kernel: raid6: avx512x1 gen() 27645 MB/s Feb 13 19:56:48.606424 kernel: raid6: avx2x4 gen() 22936 MB/s Feb 13 19:56:48.625425 kernel: raid6: avx2x2 gen() 25044 MB/s Feb 13 19:56:48.645377 kernel: raid6: avx2x1 gen() 22044 MB/s Feb 13 19:56:48.645429 kernel: raid6: using algorithm avx512x2 gen() 27687 MB/s Feb 13 19:56:48.666509 kernel: raid6: .... xor() 30161 MB/s, rmw enabled Feb 13 19:56:48.666609 kernel: raid6: using avx512x2 recovery algorithm Feb 13 19:56:48.690448 kernel: xor: automatically using best checksumming function avx Feb 13 19:56:48.833443 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:56:48.844350 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:56:48.855592 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:56:48.881509 systemd-udevd[395]: Using default interface naming scheme 'v255'. Feb 13 19:56:48.886124 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:56:48.899594 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Feb 13 19:56:48.913960 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Feb 13 19:56:48.942928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:56:48.956576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:56:48.999171 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:56:49.010645 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:56:49.028434 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:56:49.039698 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:56:49.046062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:56:49.051467 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:56:49.065603 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:56:49.092995 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:56:49.110519 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:56:49.123472 kernel: hv_vmbus: Vmbus version:5.2 Feb 13 19:56:49.139444 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 19:56:49.146437 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 19:56:49.151944 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 19:56:49.152000 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 19:56:49.170080 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:56:49.192267 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 19:56:49.192301 kernel: AES CTR mode by8 optimization enabled Feb 13 19:56:49.192319 kernel: PTP clock support registered Feb 13 19:56:49.170320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:56:49.181079 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:56:49.184490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:56:49.184789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:56:49.199083 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:56:49.215662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:56:50.025542 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 19:56:50.025587 kernel: hv_vmbus: registering driver hv_utils Feb 13 19:56:50.025607 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 19:56:50.025625 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 19:56:50.025643 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 19:56:50.025661 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 19:56:50.025679 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:56:50.025697 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 19:56:50.025716 kernel: scsi host0: storvsc_host_t Feb 13 19:56:50.025988 kernel: scsi host1: storvsc_host_t Feb 13 19:56:50.026158 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 19:56:50.026196 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 19:56:49.990847 systemd-resolved[212]: Clock change detected. Flushing caches. Feb 13 19:56:50.029641 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:56:50.030812 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 19:56:50.043937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:56:50.058431 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 19:56:50.070570 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 19:56:50.077454 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 19:56:50.086295 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 19:56:50.093969 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:56:50.093997 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 19:56:50.090304 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:56:50.101677 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:56:50.120598 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 19:56:50.141654 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 19:56:50.141881 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 19:56:50.142064 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 19:56:50.142242 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 19:56:50.142626 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:56:50.142656 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 19:56:50.144560 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:56:50.175435 kernel: hv_netvsc 7c1e5236-d83d-7c1e-5236-d83d7c1e5236 eth0: VF slot 1 added Feb 13 19:56:50.188427 kernel: hv_vmbus: registering driver hv_pci Feb 13 19:56:50.188506 kernel: hv_pci 94101dd7-8e46-4c66-89b3-f723565a3a3d: PCI VMBus probing: Using version 0x10004 Feb 13 19:56:50.239332 kernel: hv_pci 94101dd7-8e46-4c66-89b3-f723565a3a3d: PCI host bridge to bus 8e46:00 Feb 13 19:56:50.239887 kernel: pci_bus 8e46:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 13 19:56:50.240089 kernel: pci_bus 8e46:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 19:56:50.240253 kernel: pci 8e46:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 13 19:56:50.240491 kernel: pci 8e46:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 19:56:50.240691 kernel: pci 8e46:00:02.0: enabling Extended Tags Feb 13 19:56:50.240865 kernel: pci 8e46:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8e46:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 13 19:56:50.241038 kernel: pci_bus 8e46:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 19:56:50.241191 kernel: pci 8e46:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 19:56:50.327428 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (446) Feb 13 19:56:50.331252 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 19:56:50.346407 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (442) Feb 13 19:56:50.394483 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 19:56:50.418562 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 19:56:50.432850 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
Feb 13 19:56:50.436395 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 19:56:50.499412 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:56:50.516103 kernel: mlx5_core 8e46:00:02.0: enabling device (0000 -> 0002) Feb 13 19:56:50.761634 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:56:50.761679 kernel: mlx5_core 8e46:00:02.0: firmware version: 14.30.5000 Feb 13 19:56:50.761904 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:56:50.761923 kernel: hv_netvsc 7c1e5236-d83d-7c1e-5236-d83d7c1e5236 eth0: VF registering: eth1 Feb 13 19:56:50.762086 kernel: mlx5_core 8e46:00:02.0 eth1: joined to eth0 Feb 13 19:56:50.762270 kernel: mlx5_core 8e46:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 19:56:50.773449 kernel: mlx5_core 8e46:00:02.0 enP36422s1: renamed from eth1 Feb 13 19:56:51.539592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:56:51.540475 disk-uuid[596]: The operation has completed successfully. Feb 13 19:56:51.630668 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:56:51.630822 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:56:51.655586 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:56:51.661872 sh[688]: Success Feb 13 19:56:51.681849 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 19:56:51.758043 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:56:51.769728 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:56:51.775982 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:56:51.798407 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 19:56:51.798475 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:56:51.804511 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:56:51.807464 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:56:51.810064 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:56:51.874972 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:56:51.878567 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:56:51.890668 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:56:51.895801 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:56:51.918319 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:56:51.918408 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:56:51.921271 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:56:51.932036 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:56:51.946733 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:56:51.946187 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:56:51.957285 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:56:51.969656 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:56:51.992414 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:56:52.003658 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 19:56:52.024855 systemd-networkd[872]: lo: Link UP Feb 13 19:56:52.024864 systemd-networkd[872]: lo: Gained carrier Feb 13 19:56:52.029319 systemd-networkd[872]: Enumeration completed Feb 13 19:56:52.030275 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:56:52.034674 systemd[1]: Reached target network.target - Network. Feb 13 19:56:52.034850 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:56:52.034855 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:56:52.105631 kernel: mlx5_core 8e46:00:02.0 enP36422s1: Link up Feb 13 19:56:52.148807 kernel: hv_netvsc 7c1e5236-d83d-7c1e-5236-d83d7c1e5236 eth0: Data path switched to VF: enP36422s1 Feb 13 19:56:52.144704 systemd-networkd[872]: enP36422s1: Link UP Feb 13 19:56:52.144838 systemd-networkd[872]: eth0: Link UP Feb 13 19:56:52.151218 systemd-networkd[872]: eth0: Gained carrier Feb 13 19:56:52.151238 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:56:52.164437 systemd-networkd[872]: enP36422s1: Gained carrier Feb 13 19:56:52.196479 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 19:56:52.214929 ignition[836]: Ignition 2.20.0 Feb 13 19:56:52.214942 ignition[836]: Stage: fetch-offline Feb 13 19:56:52.216623 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 19:56:52.214986 ignition[836]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:52.214997 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:52.215113 ignition[836]: parsed url from cmdline: "" Feb 13 19:56:52.215118 ignition[836]: no config URL provided Feb 13 19:56:52.215125 ignition[836]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:56:52.215135 ignition[836]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:56:52.215142 ignition[836]: failed to fetch config: resource requires networking Feb 13 19:56:52.215499 ignition[836]: Ignition finished successfully Feb 13 19:56:52.239730 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:56:52.256977 ignition[883]: Ignition 2.20.0 Feb 13 19:56:52.256989 ignition[883]: Stage: fetch Feb 13 19:56:52.257225 ignition[883]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:52.257239 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:52.257347 ignition[883]: parsed url from cmdline: "" Feb 13 19:56:52.257350 ignition[883]: no config URL provided Feb 13 19:56:52.257355 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:56:52.257361 ignition[883]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:56:52.257414 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 19:56:52.340866 ignition[883]: GET result: OK Feb 13 19:56:52.340953 ignition[883]: config has been read from IMDS userdata Feb 13 19:56:52.340975 ignition[883]: parsing config with SHA512: 059ea373429eeb1730519509b43638489a6d7517f07a4003a25cbff0f74c7966fea688908c27a22e178a4eb2901830009fd93c1e39ebe1f0f4265747bdf93ae7 Feb 13 19:56:52.347600 unknown[883]: fetched base config from "system" Feb 13 19:56:52.347772 unknown[883]: fetched base config from "system" Feb 13 19:56:52.348085 ignition[883]: fetch: fetch complete Feb 13 
19:56:52.347780 unknown[883]: fetched user config from "azure" Feb 13 19:56:52.348091 ignition[883]: fetch: fetch passed Feb 13 19:56:52.349728 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:56:52.348141 ignition[883]: Ignition finished successfully Feb 13 19:56:52.360891 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:56:52.376615 ignition[889]: Ignition 2.20.0 Feb 13 19:56:52.376626 ignition[889]: Stage: kargs Feb 13 19:56:52.378761 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:56:52.376867 ignition[889]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:52.376880 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:52.377647 ignition[889]: kargs: kargs passed Feb 13 19:56:52.377697 ignition[889]: Ignition finished successfully Feb 13 19:56:52.392580 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:56:52.407248 ignition[895]: Ignition 2.20.0 Feb 13 19:56:52.407260 ignition[895]: Stage: disks Feb 13 19:56:52.409221 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:56:52.407520 ignition[895]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:52.407533 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:56:52.408257 ignition[895]: disks: disks passed Feb 13 19:56:52.408300 ignition[895]: Ignition finished successfully Feb 13 19:56:52.424336 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:56:52.427293 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:56:52.433465 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:56:52.439123 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:56:52.446838 systemd[1]: Reached target basic.target - Basic System. 
Feb 13 19:56:52.456561 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:56:52.481729 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 19:56:52.486161 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:56:52.500547 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:56:52.593646 kernel: EXT4-fs (sda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 19:56:52.594363 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:56:52.599165 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:56:52.615538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:56:52.621060 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:56:52.631518 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (914) Feb 13 19:56:52.639853 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:56:52.639935 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:56:52.640165 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 19:56:52.646557 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:56:52.649787 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:56:52.663550 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:56:52.649830 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:56:52.655337 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:56:52.668399 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 19:56:52.669721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:56:52.826902 coreos-metadata[916]: Feb 13 19:56:52.826 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 19:56:52.833235 coreos-metadata[916]: Feb 13 19:56:52.833 INFO Fetch successful Feb 13 19:56:52.836395 coreos-metadata[916]: Feb 13 19:56:52.833 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 19:56:52.844000 coreos-metadata[916]: Feb 13 19:56:52.843 INFO Fetch successful Feb 13 19:56:52.848890 coreos-metadata[916]: Feb 13 19:56:52.847 INFO wrote hostname ci-4186.1.1-a-5a2e75f9ad to /sysroot/etc/hostname Feb 13 19:56:52.851170 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 19:56:52.864076 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:56:52.882262 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:56:52.887597 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:56:52.898190 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:56:53.132448 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:56:53.144542 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:56:53.151597 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:56:53.161476 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:56:53.160634 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:56:53.195608 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 19:56:53.199499 ignition[1033]: INFO : Ignition 2.20.0
Feb 13 19:56:53.203372 ignition[1033]: INFO : Stage: mount
Feb 13 19:56:53.203372 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:56:53.203372 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:56:53.203372 ignition[1033]: INFO : mount: mount passed
Feb 13 19:56:53.203372 ignition[1033]: INFO : Ignition finished successfully
Feb 13 19:56:53.213760 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:56:53.226498 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:56:53.235050 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:56:53.258411 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045)
Feb 13 19:56:53.262401 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:56:53.262449 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:56:53.267446 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:56:53.273404 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:56:53.274812 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:56:53.299712 ignition[1062]: INFO : Ignition 2.20.0
Feb 13 19:56:53.299712 ignition[1062]: INFO : Stage: files
Feb 13 19:56:53.304000 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:56:53.304000 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:56:53.310100 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:56:53.319472 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:56:53.319472 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:56:53.337745 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:56:53.341760 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:56:53.341760 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:56:53.338238 unknown[1062]: wrote ssh authorized keys file for user: core
Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:56:53.350590 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 19:56:53.519717 systemd-networkd[872]: enP36422s1: Gained IPv6LL
Feb 13 19:56:53.902034 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:56:53.967585 systemd-networkd[872]: eth0: Gained IPv6LL
Feb 13 19:56:54.220692 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:56:54.226611 ignition[1062]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:56:54.231285 ignition[1062]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:56:54.231285 ignition[1062]: INFO : files: files passed
Feb 13 19:56:54.231285 ignition[1062]: INFO : Ignition finished successfully
Feb 13 19:56:54.241469 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:56:54.250585 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:56:54.257416 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:56:54.260640 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:56:54.262437 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:56:54.277795 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:56:54.277795 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:56:54.289305 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:56:54.280246 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:56:54.285930 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:56:54.307677 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:56:54.345903 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:56:54.346026 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:56:54.352096 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:56:54.360393 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:56:54.365452 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:56:54.378691 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:56:54.393673 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:56:54.406584 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:56:54.423117 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:56:54.424270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:56:54.425149 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:56:54.425598 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:56:54.425745 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:56:54.426438 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:56:54.426854 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:56:54.427354 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:56:54.428189 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:56:54.428633 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:56:54.429033 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:56:54.429648 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:56:54.430066 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:56:54.430479 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:56:54.430890 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:56:54.431332 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:56:54.431527 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:56:54.432242 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:56:54.433391 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:56:54.433829 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:56:54.471628 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:56:54.524523 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:56:54.524762 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:56:54.530444 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:56:54.530623 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:56:54.538365 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:56:54.541246 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:56:54.553966 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 19:56:54.557141 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 19:56:54.569637 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:56:54.588082 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:56:54.590681 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:56:54.590892 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:56:54.594476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:56:54.594643 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:56:54.610933 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:56:54.611046 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:56:54.621544 ignition[1114]: INFO : Ignition 2.20.0
Feb 13 19:56:54.621544 ignition[1114]: INFO : Stage: umount
Feb 13 19:56:54.621544 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:56:54.621544 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:56:54.621544 ignition[1114]: INFO : umount: umount passed
Feb 13 19:56:54.621544 ignition[1114]: INFO : Ignition finished successfully
Feb 13 19:56:54.624628 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:56:54.624736 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:56:54.628988 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:56:54.629098 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:56:54.633549 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:56:54.633606 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:56:54.636804 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:56:54.636852 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:56:54.637154 systemd[1]: Stopped target network.target - Network.
Feb 13 19:56:54.637570 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:56:54.637615 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:56:54.638429 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:56:54.638926 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:56:54.686273 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:56:54.693652 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:56:54.696002 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:56:54.700551 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:56:54.702868 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:56:54.709756 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:56:54.709820 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:56:54.714576 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:56:54.714649 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:56:54.719691 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:56:54.719750 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:56:54.725063 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:56:54.729761 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:56:54.735940 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:56:54.743917 systemd-networkd[872]: eth0: DHCPv6 lease lost
Feb 13 19:56:54.745887 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:56:54.746035 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:56:54.750793 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:56:54.750882 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:56:54.768565 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:56:54.773306 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:56:54.773402 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:56:54.777130 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:56:54.784088 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:56:54.784201 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:56:54.795110 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:56:54.795297 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:56:54.803953 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:56:54.804046 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:56:54.813237 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:56:54.815905 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:56:54.818795 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:56:54.818850 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:56:54.831120 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:56:54.831191 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:56:54.836125 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:56:54.836180 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:56:54.852579 kernel: hv_netvsc 7c1e5236-d83d-7c1e-5236-d83d7c1e5236 eth0: Data path switched from VF: enP36422s1
Feb 13 19:56:54.851947 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:56:54.852911 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:56:54.852973 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:56:54.853361 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:56:54.853408 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:56:54.853764 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:56:54.853799 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:56:54.854186 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:56:54.854222 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:56:54.890742 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:56:54.890843 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:56:54.894288 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:56:54.894344 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:56:54.899918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:56:54.899992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:56:54.918065 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:56:54.918228 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:56:54.927835 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:56:54.927958 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:56:55.005953 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:56:55.006155 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:56:55.012104 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:56:55.019571 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:56:55.019663 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:56:55.032682 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:56:55.053278 systemd[1]: Switching root.
Feb 13 19:56:55.094783 systemd-journald[177]: Journal stopped
Feb 13 19:56:57.625157 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:56:57.625191 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:56:57.625203 kernel: SELinux: policy capability open_perms=1
Feb 13 19:56:57.625215 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:56:57.625223 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:56:57.625233 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:56:57.625244 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:56:57.625258 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:56:57.625267 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:56:57.625278 kernel: audit: type=1403 audit(1739476615.933:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:56:57.625288 systemd[1]: Successfully loaded SELinux policy in 78.715ms.
Feb 13 19:56:57.625301 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.160ms.
Feb 13 19:56:57.625312 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:56:57.625324 systemd[1]: Detected virtualization microsoft.
Feb 13 19:56:57.625337 systemd[1]: Detected architecture x86-64.
Feb 13 19:56:57.625346 systemd[1]: Detected first boot.
Feb 13 19:56:57.625356 systemd[1]: Hostname set to .
Feb 13 19:56:57.625366 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:56:57.625378 zram_generator::config[1157]: No configuration found.
Feb 13 19:56:57.625462 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:56:57.625473 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:56:57.625486 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:56:57.625496 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:56:57.625506 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:56:57.625519 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:56:57.625529 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:56:57.625546 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:56:57.625556 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:56:57.625569 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:56:57.625579 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:56:57.625589 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:56:57.625599 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:56:57.625609 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:56:57.625619 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:56:57.625631 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:56:57.625641 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:56:57.625651 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:56:57.625661 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:56:57.625673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:56:57.625685 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:56:57.625699 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:56:57.625711 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:56:57.625725 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:56:57.625738 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:56:57.625749 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:56:57.625759 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:56:57.625771 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:56:57.625781 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:56:57.625794 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:56:57.625807 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:56:57.625821 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:56:57.625834 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:56:57.625846 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:56:57.625857 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:56:57.625872 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:56:57.625884 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:56:57.625896 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:56:57.625908 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:56:57.625919 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:56:57.625932 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:56:57.625946 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:56:57.625957 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:56:57.625972 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:56:57.625984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:56:57.625994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:56:57.626008 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:56:57.626018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:56:57.626031 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:56:57.626041 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:56:57.626054 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:56:57.626065 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:56:57.626081 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:56:57.626092 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:56:57.626104 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:56:57.626115 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:56:57.626127 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:56:57.626138 kernel: ACPI: bus type drm_connector registered
Feb 13 19:56:57.626149 kernel: fuse: init (API version 7.39)
Feb 13 19:56:57.626159 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:56:57.626175 kernel: loop: module loaded
Feb 13 19:56:57.626187 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:56:57.626199 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:56:57.626213 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:56:57.626242 systemd-journald[1263]: Collecting audit messages is disabled.
Feb 13 19:56:57.626273 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:56:57.626284 systemd-journald[1263]: Journal started
Feb 13 19:56:57.626310 systemd-journald[1263]: Runtime Journal (/run/log/journal/8303b335ffcd40c1ab8676d526fcf0c0) is 8.0M, max 158.8M, 150.8M free.
Feb 13 19:56:56.976887 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:56:57.053089 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 19:56:57.053546 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:56:57.636411 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:56:57.636455 systemd[1]: Stopped verity-setup.service.
Feb 13 19:56:57.647410 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:56:57.654294 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:56:57.655559 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:56:57.658347 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:56:57.661599 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:56:57.664303 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:56:57.667364 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:56:57.670668 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:56:57.673740 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:56:57.677292 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:56:57.681615 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:56:57.681863 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:56:57.685788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:56:57.686047 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:56:57.689933 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:56:57.690251 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:56:57.693865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:56:57.694170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:56:57.698008 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:56:57.698301 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:56:57.702032 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:56:57.702350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:56:57.706135 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:56:57.709828 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:56:57.713927 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:56:57.732819 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:56:57.741568 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:56:57.747735 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:56:57.750782 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:56:57.750868 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:56:57.755069 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:56:57.762984 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:56:57.769881 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:56:57.772562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:56:57.811734 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:56:57.816268 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:56:57.820498 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:56:57.827715 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:56:57.831581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:56:57.834532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:56:57.846211 systemd-journald[1263]: Time spent on flushing to /var/log/journal/8303b335ffcd40c1ab8676d526fcf0c0 is 45.993ms for 938 entries.
Feb 13 19:56:57.846211 systemd-journald[1263]: System Journal (/var/log/journal/8303b335ffcd40c1ab8676d526fcf0c0) is 8.0M, max 2.6G, 2.6G free.
Feb 13 19:56:57.959344 systemd-journald[1263]: Received client request to flush runtime journal.
Feb 13 19:56:57.959506 kernel: loop0: detected capacity change from 0 to 28304
Feb 13 19:56:57.850540 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:56:57.855594 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:56:57.869256 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:56:57.875113 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:56:57.880894 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:56:57.885643 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:56:57.893867 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:56:57.905003 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:56:57.919369 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:56:57.931628 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:56:57.939510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:56:57.965081 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:56:57.974028 udevadm[1304]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:56:58.022056 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Feb 13 19:56:58.022083 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Feb 13 19:56:58.031706 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:56:58.033637 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:56:58.041847 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:56:58.057617 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:56:58.079418 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:56:58.111456 kernel: loop1: detected capacity change from 0 to 218376
Feb 13 19:56:58.144500 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 19:56:58.169230 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:56:58.185586 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:56:58.209965 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Feb 13 19:56:58.209991 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Feb 13 19:56:58.217981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:56:58.293425 kernel: loop3: detected capacity change from 0 to 141000
Feb 13 19:56:58.454578 kernel: loop4: detected capacity change from 0 to 28304
Feb 13 19:56:58.475813 kernel: loop5: detected capacity change from 0 to 218376
Feb 13 19:56:58.497424 kernel: loop6: detected capacity change from 0 to 138184
Feb 13 19:56:58.517421 kernel: loop7: detected capacity change from 0 to 141000
Feb 13 19:56:58.532993 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Feb 13 19:56:58.533693 (sd-merge)[1321]: Merged extensions into '/usr'.
Feb 13 19:56:58.546667 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:56:58.546868 systemd[1]: Reloading...
Feb 13 19:56:58.631450 zram_generator::config[1344]: No configuration found.
Feb 13 19:56:58.865968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:56:58.966731 systemd[1]: Reloading finished in 419 ms.
Feb 13 19:56:59.000117 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:56:59.009595 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:56:59.017657 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:56:59.061986 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:56:59.062533 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:56:59.064807 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:56:59.066038 systemd-tmpfiles[1406]: ACLs are not supported, ignoring.
Feb 13 19:56:59.066796 systemd-tmpfiles[1406]: ACLs are not supported, ignoring.
Feb 13 19:56:59.076440 systemd[1]: Reloading requested from client PID 1405 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:56:59.076594 systemd[1]: Reloading...
Feb 13 19:56:59.094098 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:56:59.094112 systemd-tmpfiles[1406]: Skipping /boot
Feb 13 19:56:59.135050 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:56:59.135288 systemd-tmpfiles[1406]: Skipping /boot
Feb 13 19:56:59.226414 zram_generator::config[1438]: No configuration found.
Feb 13 19:56:59.359888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:56:59.434287 systemd[1]: Reloading finished in 357 ms.
Feb 13 19:56:59.452049 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:56:59.459926 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:56:59.484598 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:56:59.491593 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:56:59.498779 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:56:59.506581 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:56:59.519628 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:56:59.529583 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:56:59.551950 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:56:59.552214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:56:59.563721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:56:59.572694 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:56:59.584957 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:56:59.590023 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:56:59.597354 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:56:59.600063 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:56:59.605996 systemd-udevd[1505]: Using default interface naming scheme 'v255'.
Feb 13 19:56:59.609365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:56:59.609902 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:56:59.616218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:56:59.617476 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:56:59.621930 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:56:59.622122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:56:59.638284 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:56:59.639703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:56:59.647486 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:56:59.662645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:56:59.674729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:56:59.678596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:56:59.678776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:56:59.685889 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:56:59.692368 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:56:59.706286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:56:59.707025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:56:59.715830 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:56:59.721093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:56:59.721297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:56:59.723933 augenrules[1535]: No rules
Feb 13 19:56:59.725450 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:56:59.725630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:56:59.729354 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:56:59.729676 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:56:59.754513 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Feb 13 19:56:59.758231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:56:59.763679 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:56:59.768857 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:56:59.776673 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:56:59.795719 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:56:59.814463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:56:59.835531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:56:59.841887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:56:59.846702 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:56:59.849743 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:56:59.856379 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:56:59.862622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:56:59.863024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:56:59.875309 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:56:59.882336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:56:59.882586 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:56:59.895977 systemd-resolved[1501]: Positive Trust Anchors:
Feb 13 19:56:59.895999 systemd-resolved[1501]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:56:59.896054 systemd-resolved[1501]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:56:59.905941 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:56:59.906200 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:56:59.913892 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:56:59.914240 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:56:59.939097 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:56:59.943074 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:56:59.943163 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:56:59.947844 systemd-resolved[1501]: Using system hostname 'ci-4186.1.1-a-5a2e75f9ad'.
Feb 13 19:56:59.957851 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:56:59.963880 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:56:59.974502 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:56:59.975210 augenrules[1546]: /sbin/augenrules: No change
Feb 13 19:56:59.998780 ldconfig[1288]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:57:00.008725 augenrules[1596]: No rules
Feb 13 19:57:00.010915 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:57:00.011190 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:57:00.014779 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:57:00.032610 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:57:00.074360 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:57:00.084184 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:57:00.098403 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:57:00.101935 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:57:00.125414 kernel: hv_vmbus: registering driver hv_balloon
Feb 13 19:57:00.130532 kernel: hv_vmbus: registering driver hyperv_fb
Feb 13 19:57:00.139416 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 13 19:57:00.139517 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 13 19:57:00.142475 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 13 19:57:00.155617 kernel: Console: switching to colour dummy device 80x25
Feb 13 19:57:00.159410 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 19:57:00.174707 systemd-networkd[1583]: lo: Link UP
Feb 13 19:57:00.174720 systemd-networkd[1583]: lo: Gained carrier
Feb 13 19:57:00.181531 systemd-networkd[1583]: Enumeration completed
Feb 13 19:57:00.181697 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:57:00.185986 systemd[1]: Reached target network.target - Network.
Feb 13 19:57:00.188666 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:57:00.188678 systemd-networkd[1583]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:57:00.197598 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:57:00.297574 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Feb 13 19:57:00.397226 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1551)
Feb 13 19:57:00.401430 kernel: mlx5_core 8e46:00:02.0 enP36422s1: Link up
Feb 13 19:57:00.403797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:57:00.421468 kernel: hv_netvsc 7c1e5236-d83d-7c1e-5236-d83d7c1e5236 eth0: Data path switched to VF: enP36422s1
Feb 13 19:57:00.424249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:57:00.424524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:57:00.434175 systemd-networkd[1583]: enP36422s1: Link UP
Feb 13 19:57:00.435497 systemd-networkd[1583]: eth0: Link UP
Feb 13 19:57:00.435503 systemd-networkd[1583]: eth0: Gained carrier
Feb 13 19:57:00.435533 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:57:00.437978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:57:00.441377 systemd-networkd[1583]: enP36422s1: Gained carrier
Feb 13 19:57:00.482560 systemd-networkd[1583]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 13 19:57:00.500723 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:57:00.500978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:57:00.515591 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Feb 13 19:57:00.522512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:57:00.598661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 19:57:00.610618 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:57:00.638759 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:57:00.649514 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:57:00.656734 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:57:00.679956 lvm[1694]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:57:00.717933 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:57:00.722069 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:57:00.732850 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:57:00.738499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:57:00.749893 lvm[1698]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:57:00.743115 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:57:00.746591 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:57:00.750754 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:57:00.755553 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:57:00.758668 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:57:00.761958 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:57:00.765581 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:57:00.765626 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:57:00.767904 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:57:00.774451 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:57:00.779342 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:57:00.786379 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:57:00.790051 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:57:00.793457 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:57:00.797022 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:57:00.799492 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:57:00.801974 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:57:00.802013 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:57:00.810518 systemd[1]: Starting chronyd.service - NTP client/server...
Feb 13 19:57:00.814595 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:57:00.820584 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:57:00.837225 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:57:00.842982 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:57:00.857147 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:57:00.861166 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:57:00.861238 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Feb 13 19:57:00.865568 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Feb 13 19:57:00.866879 (chronyd)[1703]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Feb 13 19:57:00.869166 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Feb 13 19:57:00.870626 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:57:00.878607 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:57:00.881852 chronyd[1717]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Feb 13 19:57:00.890588 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:57:00.897567 KVP[1712]: KVP starting; pid is:1712
Feb 13 19:57:00.902644 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:57:00.906116 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:57:00.906799 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:57:00.908320 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:57:00.915513 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:57:00.922232 jq[1707]: false
Feb 13 19:57:00.925642 kernel: hv_utils: KVP IC version 4.0
Feb 13 19:57:00.925226 KVP[1712]: KVP LIC Version: 3.1
Feb 13 19:57:00.927603 chronyd[1717]: Timezone right/UTC failed leap second check, ignoring
Feb 13 19:57:00.927900 chronyd[1717]: Loaded seccomp filter (level 2)
Feb 13 19:57:00.931975 systemd[1]: Started chronyd.service - NTP client/server.
Feb 13 19:57:00.938067 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:57:00.938374 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:57:00.944205 extend-filesystems[1711]: Found loop4
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found loop5
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found loop6
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found loop7
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found sda
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found sda1
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found sda2
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found sda3
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found usr
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found sda4
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found sda6
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found sda7
Feb 13 19:57:00.950065 extend-filesystems[1711]: Found sda9
Feb 13 19:57:00.950065 extend-filesystems[1711]: Checking size of /dev/sda9
Feb 13 19:57:00.965475 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:57:00.965753 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:57:01.007289 jq[1721]: true
Feb 13 19:57:01.035605 (ntainerd)[1731]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:57:01.041548 extend-filesystems[1711]: Old size kept for /dev/sda9
Feb 13 19:57:01.045717 extend-filesystems[1711]: Found sr0
Feb 13 19:57:01.049555 update_engine[1720]: I20250213 19:57:01.043661 1720 main.cc:92] Flatcar Update Engine starting
Feb 13 19:57:01.047615 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:57:01.047870 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:57:01.056840 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:57:01.057431 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:57:01.059746 jq[1737]: true
Feb 13 19:57:01.062760 systemd-logind[1719]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:57:01.063436 systemd-logind[1719]: New seat seat0.
Feb 13 19:57:01.069917 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:57:01.073692 dbus-daemon[1706]: [system] SELinux support is enabled
Feb 13 19:57:01.075955 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:57:01.090141 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:57:01.090273 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:57:01.096122 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:57:01.096160 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:57:01.103224 update_engine[1720]: I20250213 19:57:01.101336 1720 update_check_scheduler.cc:74] Next update check in 3m23s
Feb 13 19:57:01.108093 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:57:01.111318 dbus-daemon[1706]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 19:57:01.127748 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:57:01.170426 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1565)
Feb 13 19:57:01.189342 coreos-metadata[1705]: Feb 13 19:57:01.185 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 19:57:01.191569 coreos-metadata[1705]: Feb 13 19:57:01.191 INFO Fetch successful
Feb 13 19:57:01.191957 coreos-metadata[1705]: Feb 13 19:57:01.191 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Feb 13 19:57:01.198323 coreos-metadata[1705]: Feb 13 19:57:01.198 INFO Fetch successful
Feb 13 19:57:01.198478 coreos-metadata[1705]: Feb 13 19:57:01.198 INFO Fetching http://168.63.129.16/machine/e37a74e5-a101-4006-8141-d2718cb8497f/46e0661e%2D73a6%2D4b81%2D84d7%2D399b93d24d13.%5Fci%2D4186.1.1%2Da%2D5a2e75f9ad?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Feb 13 19:57:01.200544 coreos-metadata[1705]: Feb 13 19:57:01.200 INFO Fetch successful
Feb 13 19:57:01.200818 coreos-metadata[1705]: Feb 13 19:57:01.200 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Feb 13 19:57:01.214108 bash[1772]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:57:01.216881 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:57:01.226338 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 19:57:01.227575 coreos-metadata[1705]: Feb 13 19:57:01.227 INFO Fetch successful
Feb 13 19:57:01.308494 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 19:57:01.316806 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:57:01.419442 locksmithd[1758]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:57:01.583594 systemd-networkd[1583]: eth0: Gained IPv6LL
Feb 13 19:57:01.589627 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:57:01.598332 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:57:01.613625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:57:01.619707 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:57:01.640197 sshd_keygen[1735]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:57:01.695554 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:57:01.700308 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:57:01.714532 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:57:01.718135 containerd[1731]: time="2025-02-13T19:57:01.717651500Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:57:01.722595 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Feb 13 19:57:01.734344 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:57:01.734606 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:57:01.751350 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:57:01.774224 containerd[1731]: time="2025-02-13T19:57:01.774137300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:57:01.776347 containerd[1731]: time="2025-02-13T19:57:01.776298800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:57:01.776481 containerd[1731]: time="2025-02-13T19:57:01.776461100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:57:01.776559 containerd[1731]: time="2025-02-13T19:57:01.776544100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:57:01.776850 containerd[1731]: time="2025-02-13T19:57:01.776825700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:57:01.776956 containerd[1731]: time="2025-02-13T19:57:01.776940300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:57:01.777147 containerd[1731]: time="2025-02-13T19:57:01.777116100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:57:01.777436 containerd[1731]: time="2025-02-13T19:57:01.777206500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:57:01.777592 containerd[1731]: time="2025-02-13T19:57:01.777567700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:57:01.777686 containerd[1731]: time="2025-02-13T19:57:01.777667300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:57:01.777769 containerd[1731]: time="2025-02-13T19:57:01.777751300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:57:01.777834 containerd[1731]: time="2025-02-13T19:57:01.777820400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:57:01.778469 containerd[1731]: time="2025-02-13T19:57:01.777992700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:57:01.778469 containerd[1731]: time="2025-02-13T19:57:01.778289700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:57:01.778679 containerd[1731]: time="2025-02-13T19:57:01.778655100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:57:01.778749 containerd[1731]: time="2025-02-13T19:57:01.778734700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:57:01.778927 containerd[1731]: time="2025-02-13T19:57:01.778903600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:57:01.779081 containerd[1731]: time="2025-02-13T19:57:01.779062100Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:57:01.796162 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Feb 13 19:57:01.797674 containerd[1731]: time="2025-02-13T19:57:01.797494800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:57:01.797674 containerd[1731]: time="2025-02-13T19:57:01.797594200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:57:01.797674 containerd[1731]: time="2025-02-13T19:57:01.797615500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:57:01.798113 containerd[1731]: time="2025-02-13T19:57:01.797800900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:57:01.798113 containerd[1731]: time="2025-02-13T19:57:01.797850800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:57:01.798113 containerd[1731]: time="2025-02-13T19:57:01.798001100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798671100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798799300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798819200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798836400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798855500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798884300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798899600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798915200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798933000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798952300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798966900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.798981000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.799004700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:57:01.799105 containerd[1731]: time="2025-02-13T19:57:01.799021600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:57:01.799593 containerd[1731]: time="2025-02-13T19:57:01.799035000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..."
type=io.containerd.grpc.v1 Feb 13 19:57:01.799593 containerd[1731]: time="2025-02-13T19:57:01.799050000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.799593 containerd[1731]: time="2025-02-13T19:57:01.799066700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.799593 containerd[1731]: time="2025-02-13T19:57:01.799083900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.799943 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800686500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800715000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800731600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800751500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800767400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800784300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800798900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800817300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800844000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800861700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.801023 containerd[1731]: time="2025-02-13T19:57:01.800874400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:57:01.802008 containerd[1731]: time="2025-02-13T19:57:01.801301700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:57:01.802008 containerd[1731]: time="2025-02-13T19:57:01.801329800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:57:01.802008 containerd[1731]: time="2025-02-13T19:57:01.801421200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:57:01.802008 containerd[1731]: time="2025-02-13T19:57:01.801440100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:57:01.802008 containerd[1731]: time="2025-02-13T19:57:01.801452000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.802008 containerd[1731]: time="2025-02-13T19:57:01.801467300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 19:57:01.802008 containerd[1731]: time="2025-02-13T19:57:01.801479300Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:57:01.802008 containerd[1731]: time="2025-02-13T19:57:01.801498500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:57:01.802238 containerd[1731]: time="2025-02-13T19:57:01.801799800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:57:01.802238 containerd[1731]: time="2025-02-13T19:57:01.801850600Z" level=info msg="Connect containerd service" Feb 13 19:57:01.802238 containerd[1731]: time="2025-02-13T19:57:01.801881700Z" level=info msg="using legacy CRI server" Feb 13 19:57:01.802238 containerd[1731]: time="2025-02-13T19:57:01.801889600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:57:01.802623 containerd[1731]: time="2025-02-13T19:57:01.802602100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:57:01.803330 containerd[1731]: time="2025-02-13T19:57:01.803300900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:57:01.804406 containerd[1731]: time="2025-02-13T19:57:01.803842700Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 19:57:01.804406 containerd[1731]: time="2025-02-13T19:57:01.803913900Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:57:01.804406 containerd[1731]: time="2025-02-13T19:57:01.804000500Z" level=info msg="Start subscribing containerd event" Feb 13 19:57:01.804406 containerd[1731]: time="2025-02-13T19:57:01.804051200Z" level=info msg="Start recovering state" Feb 13 19:57:01.804406 containerd[1731]: time="2025-02-13T19:57:01.804131300Z" level=info msg="Start event monitor" Feb 13 19:57:01.804406 containerd[1731]: time="2025-02-13T19:57:01.804150400Z" level=info msg="Start snapshots syncer" Feb 13 19:57:01.804406 containerd[1731]: time="2025-02-13T19:57:01.804164300Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:57:01.804406 containerd[1731]: time="2025-02-13T19:57:01.804175500Z" level=info msg="Start streaming server" Feb 13 19:57:01.804406 containerd[1731]: time="2025-02-13T19:57:01.804248300Z" level=info msg="containerd successfully booted in 0.091068s" Feb 13 19:57:01.804808 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:57:01.821701 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:57:01.834884 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:57:01.838538 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 19:57:01.904706 systemd-networkd[1583]: enP36422s1: Gained IPv6LL Feb 13 19:57:02.734739 waagent[1864]: 2025-02-13T19:57:02.734287Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 19:57:02.738977 waagent[1864]: 2025-02-13T19:57:02.737764Z INFO Daemon Daemon OS: flatcar 4186.1.1 Feb 13 19:57:02.740333 waagent[1864]: 2025-02-13T19:57:02.740263Z INFO Daemon Daemon Python: 3.11.10 Feb 13 19:57:02.743131 waagent[1864]: 2025-02-13T19:57:02.742683Z INFO Daemon Daemon Run daemon Feb 13 19:57:02.745649 waagent[1864]: 2025-02-13T19:57:02.745042Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.1' Feb 13 19:57:02.750403 waagent[1864]: 2025-02-13T19:57:02.749538Z INFO Daemon Daemon Using waagent for provisioning Feb 13 19:57:02.752629 waagent[1864]: 2025-02-13T19:57:02.752552Z INFO Daemon Daemon Activate resource disk Feb 13 19:57:02.755344 waagent[1864]: 2025-02-13T19:57:02.755115Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 19:57:02.767263 waagent[1864]: 2025-02-13T19:57:02.765962Z INFO Daemon Daemon Found device: None Feb 13 19:57:02.769973 waagent[1864]: 2025-02-13T19:57:02.768340Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 19:57:02.772414 waagent[1864]: 2025-02-13T19:57:02.772308Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 19:57:02.778274 waagent[1864]: 2025-02-13T19:57:02.778217Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 19:57:02.780987 waagent[1864]: 2025-02-13T19:57:02.780922Z INFO Daemon Daemon Running default provisioning handler Feb 13 19:57:02.794958 waagent[1864]: 2025-02-13T19:57:02.793532Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' 
returned non-zero exit status 4. Feb 13 19:57:02.800743 waagent[1864]: 2025-02-13T19:57:02.800685Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 19:57:02.805169 waagent[1864]: 2025-02-13T19:57:02.805105Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 19:57:02.808461 waagent[1864]: 2025-02-13T19:57:02.807452Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 19:57:02.854798 waagent[1864]: 2025-02-13T19:57:02.854243Z INFO Daemon Daemon Successfully mounted dvd Feb 13 19:57:02.874313 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 19:57:02.877167 waagent[1864]: 2025-02-13T19:57:02.875729Z INFO Daemon Daemon Detect protocol endpoint Feb 13 19:57:02.878617 waagent[1864]: 2025-02-13T19:57:02.878471Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 19:57:02.881447 waagent[1864]: 2025-02-13T19:57:02.881377Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 13 19:57:02.884684 waagent[1864]: 2025-02-13T19:57:02.884593Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 19:57:02.887304 waagent[1864]: 2025-02-13T19:57:02.887245Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 19:57:02.889625 waagent[1864]: 2025-02-13T19:57:02.889573Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 19:57:02.919786 waagent[1864]: 2025-02-13T19:57:02.919711Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 19:57:02.923769 waagent[1864]: 2025-02-13T19:57:02.923260Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 19:57:02.927382 waagent[1864]: 2025-02-13T19:57:02.926057Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 19:57:02.944252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:02.948140 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 19:57:02.952082 systemd[1]: Startup finished in 536ms (firmware) + 6.931s (loader) + 1.138s (kernel) + 7.375s (initrd) + 7.095s (userspace) = 23.077s. Feb 13 19:57:02.961784 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:57:02.988980 waagent[1864]: 2025-02-13T19:57:02.988775Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 19:57:02.990265 agetty[1867]: failed to open credentials directory Feb 13 19:57:02.992814 agetty[1868]: failed to open credentials directory Feb 13 19:57:02.994934 waagent[1864]: 2025-02-13T19:57:02.994048Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 19:57:03.013103 waagent[1864]: 2025-02-13T19:57:03.012841Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 19:57:03.045286 waagent[1864]: 2025-02-13T19:57:03.043515Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 19:57:03.057338 waagent[1864]: 2025-02-13T19:57:03.051343Z INFO Daemon Feb 13 19:57:03.059328 waagent[1864]: 2025-02-13T19:57:03.059260Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 66e451a1-cdd0-4805-811e-4430ad983b3d eTag: 5489419556470388835 source: Fabric] Feb 13 19:57:03.061938 waagent[1864]: 2025-02-13T19:57:03.061887Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Feb 13 19:57:03.068321 waagent[1864]: 2025-02-13T19:57:03.068116Z INFO Daemon Feb 13 19:57:03.073530 waagent[1864]: 2025-02-13T19:57:03.073271Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 19:57:03.094998 waagent[1864]: 2025-02-13T19:57:03.094191Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 19:57:03.153472 login[1867]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 19:57:03.154838 login[1868]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 19:57:03.169914 systemd-logind[1719]: New session 1 of user core. Feb 13 19:57:03.172044 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:57:03.178682 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:57:03.186765 systemd-logind[1719]: New session 2 of user core. Feb 13 19:57:03.199353 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:57:03.207804 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:57:03.221220 (systemd)[1895]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:57:03.260549 waagent[1864]: 2025-02-13T19:57:03.259030Z INFO Daemon Downloaded certificate {'thumbprint': '7B21E1495146170A8D3A8B0C68E9DE32E4DF0663', 'hasPrivateKey': False} Feb 13 19:57:03.264839 waagent[1864]: 2025-02-13T19:57:03.264733Z INFO Daemon Downloaded certificate {'thumbprint': '0A3ADD4F11B6909B11496E79E72AE07EE8FB5CF2', 'hasPrivateKey': True} Feb 13 19:57:03.270990 waagent[1864]: 2025-02-13T19:57:03.270094Z INFO Daemon Fetch goal state completed Feb 13 19:57:03.305892 waagent[1864]: 2025-02-13T19:57:03.305796Z INFO Daemon Daemon Starting provisioning Feb 13 19:57:03.309427 waagent[1864]: 2025-02-13T19:57:03.308598Z INFO Daemon Daemon Handle ovf-env.xml. 
Feb 13 19:57:03.311335 waagent[1864]: 2025-02-13T19:57:03.311253Z INFO Daemon Daemon Set hostname [ci-4186.1.1-a-5a2e75f9ad] Feb 13 19:57:03.358843 waagent[1864]: 2025-02-13T19:57:03.358714Z INFO Daemon Daemon Publish hostname [ci-4186.1.1-a-5a2e75f9ad] Feb 13 19:57:03.364167 waagent[1864]: 2025-02-13T19:57:03.364062Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 19:57:03.369409 waagent[1864]: 2025-02-13T19:57:03.368171Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 19:57:03.397867 systemd[1895]: Queued start job for default target default.target. Feb 13 19:57:03.399570 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:57:03.419755 waagent[1864]: 2025-02-13T19:57:03.401357Z INFO Daemon Daemon Create user account if not exists Feb 13 19:57:03.419755 waagent[1864]: 2025-02-13T19:57:03.404557Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 19:57:03.419755 waagent[1864]: 2025-02-13T19:57:03.405719Z INFO Daemon Daemon Configure sudoer Feb 13 19:57:03.419755 waagent[1864]: 2025-02-13T19:57:03.407335Z INFO Daemon Daemon Configure sshd Feb 13 19:57:03.419755 waagent[1864]: 2025-02-13T19:57:03.408617Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 19:57:03.419755 waagent[1864]: 2025-02-13T19:57:03.409306Z INFO Daemon Daemon Deploy ssh public key. Feb 13 19:57:03.399576 systemd-networkd[1583]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:57:03.399636 systemd-networkd[1583]: eth0: DHCP lease lost Feb 13 19:57:03.418674 systemd-networkd[1583]: eth0: DHCPv6 lease lost Feb 13 19:57:03.422080 systemd[1895]: Created slice app.slice - User Application Slice. Feb 13 19:57:03.422289 systemd[1895]: Reached target paths.target - Paths. 
Feb 13 19:57:03.422429 systemd[1895]: Reached target timers.target - Timers. Feb 13 19:57:03.428546 systemd[1895]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:57:03.457178 systemd-networkd[1583]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 19:57:03.457793 systemd[1895]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:57:03.457882 systemd[1895]: Reached target sockets.target - Sockets. Feb 13 19:57:03.457903 systemd[1895]: Reached target basic.target - Basic System. Feb 13 19:57:03.457957 systemd[1895]: Reached target default.target - Main User Target. Feb 13 19:57:03.457994 systemd[1895]: Startup finished in 225ms. Feb 13 19:57:03.458814 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:57:03.465138 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:57:03.466247 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:57:03.794417 kubelet[1884]: E0213 19:57:03.794315 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:57:03.797094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:57:03.797302 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:57:04.508882 waagent[1864]: 2025-02-13T19:57:04.508791Z INFO Daemon Daemon Provisioning complete Feb 13 19:57:04.523184 waagent[1864]: 2025-02-13T19:57:04.523107Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 19:57:04.529838 waagent[1864]: 2025-02-13T19:57:04.524320Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 13 19:57:04.529838 waagent[1864]: 2025-02-13T19:57:04.525230Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 19:57:04.675746 waagent[1943]: 2025-02-13T19:57:04.675623Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 19:57:04.676208 waagent[1943]: 2025-02-13T19:57:04.675827Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.1 Feb 13 19:57:04.676208 waagent[1943]: 2025-02-13T19:57:04.675912Z INFO ExtHandler ExtHandler Python: 3.11.10 Feb 13 19:57:04.693703 waagent[1943]: 2025-02-13T19:57:04.693612Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 19:57:04.693916 waagent[1943]: 2025-02-13T19:57:04.693867Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 19:57:04.694016 waagent[1943]: 2025-02-13T19:57:04.693974Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 19:57:04.702028 waagent[1943]: 2025-02-13T19:57:04.701958Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 19:57:04.708684 waagent[1943]: 2025-02-13T19:57:04.708626Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 19:57:04.709215 waagent[1943]: 2025-02-13T19:57:04.709158Z INFO ExtHandler Feb 13 19:57:04.709302 waagent[1943]: 2025-02-13T19:57:04.709255Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8fc085f3-242d-4823-9051-bf3a68846fde eTag: 5489419556470388835 source: Fabric] Feb 13 19:57:04.709653 waagent[1943]: 2025-02-13T19:57:04.709600Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 13 19:57:04.710240 waagent[1943]: 2025-02-13T19:57:04.710182Z INFO ExtHandler Feb 13 19:57:04.710305 waagent[1943]: 2025-02-13T19:57:04.710266Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 19:57:04.713380 waagent[1943]: 2025-02-13T19:57:04.713330Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 19:57:04.786705 waagent[1943]: 2025-02-13T19:57:04.786538Z INFO ExtHandler Downloaded certificate {'thumbprint': '7B21E1495146170A8D3A8B0C68E9DE32E4DF0663', 'hasPrivateKey': False} Feb 13 19:57:04.787123 waagent[1943]: 2025-02-13T19:57:04.787065Z INFO ExtHandler Downloaded certificate {'thumbprint': '0A3ADD4F11B6909B11496E79E72AE07EE8FB5CF2', 'hasPrivateKey': True} Feb 13 19:57:04.787634 waagent[1943]: 2025-02-13T19:57:04.787581Z INFO ExtHandler Fetch goal state completed Feb 13 19:57:04.801910 waagent[1943]: 2025-02-13T19:57:04.801848Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1943 Feb 13 19:57:04.802073 waagent[1943]: 2025-02-13T19:57:04.802030Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 19:57:04.803740 waagent[1943]: 2025-02-13T19:57:04.803686Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 19:57:04.804167 waagent[1943]: 2025-02-13T19:57:04.804118Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 19:57:04.817087 waagent[1943]: 2025-02-13T19:57:04.817048Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 19:57:04.817278 waagent[1943]: 2025-02-13T19:57:04.817233Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 19:57:04.825081 waagent[1943]: 2025-02-13T19:57:04.824811Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Feb 13 19:57:04.833048 systemd[1]: Reloading requested from client PID 1958 ('systemctl') (unit waagent.service)... Feb 13 19:57:04.833068 systemd[1]: Reloading... Feb 13 19:57:04.940421 zram_generator::config[1995]: No configuration found. Feb 13 19:57:05.060951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:57:05.150503 systemd[1]: Reloading finished in 316 ms. Feb 13 19:57:05.178830 waagent[1943]: 2025-02-13T19:57:05.178711Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 19:57:05.185584 systemd[1]: Reloading requested from client PID 2049 ('systemctl') (unit waagent.service)... Feb 13 19:57:05.185611 systemd[1]: Reloading... Feb 13 19:57:05.263463 zram_generator::config[2079]: No configuration found. Feb 13 19:57:05.405519 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:57:05.496772 systemd[1]: Reloading finished in 310 ms. Feb 13 19:57:05.524434 waagent[1943]: 2025-02-13T19:57:05.521745Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 19:57:05.524434 waagent[1943]: 2025-02-13T19:57:05.522014Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 19:57:05.656590 waagent[1943]: 2025-02-13T19:57:05.656404Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 19:57:05.657243 waagent[1943]: 2025-02-13T19:57:05.657170Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 19:57:05.658206 waagent[1943]: 2025-02-13T19:57:05.658150Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 19:57:05.658337 waagent[1943]: 2025-02-13T19:57:05.658290Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 19:57:05.658514 waagent[1943]: 2025-02-13T19:57:05.658464Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 19:57:05.658983 waagent[1943]: 2025-02-13T19:57:05.658928Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 19:57:05.659158 waagent[1943]: 2025-02-13T19:57:05.659108Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 19:57:05.660325 waagent[1943]: 2025-02-13T19:57:05.660271Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 19:57:05.660452 waagent[1943]: 2025-02-13T19:57:05.660370Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 19:57:05.660533 waagent[1943]: 2025-02-13T19:57:05.660448Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Feb 13 19:57:05.660971 waagent[1943]: 2025-02-13T19:57:05.660918Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 19:57:05.661218 waagent[1943]: 2025-02-13T19:57:05.661173Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 19:57:05.661586 waagent[1943]: 2025-02-13T19:57:05.661531Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 19:57:05.661586 waagent[1943]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 19:57:05.661586 waagent[1943]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 19:57:05.661586 waagent[1943]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 19:57:05.661586 waagent[1943]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 19:57:05.661586 waagent[1943]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 19:57:05.661586 waagent[1943]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 19:57:05.661838 waagent[1943]: 2025-02-13T19:57:05.661593Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 13 19:57:05.662188 waagent[1943]: 2025-02-13T19:57:05.662139Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 19:57:05.662407 waagent[1943]: 2025-02-13T19:57:05.662335Z INFO EnvHandler ExtHandler Configure routes Feb 13 19:57:05.662511 waagent[1943]: 2025-02-13T19:57:05.662468Z INFO EnvHandler ExtHandler Gateway:None Feb 13 19:57:05.662608 waagent[1943]: 2025-02-13T19:57:05.662567Z INFO EnvHandler ExtHandler Routes:None Feb 13 19:57:05.668715 waagent[1943]: 2025-02-13T19:57:05.668669Z INFO ExtHandler ExtHandler Feb 13 19:57:05.669402 waagent[1943]: 2025-02-13T19:57:05.669316Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: da1d3b4c-2a1d-460c-9638-d6ca6f825049 correlation 220f4892-95a9-450a-9f21-c59647946ba7 created: 2025-02-13T19:56:29.595825Z] Feb 13 19:57:05.673408 waagent[1943]: 2025-02-13T19:57:05.672134Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 19:57:05.673408 waagent[1943]: 2025-02-13T19:57:05.673052Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Feb 13 19:57:05.684687 waagent[1943]: 2025-02-13T19:57:05.684624Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 19:57:05.684687 waagent[1943]: Executing ['ip', '-a', '-o', 'link']: Feb 13 19:57:05.684687 waagent[1943]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 19:57:05.684687 waagent[1943]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:36:d8:3d brd ff:ff:ff:ff:ff:ff Feb 13 19:57:05.684687 waagent[1943]: 3: enP36422s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:36:d8:3d brd ff:ff:ff:ff:ff:ff\ altname enP36422p0s2 Feb 13 19:57:05.684687 waagent[1943]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 19:57:05.684687 
waagent[1943]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 19:57:05.684687 waagent[1943]: 2: eth0 inet 10.200.8.15/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 19:57:05.684687 waagent[1943]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 19:57:05.684687 waagent[1943]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 19:57:05.684687 waagent[1943]: 2: eth0 inet6 fe80::7e1e:52ff:fe36:d83d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 19:57:05.684687 waagent[1943]: 3: enP36422s1 inet6 fe80::7e1e:52ff:fe36:d83d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 19:57:05.712532 waagent[1943]: 2025-02-13T19:57:05.712456Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A7BCE64B-8C7C-4255-8350-C379FFF73264;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 19:57:05.733631 waagent[1943]: 2025-02-13T19:57:05.733549Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 13 19:57:05.733631 waagent[1943]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:57:05.733631 waagent[1943]: pkts bytes target prot opt in out source destination Feb 13 19:57:05.733631 waagent[1943]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:57:05.733631 waagent[1943]: pkts bytes target prot opt in out source destination Feb 13 19:57:05.733631 waagent[1943]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:57:05.733631 waagent[1943]: pkts bytes target prot opt in out source destination Feb 13 19:57:05.733631 waagent[1943]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 19:57:05.733631 waagent[1943]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 19:57:05.733631 waagent[1943]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 19:57:05.736957 waagent[1943]: 2025-02-13T19:57:05.736891Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 19:57:05.736957 waagent[1943]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:57:05.736957 waagent[1943]: pkts bytes target prot opt in out source destination Feb 13 19:57:05.736957 waagent[1943]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:57:05.736957 waagent[1943]: pkts bytes target prot opt in out source destination Feb 13 19:57:05.736957 waagent[1943]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:57:05.736957 waagent[1943]: pkts bytes target prot opt in out source destination Feb 13 19:57:05.736957 waagent[1943]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 19:57:05.736957 waagent[1943]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 19:57:05.736957 waagent[1943]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 19:57:05.737355 waagent[1943]: 2025-02-13T19:57:05.737221Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 19:57:13.934460 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. Feb 13 19:57:13.940671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:14.059083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:14.064141 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:57:14.742715 kubelet[2179]: E0213 19:57:14.742640 2179 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:57:14.746549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:57:14.746777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:57:24.737370 chronyd[1717]: Selected source PHC0 Feb 13 19:57:24.934583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:57:24.939710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:25.081696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:57:25.087353 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:57:25.732881 kubelet[2195]: E0213 19:57:25.732817 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:57:25.735459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:57:25.735688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:57:35.934708 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:57:35.947639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:36.079273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:36.084404 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:57:36.704224 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:57:36.713053 systemd[1]: Started sshd@0-10.200.8.15:22-10.200.16.10:48996.service - OpenSSH per-connection server daemon (10.200.16.10:48996). 
Feb 13 19:57:36.738445 kubelet[2210]: E0213 19:57:36.738395 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:57:36.740716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:57:36.740931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:57:37.382437 sshd[2216]: Accepted publickey for core from 10.200.16.10 port 48996 ssh2: RSA SHA256:/oblFm5MLGIKZJUdm0ayZy4uVmQAy23j9rGHfSmyZCQ Feb 13 19:57:37.384078 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:37.388752 systemd-logind[1719]: New session 3 of user core. Feb 13 19:57:37.396550 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:57:37.935791 systemd[1]: Started sshd@1-10.200.8.15:22-10.200.16.10:49004.service - OpenSSH per-connection server daemon (10.200.16.10:49004). Feb 13 19:57:38.570710 sshd[2223]: Accepted publickey for core from 10.200.16.10 port 49004 ssh2: RSA SHA256:/oblFm5MLGIKZJUdm0ayZy4uVmQAy23j9rGHfSmyZCQ Feb 13 19:57:38.572361 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:38.577092 systemd-logind[1719]: New session 4 of user core. Feb 13 19:57:38.587574 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:57:39.016426 sshd[2225]: Connection closed by 10.200.16.10 port 49004 Feb 13 19:57:39.017515 sshd-session[2223]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:39.022260 systemd[1]: sshd@1-10.200.8.15:22-10.200.16.10:49004.service: Deactivated successfully. Feb 13 19:57:39.024656 systemd[1]: session-4.scope: Deactivated successfully. 
Feb 13 19:57:39.025657 systemd-logind[1719]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:57:39.026737 systemd-logind[1719]: Removed session 4. Feb 13 19:57:39.131711 systemd[1]: Started sshd@2-10.200.8.15:22-10.200.16.10:38890.service - OpenSSH per-connection server daemon (10.200.16.10:38890). Feb 13 19:57:39.762526 sshd[2230]: Accepted publickey for core from 10.200.16.10 port 38890 ssh2: RSA SHA256:/oblFm5MLGIKZJUdm0ayZy4uVmQAy23j9rGHfSmyZCQ Feb 13 19:57:39.764447 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:39.769865 systemd-logind[1719]: New session 5 of user core. Feb 13 19:57:39.777553 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:57:40.203358 sshd[2232]: Connection closed by 10.200.16.10 port 38890 Feb 13 19:57:40.204538 sshd-session[2230]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:40.209306 systemd[1]: sshd@2-10.200.8.15:22-10.200.16.10:38890.service: Deactivated successfully. Feb 13 19:57:40.211678 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:57:40.212457 systemd-logind[1719]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:57:40.213451 systemd-logind[1719]: Removed session 5. Feb 13 19:57:40.321800 systemd[1]: Started sshd@3-10.200.8.15:22-10.200.16.10:38902.service - OpenSSH per-connection server daemon (10.200.16.10:38902). Feb 13 19:57:40.960662 sshd[2237]: Accepted publickey for core from 10.200.16.10 port 38902 ssh2: RSA SHA256:/oblFm5MLGIKZJUdm0ayZy4uVmQAy23j9rGHfSmyZCQ Feb 13 19:57:40.962516 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:40.967828 systemd-logind[1719]: New session 6 of user core. Feb 13 19:57:40.974573 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:57:41.407434 sshd[2239]: Connection closed by 10.200.16.10 port 38902 Feb 13 19:57:41.408772 sshd-session[2237]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:41.413589 systemd[1]: sshd@3-10.200.8.15:22-10.200.16.10:38902.service: Deactivated successfully. Feb 13 19:57:41.416095 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:57:41.417120 systemd-logind[1719]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:57:41.418170 systemd-logind[1719]: Removed session 6. Feb 13 19:57:41.527017 systemd[1]: Started sshd@4-10.200.8.15:22-10.200.16.10:38910.service - OpenSSH per-connection server daemon (10.200.16.10:38910). Feb 13 19:57:42.154403 sshd[2244]: Accepted publickey for core from 10.200.16.10 port 38910 ssh2: RSA SHA256:/oblFm5MLGIKZJUdm0ayZy4uVmQAy23j9rGHfSmyZCQ Feb 13 19:57:42.156072 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:42.160686 systemd-logind[1719]: New session 7 of user core. Feb 13 19:57:42.169554 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:57:42.524930 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:57:42.525311 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:57:42.543495 sudo[2247]: pam_unix(sudo:session): session closed for user root Feb 13 19:57:42.645130 sshd[2246]: Connection closed by 10.200.16.10 port 38910 Feb 13 19:57:42.646622 sshd-session[2244]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:42.651166 systemd[1]: sshd@4-10.200.8.15:22-10.200.16.10:38910.service: Deactivated successfully. Feb 13 19:57:42.653349 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:57:42.654284 systemd-logind[1719]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:57:42.655379 systemd-logind[1719]: Removed session 7. 
Feb 13 19:57:42.760047 systemd[1]: Started sshd@5-10.200.8.15:22-10.200.16.10:38924.service - OpenSSH per-connection server daemon (10.200.16.10:38924). Feb 13 19:57:43.386903 sshd[2252]: Accepted publickey for core from 10.200.16.10 port 38924 ssh2: RSA SHA256:/oblFm5MLGIKZJUdm0ayZy4uVmQAy23j9rGHfSmyZCQ Feb 13 19:57:43.389222 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:43.394411 systemd-logind[1719]: New session 8 of user core. Feb 13 19:57:43.402548 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:57:43.733898 sudo[2256]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:57:43.734278 sudo[2256]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:57:43.738110 sudo[2256]: pam_unix(sudo:session): session closed for user root Feb 13 19:57:43.743521 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:57:43.743867 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:57:43.764830 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:57:43.792580 augenrules[2278]: No rules Feb 13 19:57:43.794109 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:57:43.794378 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:57:43.796052 sudo[2255]: pam_unix(sudo:session): session closed for user root Feb 13 19:57:43.898095 sshd[2254]: Connection closed by 10.200.16.10 port 38924 Feb 13 19:57:43.899057 sshd-session[2252]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:43.904149 systemd[1]: sshd@5-10.200.8.15:22-10.200.16.10:38924.service: Deactivated successfully. Feb 13 19:57:43.906162 systemd[1]: session-8.scope: Deactivated successfully. 
Feb 13 19:57:43.906961 systemd-logind[1719]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:57:43.908062 systemd-logind[1719]: Removed session 8. Feb 13 19:57:44.016992 systemd[1]: Started sshd@6-10.200.8.15:22-10.200.16.10:38932.service - OpenSSH per-connection server daemon (10.200.16.10:38932). Feb 13 19:57:44.643121 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 38932 ssh2: RSA SHA256:/oblFm5MLGIKZJUdm0ayZy4uVmQAy23j9rGHfSmyZCQ Feb 13 19:57:44.644997 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:57:44.649992 systemd-logind[1719]: New session 9 of user core. Feb 13 19:57:44.657545 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:57:44.987861 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:57:44.988245 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:57:45.484805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:45.491988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:45.533265 systemd[1]: Reloading requested from client PID 2322 ('systemctl') (unit session-9.scope)... Feb 13 19:57:45.533298 systemd[1]: Reloading... Feb 13 19:57:45.661426 zram_generator::config[2364]: No configuration found. Feb 13 19:57:45.803638 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:57:45.890945 systemd[1]: Reloading finished in 357 ms. Feb 13 19:57:45.949717 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:45.953719 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:57:45.953997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:57:45.959813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:46.242819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:46.251429 (kubelet)[2433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:57:46.295274 kubelet[2433]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:57:46.295274 kubelet[2433]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:57:46.295274 kubelet[2433]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:57:46.295848 kubelet[2433]: I0213 19:57:46.295337 2433 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:57:46.343534 update_engine[1720]: I20250213 19:57:46.343436 1720 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:57:46.909921 kubelet[2433]: I0213 19:57:46.909856 2433 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:57:46.909921 kubelet[2433]: I0213 19:57:46.909904 2433 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:57:46.910471 kubelet[2433]: I0213 19:57:46.910442 2433 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:57:46.951279 kubelet[2433]: I0213 19:57:46.951055 2433 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:57:46.984284 kubelet[2433]: E0213 19:57:46.983905 2433 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:57:46.984284 kubelet[2433]: I0213 19:57:46.983958 2433 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:57:46.993326 kubelet[2433]: I0213 19:57:46.992863 2433 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:57:46.996229 kubelet[2433]: I0213 19:57:46.995604 2433 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:57:46.996229 kubelet[2433]: I0213 19:57:46.995669 2433 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.8.15","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:57:46.996229 kubelet[2433]: I0213 19:57:46.995949 2433 topology_manager.go:138] "Creating topology manager with none 
policy" Feb 13 19:57:46.996229 kubelet[2433]: I0213 19:57:46.995964 2433 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:57:46.996609 kubelet[2433]: I0213 19:57:46.996157 2433 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:57:47.007771 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2459) Feb 13 19:57:47.007923 kubelet[2433]: I0213 19:57:47.004690 2433 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:57:47.007923 kubelet[2433]: I0213 19:57:47.005319 2433 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:57:47.007923 kubelet[2433]: I0213 19:57:47.005369 2433 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:57:47.007923 kubelet[2433]: I0213 19:57:47.005409 2433 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:57:47.007923 kubelet[2433]: E0213 19:57:47.005968 2433 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:57:47.007923 kubelet[2433]: E0213 19:57:47.006040 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:57:47.011270 kubelet[2433]: I0213 19:57:47.011247 2433 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:57:47.012775 kubelet[2433]: I0213 19:57:47.012266 2433 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:57:47.012775 kubelet[2433]: W0213 19:57:47.012491 2433 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:57:47.015990 kubelet[2433]: I0213 19:57:47.015968 2433 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:57:47.017374 kubelet[2433]: I0213 19:57:47.017355 2433 server.go:1287] "Started kubelet" Feb 13 19:57:47.023285 kubelet[2433]: I0213 19:57:47.023237 2433 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:57:47.025850 kubelet[2433]: I0213 19:57:47.025827 2433 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:57:47.029342 kubelet[2433]: I0213 19:57:47.029268 2433 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:57:47.029836 kubelet[2433]: I0213 19:57:47.029812 2433 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:57:47.032403 kubelet[2433]: I0213 19:57:47.024221 2433 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:57:47.032910 kubelet[2433]: I0213 19:57:47.032885 2433 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:57:47.043790 kubelet[2433]: E0213 19:57:47.043763 2433 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.8.15\" not found" Feb 13 19:57:47.045293 kubelet[2433]: I0213 19:57:47.045272 2433 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:57:47.046329 kubelet[2433]: I0213 19:57:47.045705 2433 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:57:47.046329 kubelet[2433]: I0213 19:57:47.045776 2433 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:57:47.048975 kubelet[2433]: E0213 19:57:47.048947 2433 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.15\" not found" node="10.200.8.15" Feb 13 19:57:47.050160 kubelet[2433]: I0213 19:57:47.050133 2433 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:57:47.059368 kubelet[2433]: E0213 19:57:47.059340 2433 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:57:47.063023 kubelet[2433]: I0213 19:57:47.063001 2433 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:57:47.063148 kubelet[2433]: I0213 19:57:47.063134 2433 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:57:47.109998 kubelet[2433]: I0213 19:57:47.109744 2433 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:57:47.141335 kubelet[2433]: I0213 19:57:47.139476 2433 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:57:47.141335 kubelet[2433]: I0213 19:57:47.139547 2433 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:57:47.141335 kubelet[2433]: I0213 19:57:47.139591 2433 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 19:57:47.141335 kubelet[2433]: I0213 19:57:47.139602 2433 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:57:47.141335 kubelet[2433]: E0213 19:57:47.139770 2433 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:57:47.147419 kubelet[2433]: I0213 19:57:47.147079 2433 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:57:47.147419 kubelet[2433]: I0213 19:57:47.147106 2433 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:57:47.147419 kubelet[2433]: I0213 19:57:47.147134 2433 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:57:47.159364 kubelet[2433]: E0213 19:57:47.159318 2433 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.200.8.15\" not found" Feb 13 19:57:47.165426 kubelet[2433]: I0213 19:57:47.164855 2433 policy_none.go:49] "None policy: Start" Feb 13 19:57:47.165426 kubelet[2433]: I0213 19:57:47.164892 2433 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:57:47.165426 kubelet[2433]: I0213 19:57:47.164911 2433 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:57:47.180589 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:57:47.194368 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:57:47.207685 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:57:47.217783 kubelet[2433]: I0213 19:57:47.216602 2433 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:57:47.217783 kubelet[2433]: I0213 19:57:47.216901 2433 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:57:47.217783 kubelet[2433]: I0213 19:57:47.216918 2433 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:57:47.219864 kubelet[2433]: I0213 19:57:47.219845 2433 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:57:47.224839 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2461) Feb 13 19:57:47.225311 kubelet[2433]: E0213 19:57:47.225288 2433 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:57:47.225534 kubelet[2433]: E0213 19:57:47.225499 2433 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.15\" not found" Feb 13 19:57:47.325482 kubelet[2433]: I0213 19:57:47.319328 2433 kubelet_node_status.go:76] "Attempting to register node" node="10.200.8.15" Feb 13 19:57:47.328425 kubelet[2433]: I0213 19:57:47.326676 2433 kubelet_node_status.go:79] "Successfully registered node" node="10.200.8.15" Feb 13 19:57:47.328425 kubelet[2433]: E0213 19:57:47.326780 2433 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.200.8.15\": node \"10.200.8.15\" not found" Feb 13 19:57:47.440833 kubelet[2433]: I0213 19:57:47.440768 2433 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:57:47.441371 containerd[1731]: time="2025-02-13T19:57:47.441330334Z" level=info msg="No cni config template is specified, wait for other system components to drop the 
config." Feb 13 19:57:47.441939 kubelet[2433]: I0213 19:57:47.441640 2433 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:57:47.781891 sudo[2289]: pam_unix(sudo:session): session closed for user root Feb 13 19:57:47.883868 sshd[2288]: Connection closed by 10.200.16.10 port 38932 Feb 13 19:57:47.884895 sshd-session[2286]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:47.890073 systemd[1]: sshd@6-10.200.8.15:22-10.200.16.10:38932.service: Deactivated successfully. Feb 13 19:57:47.892321 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:57:47.893322 systemd-logind[1719]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:57:47.894488 systemd-logind[1719]: Removed session 9. Feb 13 19:57:47.912452 kubelet[2433]: I0213 19:57:47.912400 2433 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:57:47.912893 kubelet[2433]: W0213 19:57:47.912683 2433 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:57:47.912893 kubelet[2433]: W0213 19:57:47.912732 2433 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:57:47.912893 kubelet[2433]: W0213 19:57:47.912765 2433 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:57:48.007065 kubelet[2433]: I0213 19:57:48.006993 2433 apiserver.go:52] "Watching apiserver" Feb 13 19:57:48.007352 
kubelet[2433]: E0213 19:57:48.006987 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:57:48.011883 kubelet[2433]: E0213 19:57:48.011546 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdxqp" podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5" Feb 13 19:57:48.018533 systemd[1]: Created slice kubepods-besteffort-pod25718d5a_b53c_4bbb_836d_c6b067464910.slice - libcontainer container kubepods-besteffort-pod25718d5a_b53c_4bbb_836d_c6b067464910.slice. Feb 13 19:57:48.029584 systemd[1]: Created slice kubepods-besteffort-podcf8c82a9_44b0_425a_a89e_0a108fe1f532.slice - libcontainer container kubepods-besteffort-podcf8c82a9_44b0_425a_a89e_0a108fe1f532.slice. Feb 13 19:57:48.046470 kubelet[2433]: I0213 19:57:48.046359 2433 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:57:48.069684 kubelet[2433]: I0213 19:57:48.069636 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/25718d5a-b53c-4bbb-836d-c6b067464910-node-certs\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.069883 kubelet[2433]: I0213 19:57:48.069727 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/25718d5a-b53c-4bbb-836d-c6b067464910-cni-log-dir\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.069883 kubelet[2433]: I0213 19:57:48.069757 2433 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/25718d5a-b53c-4bbb-836d-c6b067464910-flexvol-driver-host\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.069883 kubelet[2433]: I0213 19:57:48.069785 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptvqw\" (UniqueName: \"kubernetes.io/projected/86c89c03-bbc5-4c29-8bcf-5f822f6653f5-kube-api-access-ptvqw\") pod \"csi-node-driver-rdxqp\" (UID: \"86c89c03-bbc5-4c29-8bcf-5f822f6653f5\") " pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:57:48.069883 kubelet[2433]: I0213 19:57:48.069806 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf8c82a9-44b0-425a-a89e-0a108fe1f532-lib-modules\") pod \"kube-proxy-ksp5s\" (UID: \"cf8c82a9-44b0-425a-a89e-0a108fe1f532\") " pod="kube-system/kube-proxy-ksp5s" Feb 13 19:57:48.069883 kubelet[2433]: I0213 19:57:48.069829 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86c89c03-bbc5-4c29-8bcf-5f822f6653f5-kubelet-dir\") pod \"csi-node-driver-rdxqp\" (UID: \"86c89c03-bbc5-4c29-8bcf-5f822f6653f5\") " pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:57:48.070078 kubelet[2433]: I0213 19:57:48.069851 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/86c89c03-bbc5-4c29-8bcf-5f822f6653f5-registration-dir\") pod \"csi-node-driver-rdxqp\" (UID: \"86c89c03-bbc5-4c29-8bcf-5f822f6653f5\") " pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:57:48.070078 kubelet[2433]: I0213 19:57:48.069872 2433 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/25718d5a-b53c-4bbb-836d-c6b067464910-policysync\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.070078 kubelet[2433]: I0213 19:57:48.069896 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/25718d5a-b53c-4bbb-836d-c6b067464910-var-lib-calico\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.070078 kubelet[2433]: I0213 19:57:48.069921 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/25718d5a-b53c-4bbb-836d-c6b067464910-cni-bin-dir\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.070078 kubelet[2433]: I0213 19:57:48.069947 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/25718d5a-b53c-4bbb-836d-c6b067464910-cni-net-dir\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.070256 kubelet[2433]: I0213 19:57:48.069973 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g7nj\" (UniqueName: \"kubernetes.io/projected/25718d5a-b53c-4bbb-836d-c6b067464910-kube-api-access-6g7nj\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.070256 kubelet[2433]: I0213 19:57:48.069999 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"varrun\" (UniqueName: \"kubernetes.io/host-path/86c89c03-bbc5-4c29-8bcf-5f822f6653f5-varrun\") pod \"csi-node-driver-rdxqp\" (UID: \"86c89c03-bbc5-4c29-8bcf-5f822f6653f5\") " pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:57:48.070256 kubelet[2433]: I0213 19:57:48.070023 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25718d5a-b53c-4bbb-836d-c6b067464910-lib-modules\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.070256 kubelet[2433]: I0213 19:57:48.070049 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25718d5a-b53c-4bbb-836d-c6b067464910-xtables-lock\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.070256 kubelet[2433]: I0213 19:57:48.070076 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf8c82a9-44b0-425a-a89e-0a108fe1f532-kube-proxy\") pod \"kube-proxy-ksp5s\" (UID: \"cf8c82a9-44b0-425a-a89e-0a108fe1f532\") " pod="kube-system/kube-proxy-ksp5s" Feb 13 19:57:48.070462 kubelet[2433]: I0213 19:57:48.070103 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj6pv\" (UniqueName: \"kubernetes.io/projected/cf8c82a9-44b0-425a-a89e-0a108fe1f532-kube-api-access-tj6pv\") pod \"kube-proxy-ksp5s\" (UID: \"cf8c82a9-44b0-425a-a89e-0a108fe1f532\") " pod="kube-system/kube-proxy-ksp5s" Feb 13 19:57:48.070462 kubelet[2433]: I0213 19:57:48.070128 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/25718d5a-b53c-4bbb-836d-c6b067464910-tigera-ca-bundle\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.070462 kubelet[2433]: I0213 19:57:48.070157 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/25718d5a-b53c-4bbb-836d-c6b067464910-var-run-calico\") pod \"calico-node-4s8tt\" (UID: \"25718d5a-b53c-4bbb-836d-c6b067464910\") " pod="calico-system/calico-node-4s8tt" Feb 13 19:57:48.070462 kubelet[2433]: I0213 19:57:48.070182 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/86c89c03-bbc5-4c29-8bcf-5f822f6653f5-socket-dir\") pod \"csi-node-driver-rdxqp\" (UID: \"86c89c03-bbc5-4c29-8bcf-5f822f6653f5\") " pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:57:48.070462 kubelet[2433]: I0213 19:57:48.070206 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf8c82a9-44b0-425a-a89e-0a108fe1f532-xtables-lock\") pod \"kube-proxy-ksp5s\" (UID: \"cf8c82a9-44b0-425a-a89e-0a108fe1f532\") " pod="kube-system/kube-proxy-ksp5s" Feb 13 19:57:48.174320 kubelet[2433]: E0213 19:57:48.174257 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.174320 kubelet[2433]: W0213 19:57:48.174306 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.174614 kubelet[2433]: E0213 19:57:48.174336 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:57:48.174614 kubelet[2433]: E0213 19:57:48.174577 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.174614 kubelet[2433]: W0213 19:57:48.174590 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.174614 kubelet[2433]: E0213 19:57:48.174607 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:57:48.175145 kubelet[2433]: E0213 19:57:48.174790 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.175145 kubelet[2433]: W0213 19:57:48.174803 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.175145 kubelet[2433]: E0213 19:57:48.174819 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:57:48.175332 kubelet[2433]: E0213 19:57:48.175275 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.175332 kubelet[2433]: W0213 19:57:48.175288 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.175332 kubelet[2433]: E0213 19:57:48.175304 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:57:48.176809 kubelet[2433]: E0213 19:57:48.176306 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.176809 kubelet[2433]: W0213 19:57:48.176324 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.176809 kubelet[2433]: E0213 19:57:48.176340 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:57:48.176809 kubelet[2433]: E0213 19:57:48.176592 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.176809 kubelet[2433]: W0213 19:57:48.176607 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.176809 kubelet[2433]: E0213 19:57:48.176622 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:57:48.177104 kubelet[2433]: E0213 19:57:48.176821 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.177104 kubelet[2433]: W0213 19:57:48.176832 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.177104 kubelet[2433]: E0213 19:57:48.176845 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:57:48.177226 kubelet[2433]: E0213 19:57:48.177109 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.177226 kubelet[2433]: W0213 19:57:48.177121 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.177226 kubelet[2433]: E0213 19:57:48.177134 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:57:48.190068 kubelet[2433]: E0213 19:57:48.187208 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.190068 kubelet[2433]: W0213 19:57:48.187237 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.190068 kubelet[2433]: E0213 19:57:48.187260 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:57:48.190068 kubelet[2433]: E0213 19:57:48.188892 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.190068 kubelet[2433]: W0213 19:57:48.188911 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.190068 kubelet[2433]: E0213 19:57:48.188930 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:57:48.192763 kubelet[2433]: E0213 19:57:48.191562 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.192763 kubelet[2433]: W0213 19:57:48.191578 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.192763 kubelet[2433]: E0213 19:57:48.191596 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:57:48.192763 kubelet[2433]: E0213 19:57:48.192580 2433 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:57:48.192763 kubelet[2433]: W0213 19:57:48.192594 2433 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:57:48.192763 kubelet[2433]: E0213 19:57:48.192610 2433 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:57:48.283029 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 13 19:57:48.327877 containerd[1731]: time="2025-02-13T19:57:48.327705174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4s8tt,Uid:25718d5a-b53c-4bbb-836d-c6b067464910,Namespace:calico-system,Attempt:0,}" Feb 13 19:57:48.334316 containerd[1731]: time="2025-02-13T19:57:48.334269932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ksp5s,Uid:cf8c82a9-44b0-425a-a89e-0a108fe1f532,Namespace:kube-system,Attempt:0,}" Feb 13 19:57:48.909830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3091236009.mount: Deactivated successfully. 
Feb 13 19:57:48.928880 containerd[1731]: time="2025-02-13T19:57:48.928815881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:57:48.933085 containerd[1731]: time="2025-02-13T19:57:48.932995722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 19:57:48.935259 containerd[1731]: time="2025-02-13T19:57:48.935216444Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:57:48.937374 containerd[1731]: time="2025-02-13T19:57:48.937336964Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:57:48.939235 containerd[1731]: time="2025-02-13T19:57:48.939181582Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:57:48.943451 containerd[1731]: time="2025-02-13T19:57:48.943400723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:57:48.944972 containerd[1731]: time="2025-02-13T19:57:48.944244032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.409156ms" Feb 13 19:57:48.948938 containerd[1731]: 
time="2025-02-13T19:57:48.948903677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.525744ms" Feb 13 19:57:49.008337 kubelet[2433]: E0213 19:57:49.008265 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:57:49.223218 containerd[1731]: time="2025-02-13T19:57:49.223102653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:57:49.224102 containerd[1731]: time="2025-02-13T19:57:49.223236754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:57:49.224102 containerd[1731]: time="2025-02-13T19:57:49.223267755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:49.224102 containerd[1731]: time="2025-02-13T19:57:49.223400156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:49.231583 containerd[1731]: time="2025-02-13T19:57:49.228049901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:57:49.231583 containerd[1731]: time="2025-02-13T19:57:49.228106802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:57:49.231583 containerd[1731]: time="2025-02-13T19:57:49.228122102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:49.231583 containerd[1731]: time="2025-02-13T19:57:49.228198403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:49.371576 systemd[1]: Started cri-containerd-60f34b7890b352702442ad8b87d1d22de2faa2fd3dd1f28c55f192cfe427be01.scope - libcontainer container 60f34b7890b352702442ad8b87d1d22de2faa2fd3dd1f28c55f192cfe427be01. Feb 13 19:57:49.373192 systemd[1]: Started cri-containerd-8f55c4cad87668c87980cde68410f692dd362b52fca74b8624e072ccd4e589f7.scope - libcontainer container 8f55c4cad87668c87980cde68410f692dd362b52fca74b8624e072ccd4e589f7. Feb 13 19:57:49.414493 containerd[1731]: time="2025-02-13T19:57:49.414375120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4s8tt,Uid:25718d5a-b53c-4bbb-836d-c6b067464910,Namespace:calico-system,Attempt:0,} returns sandbox id \"60f34b7890b352702442ad8b87d1d22de2faa2fd3dd1f28c55f192cfe427be01\"" Feb 13 19:57:49.418781 containerd[1731]: time="2025-02-13T19:57:49.418054256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:57:49.420912 containerd[1731]: time="2025-02-13T19:57:49.420852983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ksp5s,Uid:cf8c82a9-44b0-425a-a89e-0a108fe1f532,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f55c4cad87668c87980cde68410f692dd362b52fca74b8624e072ccd4e589f7\"" Feb 13 19:57:50.009455 kubelet[2433]: E0213 19:57:50.009378 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:57:50.140752 kubelet[2433]: E0213 19:57:50.140684 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-rdxqp" podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5" Feb 13 19:57:50.733277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356058248.mount: Deactivated successfully. Feb 13 19:57:50.888193 containerd[1731]: time="2025-02-13T19:57:50.888122303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:50.890322 containerd[1731]: time="2025-02-13T19:57:50.890229223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:57:50.894579 containerd[1731]: time="2025-02-13T19:57:50.894513465Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:50.902163 containerd[1731]: time="2025-02-13T19:57:50.901773836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:50.903516 containerd[1731]: time="2025-02-13T19:57:50.902842446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.48471779s" Feb 13 19:57:50.903516 containerd[1731]: time="2025-02-13T19:57:50.902890847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:57:50.905215 containerd[1731]: 
time="2025-02-13T19:57:50.904687364Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:57:50.906223 containerd[1731]: time="2025-02-13T19:57:50.906192179Z" level=info msg="CreateContainer within sandbox \"60f34b7890b352702442ad8b87d1d22de2faa2fd3dd1f28c55f192cfe427be01\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:57:50.942520 containerd[1731]: time="2025-02-13T19:57:50.942469633Z" level=info msg="CreateContainer within sandbox \"60f34b7890b352702442ad8b87d1d22de2faa2fd3dd1f28c55f192cfe427be01\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"efdf394c896b27d00d86576beaeffea452102e0a5c2110b31f74c7562e29c1e0\"" Feb 13 19:57:50.943330 containerd[1731]: time="2025-02-13T19:57:50.943285141Z" level=info msg="StartContainer for \"efdf394c896b27d00d86576beaeffea452102e0a5c2110b31f74c7562e29c1e0\"" Feb 13 19:57:50.977624 systemd[1]: Started cri-containerd-efdf394c896b27d00d86576beaeffea452102e0a5c2110b31f74c7562e29c1e0.scope - libcontainer container efdf394c896b27d00d86576beaeffea452102e0a5c2110b31f74c7562e29c1e0. Feb 13 19:57:51.010720 kubelet[2433]: E0213 19:57:51.010553 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:57:51.018539 containerd[1731]: time="2025-02-13T19:57:51.018226572Z" level=info msg="StartContainer for \"efdf394c896b27d00d86576beaeffea452102e0a5c2110b31f74c7562e29c1e0\" returns successfully" Feb 13 19:57:51.024724 systemd[1]: cri-containerd-efdf394c896b27d00d86576beaeffea452102e0a5c2110b31f74c7562e29c1e0.scope: Deactivated successfully. 
Feb 13 19:57:51.224607 containerd[1731]: time="2025-02-13T19:57:51.224490085Z" level=info msg="shim disconnected" id=efdf394c896b27d00d86576beaeffea452102e0a5c2110b31f74c7562e29c1e0 namespace=k8s.io Feb 13 19:57:51.224607 containerd[1731]: time="2025-02-13T19:57:51.224575586Z" level=warning msg="cleaning up after shim disconnected" id=efdf394c896b27d00d86576beaeffea452102e0a5c2110b31f74c7562e29c1e0 namespace=k8s.io Feb 13 19:57:51.224607 containerd[1731]: time="2025-02-13T19:57:51.224590686Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:57:51.682567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efdf394c896b27d00d86576beaeffea452102e0a5c2110b31f74c7562e29c1e0-rootfs.mount: Deactivated successfully. Feb 13 19:57:52.011536 kubelet[2433]: E0213 19:57:52.011420 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:57:52.116894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579013359.mount: Deactivated successfully. 
Feb 13 19:57:52.140971 kubelet[2433]: E0213 19:57:52.140883 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdxqp" podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5" Feb 13 19:57:52.654507 containerd[1731]: time="2025-02-13T19:57:52.654438041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:52.657244 containerd[1731]: time="2025-02-13T19:57:52.657172367Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908847" Feb 13 19:57:52.660750 containerd[1731]: time="2025-02-13T19:57:52.660689302Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:52.664523 containerd[1731]: time="2025-02-13T19:57:52.664460739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:52.665558 containerd[1731]: time="2025-02-13T19:57:52.665083745Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 1.76035808s" Feb 13 19:57:52.665558 containerd[1731]: time="2025-02-13T19:57:52.665128445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference 
\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\""
Feb 13 19:57:52.666446 containerd[1731]: time="2025-02-13T19:57:52.666303857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 19:57:52.667881 containerd[1731]: time="2025-02-13T19:57:52.667852372Z" level=info msg="CreateContainer within sandbox \"8f55c4cad87668c87980cde68410f692dd362b52fca74b8624e072ccd4e589f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:57:52.698997 containerd[1731]: time="2025-02-13T19:57:52.698940575Z" level=info msg="CreateContainer within sandbox \"8f55c4cad87668c87980cde68410f692dd362b52fca74b8624e072ccd4e589f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"58b339956c882acaaf2543e12ab1a1b11aa4238b557e8c5901cf285632f9fc0f\""
Feb 13 19:57:52.699602 containerd[1731]: time="2025-02-13T19:57:52.699555181Z" level=info msg="StartContainer for \"58b339956c882acaaf2543e12ab1a1b11aa4238b557e8c5901cf285632f9fc0f\""
Feb 13 19:57:52.732919 systemd[1]: Started cri-containerd-58b339956c882acaaf2543e12ab1a1b11aa4238b557e8c5901cf285632f9fc0f.scope - libcontainer container 58b339956c882acaaf2543e12ab1a1b11aa4238b557e8c5901cf285632f9fc0f.
Feb 13 19:57:52.767101 containerd[1731]: time="2025-02-13T19:57:52.766918839Z" level=info msg="StartContainer for \"58b339956c882acaaf2543e12ab1a1b11aa4238b557e8c5901cf285632f9fc0f\" returns successfully"
Feb 13 19:57:53.011728 kubelet[2433]: E0213 19:57:53.011655 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:57:53.204298 kubelet[2433]: I0213 19:57:53.204198 2433 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ksp5s" podStartSLOduration=2.960757652 podStartE2EDuration="6.204174306s" podCreationTimestamp="2025-02-13 19:57:47 +0000 UTC" firstStartedPulling="2025-02-13 19:57:49.422712601 +0000 UTC m=+3.165482009" lastFinishedPulling="2025-02-13 19:57:52.666129155 +0000 UTC m=+6.408898663" observedRunningTime="2025-02-13 19:57:53.203989104 +0000 UTC m=+6.946758512" watchObservedRunningTime="2025-02-13 19:57:53.204174306 +0000 UTC m=+6.946943814"
Feb 13 19:57:54.012486 kubelet[2433]: E0213 19:57:54.012360 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:57:54.140785 kubelet[2433]: E0213 19:57:54.139944 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdxqp" podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5"
Feb 13 19:57:55.013339 kubelet[2433]: E0213 19:57:55.013247 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:57:56.013980 kubelet[2433]: E0213 19:57:56.013852 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:57:56.142300 kubelet[2433]: E0213 19:57:56.141753 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdxqp" podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5"
Feb 13 19:57:56.544608 containerd[1731]: time="2025-02-13T19:57:56.544546806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:57:56.546468 containerd[1731]: time="2025-02-13T19:57:56.546404724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 19:57:56.549451 containerd[1731]: time="2025-02-13T19:57:56.549372453Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:57:56.554239 containerd[1731]: time="2025-02-13T19:57:56.554175900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:57:56.555437 containerd[1731]: time="2025-02-13T19:57:56.554901007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.88855935s"
Feb 13 19:57:56.555437 containerd[1731]: time="2025-02-13T19:57:56.554940907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 19:57:56.557320 containerd[1731]: time="2025-02-13T19:57:56.557286330Z" level=info msg="CreateContainer within sandbox \"60f34b7890b352702442ad8b87d1d22de2faa2fd3dd1f28c55f192cfe427be01\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:57:56.587921 containerd[1731]: time="2025-02-13T19:57:56.587875629Z" level=info msg="CreateContainer within sandbox \"60f34b7890b352702442ad8b87d1d22de2faa2fd3dd1f28c55f192cfe427be01\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a0aaccbca691a1a094aaa3db816506e45ffdecfaa7ba46f177286e76c273aefe\""
Feb 13 19:57:56.588514 containerd[1731]: time="2025-02-13T19:57:56.588352033Z" level=info msg="StartContainer for \"a0aaccbca691a1a094aaa3db816506e45ffdecfaa7ba46f177286e76c273aefe\""
Feb 13 19:57:56.621572 systemd[1]: Started cri-containerd-a0aaccbca691a1a094aaa3db816506e45ffdecfaa7ba46f177286e76c273aefe.scope - libcontainer container a0aaccbca691a1a094aaa3db816506e45ffdecfaa7ba46f177286e76c273aefe.
Feb 13 19:57:56.655544 containerd[1731]: time="2025-02-13T19:57:56.655450788Z" level=info msg="StartContainer for \"a0aaccbca691a1a094aaa3db816506e45ffdecfaa7ba46f177286e76c273aefe\" returns successfully"
Feb 13 19:57:57.014680 kubelet[2433]: E0213 19:57:57.014630 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:57:58.015552 kubelet[2433]: E0213 19:57:58.015479 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:57:58.087019 systemd[1]: cri-containerd-a0aaccbca691a1a094aaa3db816506e45ffdecfaa7ba46f177286e76c273aefe.scope: Deactivated successfully.
Feb 13 19:57:58.109530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0aaccbca691a1a094aaa3db816506e45ffdecfaa7ba46f177286e76c273aefe-rootfs.mount: Deactivated successfully.
Feb 13 19:57:58.126020 kubelet[2433]: I0213 19:57:58.124057 2433 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Feb 13 19:57:58.145949 systemd[1]: Created slice kubepods-besteffort-pod86c89c03_bbc5_4c29_8bcf_5f822f6653f5.slice - libcontainer container kubepods-besteffort-pod86c89c03_bbc5_4c29_8bcf_5f822f6653f5.slice.
Feb 13 19:57:58.253800 containerd[1731]: time="2025-02-13T19:57:58.253299545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:0,}"
Feb 13 19:57:59.016625 kubelet[2433]: E0213 19:57:59.016549 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:58:00.017747 kubelet[2433]: E0213 19:58:00.017652 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:58:00.530413 containerd[1731]: time="2025-02-13T19:58:00.528320293Z" level=info msg="shim disconnected" id=a0aaccbca691a1a094aaa3db816506e45ffdecfaa7ba46f177286e76c273aefe namespace=k8s.io
Feb 13 19:58:00.530413 containerd[1731]: time="2025-02-13T19:58:00.528524295Z" level=warning msg="cleaning up after shim disconnected" id=a0aaccbca691a1a094aaa3db816506e45ffdecfaa7ba46f177286e76c273aefe namespace=k8s.io
Feb 13 19:58:00.530413 containerd[1731]: time="2025-02-13T19:58:00.528556595Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:58:00.574796 containerd[1731]: time="2025-02-13T19:58:00.574731188Z" level=error msg="Failed to destroy network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:00.575661 containerd[1731]: time="2025-02-13T19:58:00.575365193Z" level=error msg="encountered an error cleaning up failed sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:00.575661 containerd[1731]: time="2025-02-13T19:58:00.575501495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:00.576693 kubelet[2433]: E0213 19:58:00.576637 2433 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:00.576822 kubelet[2433]: E0213 19:58:00.576750 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp"
Feb 13 19:58:00.576822 kubelet[2433]: E0213 19:58:00.576787 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp"
Feb 13 19:58:00.576911 kubelet[2433]: E0213 19:58:00.576860 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdxqp" podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5"
Feb 13 19:58:00.578075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688-shm.mount: Deactivated successfully.
Feb 13 19:58:01.018245 kubelet[2433]: E0213 19:58:01.018170 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:58:01.215671 kubelet[2433]: I0213 19:58:01.214924 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688"
Feb 13 19:58:01.215902 containerd[1731]: time="2025-02-13T19:58:01.215211035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 19:58:01.215902 containerd[1731]: time="2025-02-13T19:58:01.215850741Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\""
Feb 13 19:58:01.216218 containerd[1731]: time="2025-02-13T19:58:01.216187043Z" level=info msg="Ensure that sandbox 56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688 in task-service has been cleanup successfully"
Feb 13 19:58:01.220890 containerd[1731]: time="2025-02-13T19:58:01.216520046Z" level=info msg="TearDown network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" successfully"
Feb 13 19:58:01.220890 containerd[1731]: time="2025-02-13T19:58:01.216544147Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" returns successfully"
Feb 13 19:58:01.220890 containerd[1731]: time="2025-02-13T19:58:01.220481180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:1,}"
Feb 13 19:58:01.218862 systemd[1]: run-netns-cni\x2d9b00ee94\x2db6c7\x2d8ed2\x2d812d\x2d18037cede16f.mount: Deactivated successfully.
Feb 13 19:58:01.311691 containerd[1731]: time="2025-02-13T19:58:01.308676330Z" level=error msg="Failed to destroy network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:01.311691 containerd[1731]: time="2025-02-13T19:58:01.310682347Z" level=error msg="encountered an error cleaning up failed sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:01.311691 containerd[1731]: time="2025-02-13T19:58:01.310782248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:01.311956 kubelet[2433]: E0213 19:58:01.311132 2433 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:01.311956 kubelet[2433]: E0213 19:58:01.311223 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp"
Feb 13 19:58:01.311956 kubelet[2433]: E0213 19:58:01.311254 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp"
Feb 13 19:58:01.312096 kubelet[2433]: E0213 19:58:01.311336 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdxqp" podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5"
Feb 13 19:58:01.312167 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b-shm.mount: Deactivated successfully.
Feb 13 19:58:02.018863 kubelet[2433]: E0213 19:58:02.018774 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:58:02.218629 kubelet[2433]: I0213 19:58:02.218584 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b"
Feb 13 19:58:02.219930 containerd[1731]: time="2025-02-13T19:58:02.219291775Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\""
Feb 13 19:58:02.219930 containerd[1731]: time="2025-02-13T19:58:02.219649978Z" level=info msg="Ensure that sandbox 9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b in task-service has been cleanup successfully"
Feb 13 19:58:02.220947 containerd[1731]: time="2025-02-13T19:58:02.220583186Z" level=info msg="TearDown network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" successfully"
Feb 13 19:58:02.220947 containerd[1731]: time="2025-02-13T19:58:02.220672786Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" returns successfully"
Feb 13 19:58:02.221661 containerd[1731]: time="2025-02-13T19:58:02.221557494Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\""
Feb 13 19:58:02.223017 containerd[1731]: time="2025-02-13T19:58:02.221683295Z" level=info msg="TearDown network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" successfully"
Feb 13 19:58:02.223017 containerd[1731]: time="2025-02-13T19:58:02.221706895Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" returns successfully"
Feb 13 19:58:02.223712 containerd[1731]: time="2025-02-13T19:58:02.223671412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:2,}"
Feb 13 19:58:02.224260 systemd[1]: run-netns-cni\x2dcda17668\x2d23ee\x2d4c10\x2d6924\x2dab83f20dbb68.mount: Deactivated successfully.
Feb 13 19:58:02.316072 containerd[1731]: time="2025-02-13T19:58:02.314574685Z" level=error msg="Failed to destroy network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:02.316072 containerd[1731]: time="2025-02-13T19:58:02.314965888Z" level=error msg="encountered an error cleaning up failed sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:02.316072 containerd[1731]: time="2025-02-13T19:58:02.315053589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:02.316348 kubelet[2433]: E0213 19:58:02.315410 2433 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:02.316348 kubelet[2433]: E0213 19:58:02.315511 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp"
Feb 13 19:58:02.316348 kubelet[2433]: E0213 19:58:02.315549 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp"
Feb 13 19:58:02.316582 kubelet[2433]: E0213 19:58:02.315617 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdxqp" podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5"
Feb 13 19:58:02.318178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6-shm.mount: Deactivated successfully.
Feb 13 19:58:02.900588 systemd[1]: Created slice kubepods-besteffort-podcdc023d6_7b51_4b02_8664_cf9e0c3e7acb.slice - libcontainer container kubepods-besteffort-podcdc023d6_7b51_4b02_8664_cf9e0c3e7acb.slice.
Feb 13 19:58:02.969640 kubelet[2433]: I0213 19:58:02.969535 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsj6c\" (UniqueName: \"kubernetes.io/projected/cdc023d6-7b51-4b02-8664-cf9e0c3e7acb-kube-api-access-dsj6c\") pod \"nginx-deployment-7fcdb87857-pxg7l\" (UID: \"cdc023d6-7b51-4b02-8664-cf9e0c3e7acb\") " pod="default/nginx-deployment-7fcdb87857-pxg7l"
Feb 13 19:58:03.019575 kubelet[2433]: E0213 19:58:03.019480 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:58:03.204801 containerd[1731]: time="2025-02-13T19:58:03.204737056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pxg7l,Uid:cdc023d6-7b51-4b02-8664-cf9e0c3e7acb,Namespace:default,Attempt:0,}"
Feb 13 19:58:03.221873 kubelet[2433]: I0213 19:58:03.221837 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6"
Feb 13 19:58:03.222816 containerd[1731]: time="2025-02-13T19:58:03.222487607Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\""
Feb 13 19:58:03.223355 containerd[1731]: time="2025-02-13T19:58:03.222821010Z" level=info msg="Ensure that sandbox 277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6 in task-service has been cleanup successfully"
Feb 13 19:58:03.225268 containerd[1731]: time="2025-02-13T19:58:03.225229730Z" level=info msg="TearDown network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" successfully"
Feb 13 19:58:03.225268 containerd[1731]: time="2025-02-13T19:58:03.225264030Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" returns successfully"
Feb 13 19:58:03.226420 containerd[1731]: time="2025-02-13T19:58:03.225623733Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\""
Feb 13 19:58:03.226420 containerd[1731]: time="2025-02-13T19:58:03.225768435Z" level=info msg="TearDown network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" successfully"
Feb 13 19:58:03.226420 containerd[1731]: time="2025-02-13T19:58:03.225781335Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" returns successfully"
Feb 13 19:58:03.226420 containerd[1731]: time="2025-02-13T19:58:03.226152438Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\""
Feb 13 19:58:03.226420 containerd[1731]: time="2025-02-13T19:58:03.226250139Z" level=info msg="TearDown network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" successfully"
Feb 13 19:58:03.226420 containerd[1731]: time="2025-02-13T19:58:03.226265239Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" returns successfully"
Feb 13 19:58:03.227766 containerd[1731]: time="2025-02-13T19:58:03.226816643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:3,}"
Feb 13 19:58:03.226997 systemd[1]: run-netns-cni\x2d909207de\x2d6a82\x2d2c50\x2d160c\x2de8b199307c2e.mount: Deactivated successfully.
Feb 13 19:58:04.020750 kubelet[2433]: E0213 19:58:04.020668 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:58:04.418705 containerd[1731]: time="2025-02-13T19:58:04.418503379Z" level=error msg="Failed to destroy network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:04.422460 containerd[1731]: time="2025-02-13T19:58:04.421369503Z" level=error msg="encountered an error cleaning up failed sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:04.422460 containerd[1731]: time="2025-02-13T19:58:04.421490704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pxg7l,Uid:cdc023d6-7b51-4b02-8664-cf9e0c3e7acb,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:04.422715 kubelet[2433]: E0213 19:58:04.421907 2433 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:04.422715 kubelet[2433]: E0213 19:58:04.421998 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pxg7l"
Feb 13 19:58:04.422715 kubelet[2433]: E0213 19:58:04.422032 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pxg7l"
Feb 13 19:58:04.422889 kubelet[2433]: E0213 19:58:04.422100 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-pxg7l_default(cdc023d6-7b51-4b02-8664-cf9e0c3e7acb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-pxg7l_default(cdc023d6-7b51-4b02-8664-cf9e0c3e7acb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pxg7l" podUID="cdc023d6-7b51-4b02-8664-cf9e0c3e7acb"
Feb 13 19:58:04.426274 containerd[1731]: time="2025-02-13T19:58:04.426063843Z" level=error msg="Failed to destroy network for sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:04.426530 containerd[1731]: time="2025-02-13T19:58:04.426493747Z" level=error msg="encountered an error cleaning up failed sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:04.426623 containerd[1731]: time="2025-02-13T19:58:04.426587447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:04.426877 kubelet[2433]: E0213 19:58:04.426836 2433 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:58:04.427033 kubelet[2433]: E0213 19:58:04.426915 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp"
Feb 13 19:58:04.427033 kubelet[2433]: E0213 19:58:04.426951 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp"
Feb 13 19:58:04.427115 kubelet[2433]: E0213 19:58:04.427044 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdxqp" podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5"
Feb 13 19:58:05.020955 kubelet[2433]: E0213 19:58:05.020869 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:58:05.228024 kubelet[2433]: I0213 19:58:05.227238 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3"
Feb 13 19:58:05.228521 containerd[1731]: time="2025-02-13T19:58:05.228350739Z" level=info msg="StopPodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\""
Feb 13 19:58:05.228841 containerd[1731]: time="2025-02-13T19:58:05.228698441Z" level=info msg="Ensure that sandbox 55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3 in task-service has been cleanup successfully"
Feb 13 19:58:05.228931 containerd[1731]: time="2025-02-13T19:58:05.228902442Z" level=info msg="TearDown network for sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" successfully"
Feb 13 19:58:05.228931 containerd[1731]: time="2025-02-13T19:58:05.228922542Z" level=info msg="StopPodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" returns successfully"
Feb 13 19:58:05.229960 containerd[1731]: time="2025-02-13T19:58:05.229733047Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\""
Feb 13 19:58:05.229960 containerd[1731]: time="2025-02-13T19:58:05.229833947Z" level=info msg="TearDown network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" successfully"
Feb 13 19:58:05.229960 containerd[1731]: time="2025-02-13T19:58:05.229852747Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" returns successfully"
Feb 13 19:58:05.230459 containerd[1731]: time="2025-02-13T19:58:05.230352750Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\""
Feb 13 19:58:05.230742 containerd[1731]: time="2025-02-13T19:58:05.230694752Z" level=info msg="TearDown network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" successfully"
Feb 13 19:58:05.230742 containerd[1731]: time="2025-02-13T19:58:05.230716652Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" returns successfully"
Feb 13 19:58:05.231855 kubelet[2433]: I0213 19:58:05.231117 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec"
Feb 13 19:58:05.231959 containerd[1731]: time="2025-02-13T19:58:05.231663358Z" level=info msg="StopPodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\""
Feb 13 19:58:05.231959 containerd[1731]: time="2025-02-13T19:58:05.231699258Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\""
Feb 13 19:58:05.231959 containerd[1731]: time="2025-02-13T19:58:05.231783458Z" level=info msg="TearDown network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" successfully"
Feb 13 19:58:05.231959 containerd[1731]: time="2025-02-13T19:58:05.231795758Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" returns successfully"
Feb 13 19:58:05.231959 containerd[1731]: time="2025-02-13T19:58:05.231924859Z" level=info msg="Ensure that sandbox 2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec in task-service has been cleanup successfully"
Feb 13 19:58:05.232555 containerd[1731]: time="2025-02-13T19:58:05.232526762Z" level=info msg="TearDown network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" successfully"
Feb 13 19:58:05.232628 containerd[1731]: time="2025-02-13T19:58:05.232563963Z" level=info msg="StopPodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" returns successfully"
Feb 13 19:58:05.233068 containerd[1731]: time="2025-02-13T19:58:05.233038265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pxg7l,Uid:cdc023d6-7b51-4b02-8664-cf9e0c3e7acb,Namespace:default,Attempt:1,}"
Feb 13 19:58:05.236443 containerd[1731]: time="2025-02-13T19:58:05.236413984Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:4,}" Feb 13 19:58:05.240603 systemd[1]: run-netns-cni\x2dea9cab60\x2d160b\x2dc284\x2d9397\x2d678c79f975f3.mount: Deactivated successfully. Feb 13 19:58:05.240756 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3-shm.mount: Deactivated successfully. Feb 13 19:58:05.240849 systemd[1]: run-netns-cni\x2d7deb7a2e\x2d8f86\x2d6300\x2d8729\x2d9f6d6ee81b33.mount: Deactivated successfully. Feb 13 19:58:05.240925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec-shm.mount: Deactivated successfully. Feb 13 19:58:05.464040 containerd[1731]: time="2025-02-13T19:58:05.463481063Z" level=error msg="Failed to destroy network for sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:05.464633 containerd[1731]: time="2025-02-13T19:58:05.464520469Z" level=error msg="encountered an error cleaning up failed sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:05.464633 containerd[1731]: time="2025-02-13T19:58:05.464618569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pxg7l,Uid:cdc023d6-7b51-4b02-8664-cf9e0c3e7acb,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:05.466582 kubelet[2433]: E0213 19:58:05.465006 2433 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:05.466582 kubelet[2433]: E0213 19:58:05.465118 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pxg7l" Feb 13 19:58:05.466582 kubelet[2433]: E0213 19:58:05.465156 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pxg7l" Feb 13 19:58:05.466817 containerd[1731]: time="2025-02-13T19:58:05.465789176Z" level=error msg="Failed to destroy network for sandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:58:05.466817 containerd[1731]: time="2025-02-13T19:58:05.466119778Z" level=error msg="encountered an error cleaning up failed sandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:05.466817 containerd[1731]: time="2025-02-13T19:58:05.466201278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:05.466952 kubelet[2433]: E0213 19:58:05.465245 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-pxg7l_default(cdc023d6-7b51-4b02-8664-cf9e0c3e7acb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-pxg7l_default(cdc023d6-7b51-4b02-8664-cf9e0c3e7acb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pxg7l" podUID="cdc023d6-7b51-4b02-8664-cf9e0c3e7acb" Feb 13 19:58:05.466952 kubelet[2433]: E0213 19:58:05.466461 2433 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:05.466952 kubelet[2433]: E0213 19:58:05.466584 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:58:05.467135 kubelet[2433]: E0213 19:58:05.466612 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:58:05.467135 kubelet[2433]: E0213 19:58:05.466673 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdxqp" 
podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5" Feb 13 19:58:05.468341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347-shm.mount: Deactivated successfully. Feb 13 19:58:06.021674 kubelet[2433]: E0213 19:58:06.021551 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:06.240094 kubelet[2433]: I0213 19:58:06.237188 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776" Feb 13 19:58:06.238487 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776-shm.mount: Deactivated successfully. Feb 13 19:58:06.243002 containerd[1731]: time="2025-02-13T19:58:06.242956252Z" level=info msg="StopPodSandbox for \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\"" Feb 13 19:58:06.243697 kubelet[2433]: I0213 19:58:06.243265 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347" Feb 13 19:58:06.244102 containerd[1731]: time="2025-02-13T19:58:06.244071958Z" level=info msg="StopPodSandbox for \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\"" Feb 13 19:58:06.244381 containerd[1731]: time="2025-02-13T19:58:06.244292659Z" level=info msg="Ensure that sandbox ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347 in task-service has been cleanup successfully" Feb 13 19:58:06.246434 containerd[1731]: time="2025-02-13T19:58:06.244089358Z" level=info msg="Ensure that sandbox 6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776 in task-service has been cleanup successfully" Feb 13 19:58:06.247104 systemd[1]: run-netns-cni\x2d934a4c03\x2dad57\x2dd30a\x2d5e59\x2d5cc49f5f44e9.mount: Deactivated successfully. 
Feb 13 19:58:06.250581 containerd[1731]: time="2025-02-13T19:58:06.249938591Z" level=info msg="TearDown network for sandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\" successfully" Feb 13 19:58:06.250581 containerd[1731]: time="2025-02-13T19:58:06.249972491Z" level=info msg="StopPodSandbox for \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\" returns successfully" Feb 13 19:58:06.250581 containerd[1731]: time="2025-02-13T19:58:06.245472666Z" level=info msg="TearDown network for sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\" successfully" Feb 13 19:58:06.250581 containerd[1731]: time="2025-02-13T19:58:06.250047692Z" level=info msg="StopPodSandbox for \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\" returns successfully" Feb 13 19:58:06.251614 containerd[1731]: time="2025-02-13T19:58:06.251119498Z" level=info msg="StopPodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\"" Feb 13 19:58:06.251614 containerd[1731]: time="2025-02-13T19:58:06.251216598Z" level=info msg="TearDown network for sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" successfully" Feb 13 19:58:06.251614 containerd[1731]: time="2025-02-13T19:58:06.251233798Z" level=info msg="StopPodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" returns successfully" Feb 13 19:58:06.251625 systemd[1]: run-netns-cni\x2d962f0d22\x2dff93\x2da222\x2de086\x2ddfdd34f226bc.mount: Deactivated successfully. 
Feb 13 19:58:06.253084 containerd[1731]: time="2025-02-13T19:58:06.252927408Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\"" Feb 13 19:58:06.253084 containerd[1731]: time="2025-02-13T19:58:06.253023408Z" level=info msg="StopPodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\"" Feb 13 19:58:06.253540 containerd[1731]: time="2025-02-13T19:58:06.253107909Z" level=info msg="TearDown network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" successfully" Feb 13 19:58:06.253540 containerd[1731]: time="2025-02-13T19:58:06.253122609Z" level=info msg="StopPodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" returns successfully" Feb 13 19:58:06.254692 containerd[1731]: time="2025-02-13T19:58:06.254498417Z" level=info msg="TearDown network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" successfully" Feb 13 19:58:06.254692 containerd[1731]: time="2025-02-13T19:58:06.254522817Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" returns successfully" Feb 13 19:58:06.255295 containerd[1731]: time="2025-02-13T19:58:06.255004819Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\"" Feb 13 19:58:06.255295 containerd[1731]: time="2025-02-13T19:58:06.255024320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pxg7l,Uid:cdc023d6-7b51-4b02-8664-cf9e0c3e7acb,Namespace:default,Attempt:2,}" Feb 13 19:58:06.255295 containerd[1731]: time="2025-02-13T19:58:06.255093520Z" level=info msg="TearDown network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" successfully" Feb 13 19:58:06.255295 containerd[1731]: time="2025-02-13T19:58:06.255106320Z" level=info msg="StopPodSandbox for 
\"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" returns successfully" Feb 13 19:58:06.257409 containerd[1731]: time="2025-02-13T19:58:06.255615523Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\"" Feb 13 19:58:06.257409 containerd[1731]: time="2025-02-13T19:58:06.255707023Z" level=info msg="TearDown network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" successfully" Feb 13 19:58:06.257409 containerd[1731]: time="2025-02-13T19:58:06.255720323Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" returns successfully" Feb 13 19:58:06.259641 containerd[1731]: time="2025-02-13T19:58:06.259611745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:5,}" Feb 13 19:58:06.452109 containerd[1731]: time="2025-02-13T19:58:06.451906128Z" level=error msg="Failed to destroy network for sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:06.452678 containerd[1731]: time="2025-02-13T19:58:06.452376531Z" level=error msg="encountered an error cleaning up failed sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:06.452678 containerd[1731]: time="2025-02-13T19:58:06.452501431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:5,} 
failed, error" error="failed to setup network for sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:06.452914 kubelet[2433]: E0213 19:58:06.452861 2433 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:06.453120 kubelet[2433]: E0213 19:58:06.453049 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:58:06.453454 kubelet[2433]: E0213 19:58:06.453413 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:58:06.453873 kubelet[2433]: E0213 19:58:06.453528 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdxqp" podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5" Feb 13 19:58:06.465975 containerd[1731]: time="2025-02-13T19:58:06.465697906Z" level=error msg="Failed to destroy network for sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:06.466693 containerd[1731]: time="2025-02-13T19:58:06.466128408Z" level=error msg="encountered an error cleaning up failed sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:06.466693 containerd[1731]: time="2025-02-13T19:58:06.466219209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pxg7l,Uid:cdc023d6-7b51-4b02-8664-cf9e0c3e7acb,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:06.467132 kubelet[2433]: E0213 19:58:06.466569 2433 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:06.467132 kubelet[2433]: E0213 19:58:06.466648 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pxg7l" Feb 13 19:58:06.467132 kubelet[2433]: E0213 19:58:06.466703 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pxg7l" Feb 13 19:58:06.467328 kubelet[2433]: E0213 19:58:06.466774 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-pxg7l_default(cdc023d6-7b51-4b02-8664-cf9e0c3e7acb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-pxg7l_default(cdc023d6-7b51-4b02-8664-cf9e0c3e7acb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pxg7l" podUID="cdc023d6-7b51-4b02-8664-cf9e0c3e7acb" Feb 13 19:58:07.006186 kubelet[2433]: E0213 19:58:07.006058 2433 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:07.021940 kubelet[2433]: E0213 19:58:07.021798 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:07.241571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed-shm.mount: Deactivated successfully. Feb 13 19:58:07.241711 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3-shm.mount: Deactivated successfully. Feb 13 19:58:07.268517 kubelet[2433]: I0213 19:58:07.267274 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed" Feb 13 19:58:07.275420 containerd[1731]: time="2025-02-13T19:58:07.271754044Z" level=info msg="StopPodSandbox for \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\"" Feb 13 19:58:07.275420 containerd[1731]: time="2025-02-13T19:58:07.272075246Z" level=info msg="Ensure that sandbox f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed in task-service has been cleanup successfully" Feb 13 19:58:07.277545 systemd[1]: run-netns-cni\x2d6ba947e3\x2dc4d2\x2d168d\x2dd523\x2da3462f70a152.mount: Deactivated successfully. 
Feb 13 19:58:07.279209 containerd[1731]: time="2025-02-13T19:58:07.279179686Z" level=info msg="TearDown network for sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\" successfully" Feb 13 19:58:07.279662 containerd[1731]: time="2025-02-13T19:58:07.279636189Z" level=info msg="StopPodSandbox for \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\" returns successfully" Feb 13 19:58:07.280581 containerd[1731]: time="2025-02-13T19:58:07.280557894Z" level=info msg="StopPodSandbox for \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\"" Feb 13 19:58:07.280763 containerd[1731]: time="2025-02-13T19:58:07.280743895Z" level=info msg="TearDown network for sandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\" successfully" Feb 13 19:58:07.281053 containerd[1731]: time="2025-02-13T19:58:07.280888096Z" level=info msg="StopPodSandbox for \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\" returns successfully" Feb 13 19:58:07.281508 kubelet[2433]: I0213 19:58:07.281485 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3" Feb 13 19:58:07.283412 containerd[1731]: time="2025-02-13T19:58:07.282346304Z" level=info msg="StopPodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\"" Feb 13 19:58:07.283412 containerd[1731]: time="2025-02-13T19:58:07.282473805Z" level=info msg="StopPodSandbox for \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\"" Feb 13 19:58:07.283412 containerd[1731]: time="2025-02-13T19:58:07.282714006Z" level=info msg="Ensure that sandbox 73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3 in task-service has been cleanup successfully" Feb 13 19:58:07.283412 containerd[1731]: time="2025-02-13T19:58:07.282823406Z" level=info msg="TearDown network for sandbox 
\"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" successfully" Feb 13 19:58:07.283412 containerd[1731]: time="2025-02-13T19:58:07.282839207Z" level=info msg="StopPodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" returns successfully" Feb 13 19:58:07.283814 containerd[1731]: time="2025-02-13T19:58:07.283790912Z" level=info msg="TearDown network for sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\" successfully" Feb 13 19:58:07.285195 systemd[1]: run-netns-cni\x2dd0e5d6c8\x2d70fe\x2d6e9a\x2defa5\x2d69dafd8b66e6.mount: Deactivated successfully. Feb 13 19:58:07.285686 containerd[1731]: time="2025-02-13T19:58:07.285429821Z" level=info msg="StopPodSandbox for \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\" returns successfully" Feb 13 19:58:07.285820 containerd[1731]: time="2025-02-13T19:58:07.285795023Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\"" Feb 13 19:58:07.285913 containerd[1731]: time="2025-02-13T19:58:07.285893324Z" level=info msg="TearDown network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" successfully" Feb 13 19:58:07.285955 containerd[1731]: time="2025-02-13T19:58:07.285914324Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" returns successfully" Feb 13 19:58:07.286004 containerd[1731]: time="2025-02-13T19:58:07.285986724Z" level=info msg="StopPodSandbox for \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\"" Feb 13 19:58:07.287223 containerd[1731]: time="2025-02-13T19:58:07.286061425Z" level=info msg="TearDown network for sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\" successfully" Feb 13 19:58:07.287223 containerd[1731]: time="2025-02-13T19:58:07.286078125Z" level=info msg="StopPodSandbox for 
\"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\" returns successfully" Feb 13 19:58:07.287223 containerd[1731]: time="2025-02-13T19:58:07.286559628Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\"" Feb 13 19:58:07.287223 containerd[1731]: time="2025-02-13T19:58:07.286645228Z" level=info msg="TearDown network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" successfully" Feb 13 19:58:07.287223 containerd[1731]: time="2025-02-13T19:58:07.286659728Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" returns successfully" Feb 13 19:58:07.287223 containerd[1731]: time="2025-02-13T19:58:07.286726028Z" level=info msg="StopPodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\"" Feb 13 19:58:07.287223 containerd[1731]: time="2025-02-13T19:58:07.286799929Z" level=info msg="TearDown network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" successfully" Feb 13 19:58:07.287223 containerd[1731]: time="2025-02-13T19:58:07.286811429Z" level=info msg="StopPodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" returns successfully" Feb 13 19:58:07.287556 containerd[1731]: time="2025-02-13T19:58:07.287520033Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\"" Feb 13 19:58:07.288619 containerd[1731]: time="2025-02-13T19:58:07.287611333Z" level=info msg="TearDown network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" successfully" Feb 13 19:58:07.288619 containerd[1731]: time="2025-02-13T19:58:07.287631334Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" returns successfully" Feb 13 19:58:07.288840 containerd[1731]: time="2025-02-13T19:58:07.288735940Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pxg7l,Uid:cdc023d6-7b51-4b02-8664-cf9e0c3e7acb,Namespace:default,Attempt:3,}" Feb 13 19:58:07.296449 containerd[1731]: time="2025-02-13T19:58:07.296267882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:6,}" Feb 13 19:58:07.466769 containerd[1731]: time="2025-02-13T19:58:07.466672442Z" level=error msg="Failed to destroy network for sandbox \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:07.468120 containerd[1731]: time="2025-02-13T19:58:07.467773048Z" level=error msg="encountered an error cleaning up failed sandbox \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:07.468120 containerd[1731]: time="2025-02-13T19:58:07.467874448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:07.468507 kubelet[2433]: E0213 19:58:07.468164 2433 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:07.468507 kubelet[2433]: E0213 19:58:07.468251 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:58:07.468507 kubelet[2433]: E0213 19:58:07.468284 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdxqp" Feb 13 19:58:07.468844 kubelet[2433]: E0213 19:58:07.468349 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdxqp_calico-system(86c89c03-bbc5-4c29-8bcf-5f822f6653f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdxqp" 
podUID="86c89c03-bbc5-4c29-8bcf-5f822f6653f5" Feb 13 19:58:07.478774 containerd[1731]: time="2025-02-13T19:58:07.478159506Z" level=error msg="Failed to destroy network for sandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:07.478774 containerd[1731]: time="2025-02-13T19:58:07.478545308Z" level=error msg="encountered an error cleaning up failed sandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:07.478774 containerd[1731]: time="2025-02-13T19:58:07.478635809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pxg7l,Uid:cdc023d6-7b51-4b02-8664-cf9e0c3e7acb,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:07.479049 kubelet[2433]: E0213 19:58:07.478890 2433 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:58:07.479049 kubelet[2433]: E0213 19:58:07.478967 2433 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pxg7l" Feb 13 19:58:07.479049 kubelet[2433]: E0213 19:58:07.478997 2433 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pxg7l" Feb 13 19:58:07.479178 kubelet[2433]: E0213 19:58:07.479055 2433 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-pxg7l_default(cdc023d6-7b51-4b02-8664-cf9e0c3e7acb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-pxg7l_default(cdc023d6-7b51-4b02-8664-cf9e0c3e7acb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pxg7l" podUID="cdc023d6-7b51-4b02-8664-cf9e0c3e7acb" Feb 13 19:58:07.864783 containerd[1731]: time="2025-02-13T19:58:07.864712383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:07.867244 containerd[1731]: time="2025-02-13T19:58:07.867186097Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:58:07.869541 containerd[1731]: time="2025-02-13T19:58:07.869480810Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:07.878773 containerd[1731]: time="2025-02-13T19:58:07.877756956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:07.878773 containerd[1731]: time="2025-02-13T19:58:07.878605561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.663348126s" Feb 13 19:58:07.878773 containerd[1731]: time="2025-02-13T19:58:07.878637661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:58:07.886718 containerd[1731]: time="2025-02-13T19:58:07.886687607Z" level=info msg="CreateContainer within sandbox \"60f34b7890b352702442ad8b87d1d22de2faa2fd3dd1f28c55f192cfe427be01\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:58:07.917271 containerd[1731]: time="2025-02-13T19:58:07.917218378Z" level=info msg="CreateContainer within sandbox \"60f34b7890b352702442ad8b87d1d22de2faa2fd3dd1f28c55f192cfe427be01\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"82ff1c66995e0fa2653d4f2502d8902e1d9f4e09dcfb929e4240ff68da4052fb\"" Feb 13 19:58:07.918160 containerd[1731]: time="2025-02-13T19:58:07.918086083Z" 
level=info msg="StartContainer for \"82ff1c66995e0fa2653d4f2502d8902e1d9f4e09dcfb929e4240ff68da4052fb\"" Feb 13 19:58:07.952611 systemd[1]: Started cri-containerd-82ff1c66995e0fa2653d4f2502d8902e1d9f4e09dcfb929e4240ff68da4052fb.scope - libcontainer container 82ff1c66995e0fa2653d4f2502d8902e1d9f4e09dcfb929e4240ff68da4052fb. Feb 13 19:58:07.989190 containerd[1731]: time="2025-02-13T19:58:07.989076183Z" level=info msg="StartContainer for \"82ff1c66995e0fa2653d4f2502d8902e1d9f4e09dcfb929e4240ff68da4052fb\" returns successfully" Feb 13 19:58:08.022694 kubelet[2433]: E0213 19:58:08.022580 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:08.216141 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:58:08.216338 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:58:08.241761 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480-shm.mount: Deactivated successfully. Feb 13 19:58:08.242139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708-shm.mount: Deactivated successfully. Feb 13 19:58:08.242350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579481659.mount: Deactivated successfully. 
Feb 13 19:58:08.297433 kubelet[2433]: I0213 19:58:08.297374 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708" Feb 13 19:58:08.298340 containerd[1731]: time="2025-02-13T19:58:08.298279724Z" level=info msg="StopPodSandbox for \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\"" Feb 13 19:58:08.299880 containerd[1731]: time="2025-02-13T19:58:08.298604926Z" level=info msg="Ensure that sandbox a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708 in task-service has been cleanup successfully" Feb 13 19:58:08.302372 containerd[1731]: time="2025-02-13T19:58:08.302331147Z" level=info msg="TearDown network for sandbox \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\" successfully" Feb 13 19:58:08.304413 containerd[1731]: time="2025-02-13T19:58:08.302366447Z" level=info msg="StopPodSandbox for \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\" returns successfully" Feb 13 19:58:08.304413 containerd[1731]: time="2025-02-13T19:58:08.302967250Z" level=info msg="StopPodSandbox for \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\"" Feb 13 19:58:08.304413 containerd[1731]: time="2025-02-13T19:58:08.303077951Z" level=info msg="TearDown network for sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\" successfully" Feb 13 19:58:08.304413 containerd[1731]: time="2025-02-13T19:58:08.303092651Z" level=info msg="StopPodSandbox for \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\" returns successfully" Feb 13 19:58:08.304732 containerd[1731]: time="2025-02-13T19:58:08.304705760Z" level=info msg="StopPodSandbox for \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\"" Feb 13 19:58:08.304990 containerd[1731]: time="2025-02-13T19:58:08.304889661Z" level=info msg="TearDown network for sandbox 
\"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\" successfully" Feb 13 19:58:08.304990 containerd[1731]: time="2025-02-13T19:58:08.304912561Z" level=info msg="StopPodSandbox for \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\" returns successfully" Feb 13 19:58:08.305535 systemd[1]: run-netns-cni\x2df019b824\x2d2bee\x2d3b01\x2dec49\x2d1f18b05d70f6.mount: Deactivated successfully. Feb 13 19:58:08.306870 kubelet[2433]: I0213 19:58:08.305591 2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480" Feb 13 19:58:08.308198 containerd[1731]: time="2025-02-13T19:58:08.307158674Z" level=info msg="StopPodSandbox for \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\"" Feb 13 19:58:08.308198 containerd[1731]: time="2025-02-13T19:58:08.307996679Z" level=info msg="StopPodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\"" Feb 13 19:58:08.308198 containerd[1731]: time="2025-02-13T19:58:08.308094979Z" level=info msg="TearDown network for sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" successfully" Feb 13 19:58:08.308198 containerd[1731]: time="2025-02-13T19:58:08.308114079Z" level=info msg="StopPodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" returns successfully" Feb 13 19:58:08.308525 containerd[1731]: time="2025-02-13T19:58:08.308500982Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\"" Feb 13 19:58:08.308789 containerd[1731]: time="2025-02-13T19:58:08.308769183Z" level=info msg="TearDown network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" successfully" Feb 13 19:58:08.309062 containerd[1731]: time="2025-02-13T19:58:08.308872284Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" returns 
successfully" Feb 13 19:58:08.309062 containerd[1731]: time="2025-02-13T19:58:08.308660982Z" level=info msg="Ensure that sandbox 71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480 in task-service has been cleanup successfully" Feb 13 19:58:08.310495 containerd[1731]: time="2025-02-13T19:58:08.309484987Z" level=info msg="TearDown network for sandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\" successfully" Feb 13 19:58:08.310495 containerd[1731]: time="2025-02-13T19:58:08.309509587Z" level=info msg="StopPodSandbox for \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\" returns successfully" Feb 13 19:58:08.310605 kubelet[2433]: I0213 19:58:08.310443 2433 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4s8tt" podStartSLOduration=2.847868072 podStartE2EDuration="21.310418792s" podCreationTimestamp="2025-02-13 19:57:47 +0000 UTC" firstStartedPulling="2025-02-13 19:57:49.417141947 +0000 UTC m=+3.159911455" lastFinishedPulling="2025-02-13 19:58:07.879692667 +0000 UTC m=+21.622462175" observedRunningTime="2025-02-13 19:58:08.310190691 +0000 UTC m=+22.052960099" watchObservedRunningTime="2025-02-13 19:58:08.310418792 +0000 UTC m=+22.053188300" Feb 13 19:58:08.310995 containerd[1731]: time="2025-02-13T19:58:08.310830395Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\"" Feb 13 19:58:08.310995 containerd[1731]: time="2025-02-13T19:58:08.310928895Z" level=info msg="TearDown network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" successfully" Feb 13 19:58:08.310995 containerd[1731]: time="2025-02-13T19:58:08.310941795Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" returns successfully" Feb 13 19:58:08.312519 systemd[1]: run-netns-cni\x2db2240d2b\x2d38db\x2daebc\x2d53b6\x2df62cd8fc02ed.mount: Deactivated successfully. 
Feb 13 19:58:08.314027 containerd[1731]: time="2025-02-13T19:58:08.313534010Z" level=info msg="StopPodSandbox for \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\"" Feb 13 19:58:08.314027 containerd[1731]: time="2025-02-13T19:58:08.313626610Z" level=info msg="TearDown network for sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\" successfully" Feb 13 19:58:08.314027 containerd[1731]: time="2025-02-13T19:58:08.313640010Z" level=info msg="StopPodSandbox for \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\" returns successfully" Feb 13 19:58:08.314027 containerd[1731]: time="2025-02-13T19:58:08.313723211Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\"" Feb 13 19:58:08.314027 containerd[1731]: time="2025-02-13T19:58:08.313793511Z" level=info msg="TearDown network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" successfully" Feb 13 19:58:08.314027 containerd[1731]: time="2025-02-13T19:58:08.313805111Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" returns successfully" Feb 13 19:58:08.316019 containerd[1731]: time="2025-02-13T19:58:08.315685022Z" level=info msg="StopPodSandbox for \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\"" Feb 13 19:58:08.316019 containerd[1731]: time="2025-02-13T19:58:08.315736422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:7,}" Feb 13 19:58:08.316019 containerd[1731]: time="2025-02-13T19:58:08.315773422Z" level=info msg="TearDown network for sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\" successfully" Feb 13 19:58:08.316019 containerd[1731]: time="2025-02-13T19:58:08.315786323Z" level=info msg="StopPodSandbox for 
\"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\" returns successfully" Feb 13 19:58:08.318526 containerd[1731]: time="2025-02-13T19:58:08.318499138Z" level=info msg="StopPodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\"" Feb 13 19:58:08.319484 containerd[1731]: time="2025-02-13T19:58:08.318686439Z" level=info msg="TearDown network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" successfully" Feb 13 19:58:08.319484 containerd[1731]: time="2025-02-13T19:58:08.318706439Z" level=info msg="StopPodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" returns successfully" Feb 13 19:58:08.320078 containerd[1731]: time="2025-02-13T19:58:08.319793445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pxg7l,Uid:cdc023d6-7b51-4b02-8664-cf9e0c3e7acb,Namespace:default,Attempt:4,}" Feb 13 19:58:08.543535 systemd-networkd[1583]: cali973e4cc68a6: Link UP Feb 13 19:58:08.544215 systemd-networkd[1583]: cali973e4cc68a6: Gained carrier Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.413 [INFO][3391] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.431 [INFO][3391] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.15-k8s-csi--node--driver--rdxqp-eth0 csi-node-driver- calico-system 86c89c03-bbc5-4c29-8bcf-5f822f6653f5 1269 0 2025-02-13 19:57:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.8.15 csi-node-driver-rdxqp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali973e4cc68a6 [] []}} 
ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Namespace="calico-system" Pod="csi-node-driver-rdxqp" WorkloadEndpoint="10.200.8.15-k8s-csi--node--driver--rdxqp-" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.431 [INFO][3391] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Namespace="calico-system" Pod="csi-node-driver-rdxqp" WorkloadEndpoint="10.200.8.15-k8s-csi--node--driver--rdxqp-eth0" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.484 [INFO][3414] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" HandleID="k8s-pod-network.66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Workload="10.200.8.15-k8s-csi--node--driver--rdxqp-eth0" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.495 [INFO][3414] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" HandleID="k8s-pod-network.66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Workload="10.200.8.15-k8s-csi--node--driver--rdxqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a5840), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.8.15", "pod":"csi-node-driver-rdxqp", "timestamp":"2025-02-13 19:58:08.484909075 +0000 UTC"}, Hostname:"10.200.8.15", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.496 [INFO][3414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.496 [INFO][3414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.496 [INFO][3414] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.15' Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.500 [INFO][3414] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" host="10.200.8.15" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.504 [INFO][3414] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.15" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.508 [INFO][3414] ipam/ipam.go 489: Trying affinity for 192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.510 [INFO][3414] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.516 [INFO][3414] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.516 [INFO][3414] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" host="10.200.8.15" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.518 [INFO][3414] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.522 [INFO][3414] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" host="10.200.8.15" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.529 [INFO][3414] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.65/26] block=192.168.44.64/26 
handle="k8s-pod-network.66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" host="10.200.8.15" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.529 [INFO][3414] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.65/26] handle="k8s-pod-network.66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" host="10.200.8.15" Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.529 [INFO][3414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:58:08.559494 containerd[1731]: 2025-02-13 19:58:08.529 [INFO][3414] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.65/26] IPv6=[] ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" HandleID="k8s-pod-network.66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Workload="10.200.8.15-k8s-csi--node--driver--rdxqp-eth0" Feb 13 19:58:08.561161 containerd[1731]: 2025-02-13 19:58:08.532 [INFO][3391] cni-plugin/k8s.go 386: Populated endpoint ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Namespace="calico-system" Pod="csi-node-driver-rdxqp" WorkloadEndpoint="10.200.8.15-k8s-csi--node--driver--rdxqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.15-k8s-csi--node--driver--rdxqp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"86c89c03-bbc5-4c29-8bcf-5f822f6653f5", ResourceVersion:"1269", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.15", ContainerID:"", Pod:"csi-node-driver-rdxqp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali973e4cc68a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:58:08.561161 containerd[1731]: 2025-02-13 19:58:08.532 [INFO][3391] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.65/32] ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Namespace="calico-system" Pod="csi-node-driver-rdxqp" WorkloadEndpoint="10.200.8.15-k8s-csi--node--driver--rdxqp-eth0" Feb 13 19:58:08.561161 containerd[1731]: 2025-02-13 19:58:08.532 [INFO][3391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali973e4cc68a6 ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Namespace="calico-system" Pod="csi-node-driver-rdxqp" WorkloadEndpoint="10.200.8.15-k8s-csi--node--driver--rdxqp-eth0" Feb 13 19:58:08.561161 containerd[1731]: 2025-02-13 19:58:08.544 [INFO][3391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Namespace="calico-system" Pod="csi-node-driver-rdxqp" WorkloadEndpoint="10.200.8.15-k8s-csi--node--driver--rdxqp-eth0" Feb 13 19:58:08.561161 containerd[1731]: 2025-02-13 19:58:08.545 [INFO][3391] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Namespace="calico-system" 
Pod="csi-node-driver-rdxqp" WorkloadEndpoint="10.200.8.15-k8s-csi--node--driver--rdxqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.15-k8s-csi--node--driver--rdxqp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"86c89c03-bbc5-4c29-8bcf-5f822f6653f5", ResourceVersion:"1269", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.15", ContainerID:"66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b", Pod:"csi-node-driver-rdxqp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali973e4cc68a6", MAC:"d6:a8:39:87:8a:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:58:08.561161 containerd[1731]: 2025-02-13 19:58:08.557 [INFO][3391] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b" Namespace="calico-system" Pod="csi-node-driver-rdxqp" WorkloadEndpoint="10.200.8.15-k8s-csi--node--driver--rdxqp-eth0" Feb 13 19:58:08.584655 
containerd[1731]: time="2025-02-13T19:58:08.584517136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:58:08.584655 containerd[1731]: time="2025-02-13T19:58:08.584662336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:58:08.584985 containerd[1731]: time="2025-02-13T19:58:08.584686737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:08.585500 containerd[1731]: time="2025-02-13T19:58:08.585301440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:08.606577 systemd[1]: Started cri-containerd-66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b.scope - libcontainer container 66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b. 
Feb 13 19:58:08.632088 containerd[1731]: time="2025-02-13T19:58:08.632033203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdxqp,Uid:86c89c03-bbc5-4c29-8bcf-5f822f6653f5,Namespace:calico-system,Attempt:7,} returns sandbox id \"66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b\"" Feb 13 19:58:08.634382 containerd[1731]: time="2025-02-13T19:58:08.634117915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:58:08.641831 systemd-networkd[1583]: cali2bb99eaf7c7: Link UP Feb 13 19:58:08.643005 systemd-networkd[1583]: cali2bb99eaf7c7: Gained carrier Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.424 [INFO][3401] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.442 [INFO][3401] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0 nginx-deployment-7fcdb87857- default cdc023d6-7b51-4b02-8664-cf9e0c3e7acb 1343 0 2025-02-13 19:58:02 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.15 nginx-deployment-7fcdb87857-pxg7l eth0 default [] [] [kns.default ksa.default.default] cali2bb99eaf7c7 [] []}} ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Namespace="default" Pod="nginx-deployment-7fcdb87857-pxg7l" WorkloadEndpoint="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.442 [INFO][3401] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Namespace="default" Pod="nginx-deployment-7fcdb87857-pxg7l" WorkloadEndpoint="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0" Feb 13 
19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.503 [INFO][3418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" HandleID="k8s-pod-network.84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Workload="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.516 [INFO][3418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" HandleID="k8s-pod-network.84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Workload="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291440), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.15", "pod":"nginx-deployment-7fcdb87857-pxg7l", "timestamp":"2025-02-13 19:58:08.503467179 +0000 UTC"}, Hostname:"10.200.8.15", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.517 [INFO][3418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.529 [INFO][3418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.529 [INFO][3418] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.15' Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.600 [INFO][3418] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" host="10.200.8.15" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.605 [INFO][3418] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.15" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.610 [INFO][3418] ipam/ipam.go 489: Trying affinity for 192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.613 [INFO][3418] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.615 [INFO][3418] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.615 [INFO][3418] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" host="10.200.8.15" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.617 [INFO][3418] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.628 [INFO][3418] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" host="10.200.8.15" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.636 [INFO][3418] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.66/26] block=192.168.44.64/26 
handle="k8s-pod-network.84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" host="10.200.8.15" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.636 [INFO][3418] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.66/26] handle="k8s-pod-network.84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" host="10.200.8.15" Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.636 [INFO][3418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:58:08.653484 containerd[1731]: 2025-02-13 19:58:08.636 [INFO][3418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.66/26] IPv6=[] ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" HandleID="k8s-pod-network.84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Workload="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0" Feb 13 19:58:08.654529 containerd[1731]: 2025-02-13 19:58:08.638 [INFO][3401] cni-plugin/k8s.go 386: Populated endpoint ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Namespace="default" Pod="nginx-deployment-7fcdb87857-pxg7l" WorkloadEndpoint="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"cdc023d6-7b51-4b02-8664-cf9e0c3e7acb", ResourceVersion:"1343", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 58, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.15", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-pxg7l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali2bb99eaf7c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:58:08.654529 containerd[1731]: 2025-02-13 19:58:08.638 [INFO][3401] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.66/32] ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Namespace="default" Pod="nginx-deployment-7fcdb87857-pxg7l" WorkloadEndpoint="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0" Feb 13 19:58:08.654529 containerd[1731]: 2025-02-13 19:58:08.638 [INFO][3401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bb99eaf7c7 ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Namespace="default" Pod="nginx-deployment-7fcdb87857-pxg7l" WorkloadEndpoint="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0" Feb 13 19:58:08.654529 containerd[1731]: 2025-02-13 19:58:08.643 [INFO][3401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Namespace="default" Pod="nginx-deployment-7fcdb87857-pxg7l" WorkloadEndpoint="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0" Feb 13 19:58:08.654529 containerd[1731]: 2025-02-13 19:58:08.644 [INFO][3401] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Namespace="default" Pod="nginx-deployment-7fcdb87857-pxg7l" 
WorkloadEndpoint="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"cdc023d6-7b51-4b02-8664-cf9e0c3e7acb", ResourceVersion:"1343", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 58, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.15", ContainerID:"84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae", Pod:"nginx-deployment-7fcdb87857-pxg7l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali2bb99eaf7c7", MAC:"22:7f:27:6c:85:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:58:08.654529 containerd[1731]: 2025-02-13 19:58:08.651 [INFO][3401] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae" Namespace="default" Pod="nginx-deployment-7fcdb87857-pxg7l" WorkloadEndpoint="10.200.8.15-k8s-nginx--deployment--7fcdb87857--pxg7l-eth0" Feb 13 19:58:08.680804 containerd[1731]: time="2025-02-13T19:58:08.680572176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:58:08.680804 containerd[1731]: time="2025-02-13T19:58:08.680625977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:58:08.680804 containerd[1731]: time="2025-02-13T19:58:08.680637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:08.680804 containerd[1731]: time="2025-02-13T19:58:08.680729777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:08.698610 systemd[1]: Started cri-containerd-84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae.scope - libcontainer container 84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae. Feb 13 19:58:08.739125 containerd[1731]: time="2025-02-13T19:58:08.739074806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pxg7l,Uid:cdc023d6-7b51-4b02-8664-cf9e0c3e7acb,Namespace:default,Attempt:4,} returns sandbox id \"84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae\"" Feb 13 19:58:09.023569 kubelet[2433]: E0213 19:58:09.023506 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:09.840443 kernel: bpftool[3657]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:58:09.871739 systemd-networkd[1583]: cali2bb99eaf7c7: Gained IPv6LL Feb 13 19:58:10.024708 kubelet[2433]: E0213 19:58:10.024649 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:10.161257 systemd-networkd[1583]: vxlan.calico: Link UP Feb 13 19:58:10.161269 systemd-networkd[1583]: vxlan.calico: Gained carrier Feb 13 19:58:10.191552 systemd-networkd[1583]: cali973e4cc68a6: Gained 
IPv6LL Feb 13 19:58:10.261661 containerd[1731]: time="2025-02-13T19:58:10.261588678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:10.266175 containerd[1731]: time="2025-02-13T19:58:10.266093704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:58:10.271425 containerd[1731]: time="2025-02-13T19:58:10.271045632Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:10.275497 containerd[1731]: time="2025-02-13T19:58:10.275467256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:10.277444 containerd[1731]: time="2025-02-13T19:58:10.277410967Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.643241352s" Feb 13 19:58:10.277574 containerd[1731]: time="2025-02-13T19:58:10.277556068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:58:10.281407 containerd[1731]: time="2025-02-13T19:58:10.280042182Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:58:10.281407 containerd[1731]: time="2025-02-13T19:58:10.281312489Z" level=info msg="CreateContainer within sandbox \"66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:58:10.322065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780120276.mount: Deactivated successfully. Feb 13 19:58:10.333426 containerd[1731]: time="2025-02-13T19:58:10.331489672Z" level=info msg="CreateContainer within sandbox \"66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fe1145267fc724af79418f8f8c5fff5541254e96ddc03b9d4794e80cb19501a8\"" Feb 13 19:58:10.333426 containerd[1731]: time="2025-02-13T19:58:10.332438477Z" level=info msg="StartContainer for \"fe1145267fc724af79418f8f8c5fff5541254e96ddc03b9d4794e80cb19501a8\"" Feb 13 19:58:10.378608 systemd[1]: Started cri-containerd-fe1145267fc724af79418f8f8c5fff5541254e96ddc03b9d4794e80cb19501a8.scope - libcontainer container fe1145267fc724af79418f8f8c5fff5541254e96ddc03b9d4794e80cb19501a8. Feb 13 19:58:10.428492 containerd[1731]: time="2025-02-13T19:58:10.425808503Z" level=info msg="StartContainer for \"fe1145267fc724af79418f8f8c5fff5541254e96ddc03b9d4794e80cb19501a8\" returns successfully" Feb 13 19:58:10.630814 kubelet[2433]: I0213 19:58:10.630742 2433 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:58:11.025473 kubelet[2433]: E0213 19:58:11.025357 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:11.279666 systemd-networkd[1583]: vxlan.calico: Gained IPv6LL Feb 13 19:58:12.025883 kubelet[2433]: E0213 19:58:12.025806 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:13.026612 kubelet[2433]: E0213 19:58:13.026553 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:13.280034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238482721.mount: Deactivated 
successfully. Feb 13 19:58:14.027314 kubelet[2433]: E0213 19:58:14.027226 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:14.513603 containerd[1731]: time="2025-02-13T19:58:14.513538204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:14.515576 containerd[1731]: time="2025-02-13T19:58:14.515521821Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 19:58:14.517804 containerd[1731]: time="2025-02-13T19:58:14.517697341Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:14.522756 containerd[1731]: time="2025-02-13T19:58:14.522687285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:14.523902 containerd[1731]: time="2025-02-13T19:58:14.523731094Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.243650712s" Feb 13 19:58:14.523902 containerd[1731]: time="2025-02-13T19:58:14.523770995Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:58:14.525342 containerd[1731]: time="2025-02-13T19:58:14.525129707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:58:14.526533 containerd[1731]: 
time="2025-02-13T19:58:14.526479519Z" level=info msg="CreateContainer within sandbox \"84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:58:14.559572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4139815699.mount: Deactivated successfully. Feb 13 19:58:14.565071 containerd[1731]: time="2025-02-13T19:58:14.565027063Z" level=info msg="CreateContainer within sandbox \"84951c4400732a6affefe91a0f8bd4556596dca8a4331b8139ffcc6b83bd72ae\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"5e6c38555f3d756a91fc6b9a3341fd5b21d57d15671d9f8c5122e57aff771bec\"" Feb 13 19:58:14.565736 containerd[1731]: time="2025-02-13T19:58:14.565704269Z" level=info msg="StartContainer for \"5e6c38555f3d756a91fc6b9a3341fd5b21d57d15671d9f8c5122e57aff771bec\"" Feb 13 19:58:14.601552 systemd[1]: Started cri-containerd-5e6c38555f3d756a91fc6b9a3341fd5b21d57d15671d9f8c5122e57aff771bec.scope - libcontainer container 5e6c38555f3d756a91fc6b9a3341fd5b21d57d15671d9f8c5122e57aff771bec. 
Feb 13 19:58:14.631894 containerd[1731]: time="2025-02-13T19:58:14.631743257Z" level=info msg="StartContainer for \"5e6c38555f3d756a91fc6b9a3341fd5b21d57d15671d9f8c5122e57aff771bec\" returns successfully" Feb 13 19:58:15.027513 kubelet[2433]: E0213 19:58:15.027453 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:15.368839 kubelet[2433]: I0213 19:58:15.368661 2433 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-pxg7l" podStartSLOduration=7.5839936340000005 podStartE2EDuration="13.368641526s" podCreationTimestamp="2025-02-13 19:58:02 +0000 UTC" firstStartedPulling="2025-02-13 19:58:08.740263813 +0000 UTC m=+22.483033221" lastFinishedPulling="2025-02-13 19:58:14.524911605 +0000 UTC m=+28.267681113" observedRunningTime="2025-02-13 19:58:15.368499525 +0000 UTC m=+29.111268933" watchObservedRunningTime="2025-02-13 19:58:15.368641526 +0000 UTC m=+29.111411034" Feb 13 19:58:15.940874 containerd[1731]: time="2025-02-13T19:58:15.940811627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:15.943889 containerd[1731]: time="2025-02-13T19:58:15.943808854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:58:15.947918 containerd[1731]: time="2025-02-13T19:58:15.947849890Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:15.953587 containerd[1731]: time="2025-02-13T19:58:15.953527340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Feb 13 19:58:15.954630 containerd[1731]: time="2025-02-13T19:58:15.954281647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.42911324s" Feb 13 19:58:15.954630 containerd[1731]: time="2025-02-13T19:58:15.954326848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:58:15.957033 containerd[1731]: time="2025-02-13T19:58:15.957001371Z" level=info msg="CreateContainer within sandbox \"66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:58:16.004828 containerd[1731]: time="2025-02-13T19:58:16.004734397Z" level=info msg="CreateContainer within sandbox \"66575aa1320d4c49305e11081d85187f2d2aa2fb42a58c9b34b4bac0f441cb2b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b031c953e6d2ba7be0d7aa9cafbd32f15e4951cc382cbc5fd88ab84dfa6a4b43\"" Feb 13 19:58:16.005625 containerd[1731]: time="2025-02-13T19:58:16.005409803Z" level=info msg="StartContainer for \"b031c953e6d2ba7be0d7aa9cafbd32f15e4951cc382cbc5fd88ab84dfa6a4b43\"" Feb 13 19:58:16.029427 kubelet[2433]: E0213 19:58:16.028483 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:16.043577 systemd[1]: Started cri-containerd-b031c953e6d2ba7be0d7aa9cafbd32f15e4951cc382cbc5fd88ab84dfa6a4b43.scope - libcontainer container b031c953e6d2ba7be0d7aa9cafbd32f15e4951cc382cbc5fd88ab84dfa6a4b43. 
Feb 13 19:58:16.078934 containerd[1731]: time="2025-02-13T19:58:16.078668656Z" level=info msg="StartContainer for \"b031c953e6d2ba7be0d7aa9cafbd32f15e4951cc382cbc5fd88ab84dfa6a4b43\" returns successfully" Feb 13 19:58:16.236306 kubelet[2433]: I0213 19:58:16.235740 2433 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:58:16.236306 kubelet[2433]: I0213 19:58:16.235797 2433 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:58:16.379026 kubelet[2433]: I0213 19:58:16.378951 2433 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rdxqp" podStartSLOduration=22.057104487 podStartE2EDuration="29.378927433s" podCreationTimestamp="2025-02-13 19:57:47 +0000 UTC" firstStartedPulling="2025-02-13 19:58:08.633642812 +0000 UTC m=+22.376412220" lastFinishedPulling="2025-02-13 19:58:15.955465458 +0000 UTC m=+29.698235166" observedRunningTime="2025-02-13 19:58:16.378820032 +0000 UTC m=+30.121589540" watchObservedRunningTime="2025-02-13 19:58:16.378927433 +0000 UTC m=+30.121696941" Feb 13 19:58:17.029026 kubelet[2433]: E0213 19:58:17.028954 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:18.029747 kubelet[2433]: E0213 19:58:18.029690 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:19.030175 kubelet[2433]: E0213 19:58:19.030102 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:20.005043 systemd[1]: Created slice kubepods-besteffort-podc1b27d97_2fc3_4328_a960_0ba917eabd94.slice - libcontainer container 
kubepods-besteffort-podc1b27d97_2fc3_4328_a960_0ba917eabd94.slice. Feb 13 19:58:20.030628 kubelet[2433]: E0213 19:58:20.030574 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:20.078240 kubelet[2433]: I0213 19:58:20.078169 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngvhg\" (UniqueName: \"kubernetes.io/projected/c1b27d97-2fc3-4328-a960-0ba917eabd94-kube-api-access-ngvhg\") pod \"nfs-server-provisioner-0\" (UID: \"c1b27d97-2fc3-4328-a960-0ba917eabd94\") " pod="default/nfs-server-provisioner-0" Feb 13 19:58:20.078240 kubelet[2433]: I0213 19:58:20.078245 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c1b27d97-2fc3-4328-a960-0ba917eabd94-data\") pod \"nfs-server-provisioner-0\" (UID: \"c1b27d97-2fc3-4328-a960-0ba917eabd94\") " pod="default/nfs-server-provisioner-0" Feb 13 19:58:20.308970 containerd[1731]: time="2025-02-13T19:58:20.308799066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c1b27d97-2fc3-4328-a960-0ba917eabd94,Namespace:default,Attempt:0,}" Feb 13 19:58:20.448482 systemd-networkd[1583]: cali60e51b789ff: Link UP Feb 13 19:58:20.449305 systemd-networkd[1583]: cali60e51b789ff: Gained carrier Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.381 [INFO][3958] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.15-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default c1b27d97-2fc3-4328-a960-0ba917eabd94 1460 0 2025-02-13 19:58:19 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.8.15 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.15-k8s-nfs--server--provisioner--0-" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.382 [INFO][3958] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.15-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.408 [INFO][3968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" HandleID="k8s-pod-network.693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Workload="10.200.8.15-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.417 [INFO][3968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" HandleID="k8s-pod-network.693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Workload="10.200.8.15-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003090b0), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.15", 
"pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:58:20.408142351 +0000 UTC"}, Hostname:"10.200.8.15", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.417 [INFO][3968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.417 [INFO][3968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.417 [INFO][3968] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.15' Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.419 [INFO][3968] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" host="10.200.8.15" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.422 [INFO][3968] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.15" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.426 [INFO][3968] ipam/ipam.go 489: Trying affinity for 192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.427 [INFO][3968] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.429 [INFO][3968] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.429 [INFO][3968] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" host="10.200.8.15" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.431 [INFO][3968] 
ipam/ipam.go 1685: Creating new handle: k8s-pod-network.693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8 Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.435 [INFO][3968] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" host="10.200.8.15" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.442 [INFO][3968] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.67/26] block=192.168.44.64/26 handle="k8s-pod-network.693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" host="10.200.8.15" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.442 [INFO][3968] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.67/26] handle="k8s-pod-network.693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" host="10.200.8.15" Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.442 [INFO][3968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:58:20.461255 containerd[1731]: 2025-02-13 19:58:20.442 [INFO][3968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.67/26] IPv6=[] ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" HandleID="k8s-pod-network.693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Workload="10.200.8.15-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:58:20.463797 containerd[1731]: 2025-02-13 19:58:20.444 [INFO][3958] cni-plugin/k8s.go 386: Populated endpoint ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.15-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.15-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c1b27d97-2fc3-4328-a960-0ba917eabd94", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.15", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:58:20.463797 containerd[1731]: 2025-02-13 19:58:20.444 [INFO][3958] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.67/32] ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.15-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:58:20.463797 containerd[1731]: 2025-02-13 19:58:20.444 [INFO][3958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.15-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:58:20.463797 containerd[1731]: 2025-02-13 19:58:20.447 [INFO][3958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.15-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:58:20.464746 containerd[1731]: 2025-02-13 19:58:20.448 [INFO][3958] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.15-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.15-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c1b27d97-2fc3-4328-a960-0ba917eabd94", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 58, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.15", ContainerID:"693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"de:98:24:60:ac:3f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:58:20.464746 containerd[1731]: 2025-02-13 19:58:20.459 [INFO][3958] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.15-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:58:20.493779 containerd[1731]: time="2025-02-13T19:58:20.493057308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:58:20.493779 containerd[1731]: time="2025-02-13T19:58:20.493725414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:58:20.493779 containerd[1731]: time="2025-02-13T19:58:20.493742014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:20.494064 containerd[1731]: time="2025-02-13T19:58:20.493839215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:20.524575 systemd[1]: Started cri-containerd-693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8.scope - libcontainer container 693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8. Feb 13 19:58:20.567094 containerd[1731]: time="2025-02-13T19:58:20.566893066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c1b27d97-2fc3-4328-a960-0ba917eabd94,Namespace:default,Attempt:0,} returns sandbox id \"693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8\"" Feb 13 19:58:20.570057 containerd[1731]: time="2025-02-13T19:58:20.569865493Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:58:21.030818 kubelet[2433]: E0213 19:58:21.030764 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:22.031999 kubelet[2433]: E0213 19:58:22.031922 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:22.416773 systemd-networkd[1583]: cali60e51b789ff: Gained IPv6LL Feb 13 19:58:23.033293 kubelet[2433]: E0213 19:58:23.033149 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:23.060111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001428077.mount: Deactivated successfully. 
Feb 13 19:58:24.033831 kubelet[2433]: E0213 19:58:24.033770 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:25.035510 kubelet[2433]: E0213 19:58:25.035450 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:26.035988 kubelet[2433]: E0213 19:58:26.035923 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:26.464640 containerd[1731]: time="2025-02-13T19:58:26.464567970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:26.467115 containerd[1731]: time="2025-02-13T19:58:26.467042289Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Feb 13 19:58:26.470459 containerd[1731]: time="2025-02-13T19:58:26.470383015Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:26.479801 containerd[1731]: time="2025-02-13T19:58:26.479590386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:26.482100 containerd[1731]: time="2025-02-13T19:58:26.482057605Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 5.912149712s" Feb 13 19:58:26.482100 containerd[1731]: time="2025-02-13T19:58:26.482095606Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 19:58:26.484676 containerd[1731]: time="2025-02-13T19:58:26.484638425Z" level=info msg="CreateContainer within sandbox \"693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:58:26.519202 containerd[1731]: time="2025-02-13T19:58:26.519151493Z" level=info msg="CreateContainer within sandbox \"693d8f85230b983f29ab6d421e641597615af6d279c0bf66c483c7c52cc30cd8\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"5b780a9bebf0c2fc24fb4d353b55ba7a70735319806d92c1f2da96d827d1ba8e\"" Feb 13 19:58:26.520013 containerd[1731]: time="2025-02-13T19:58:26.519927699Z" level=info msg="StartContainer for \"5b780a9bebf0c2fc24fb4d353b55ba7a70735319806d92c1f2da96d827d1ba8e\"" Feb 13 19:58:26.553746 systemd[1]: run-containerd-runc-k8s.io-5b780a9bebf0c2fc24fb4d353b55ba7a70735319806d92c1f2da96d827d1ba8e-runc.lL8bbF.mount: Deactivated successfully. Feb 13 19:58:26.560560 systemd[1]: Started cri-containerd-5b780a9bebf0c2fc24fb4d353b55ba7a70735319806d92c1f2da96d827d1ba8e.scope - libcontainer container 5b780a9bebf0c2fc24fb4d353b55ba7a70735319806d92c1f2da96d827d1ba8e. 
Feb 13 19:58:26.593038 containerd[1731]: time="2025-02-13T19:58:26.592817564Z" level=info msg="StartContainer for \"5b780a9bebf0c2fc24fb4d353b55ba7a70735319806d92c1f2da96d827d1ba8e\" returns successfully" Feb 13 19:58:27.005744 kubelet[2433]: E0213 19:58:27.005672 2433 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:27.036561 kubelet[2433]: E0213 19:58:27.036492 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:27.425419 kubelet[2433]: I0213 19:58:27.425221 2433 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.51163879 podStartE2EDuration="8.425199314s" podCreationTimestamp="2025-02-13 19:58:19 +0000 UTC" firstStartedPulling="2025-02-13 19:58:20.569480489 +0000 UTC m=+34.312249997" lastFinishedPulling="2025-02-13 19:58:26.483041013 +0000 UTC m=+40.225810521" observedRunningTime="2025-02-13 19:58:27.425172714 +0000 UTC m=+41.167942122" watchObservedRunningTime="2025-02-13 19:58:27.425199314 +0000 UTC m=+41.167968722" Feb 13 19:58:28.037285 kubelet[2433]: E0213 19:58:28.037216 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:29.038593 kubelet[2433]: E0213 19:58:29.038474 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:30.038909 kubelet[2433]: E0213 19:58:30.038832 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:31.039346 kubelet[2433]: E0213 19:58:31.039271 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:32.039665 kubelet[2433]: E0213 19:58:32.039598 2433 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:33.040440 kubelet[2433]: E0213 19:58:33.040352 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:34.041115 kubelet[2433]: E0213 19:58:34.041045 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:35.042048 kubelet[2433]: E0213 19:58:35.041982 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:36.042661 kubelet[2433]: E0213 19:58:36.042573 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:37.042929 kubelet[2433]: E0213 19:58:37.042856 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:38.043726 kubelet[2433]: E0213 19:58:38.043658 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:39.044582 kubelet[2433]: E0213 19:58:39.044510 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:40.045014 kubelet[2433]: E0213 19:58:40.044943 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:40.786316 systemd[1]: run-containerd-runc-k8s.io-82ff1c66995e0fa2653d4f2502d8902e1d9f4e09dcfb929e4240ff68da4052fb-runc.75ZlTw.mount: Deactivated successfully. 
Feb 13 19:58:41.046076 kubelet[2433]: E0213 19:58:41.045894 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:42.046340 kubelet[2433]: E0213 19:58:42.046267 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:43.047234 kubelet[2433]: E0213 19:58:43.047168 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:44.048031 kubelet[2433]: E0213 19:58:44.047961 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:45.048743 kubelet[2433]: E0213 19:58:45.048672 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:46.049813 kubelet[2433]: E0213 19:58:46.049743 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:47.005706 kubelet[2433]: E0213 19:58:47.005639 2433 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:47.050461 kubelet[2433]: E0213 19:58:47.050408 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:47.058562 containerd[1731]: time="2025-02-13T19:58:47.058519144Z" level=info msg="StopPodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\"" Feb 13 19:58:47.059035 containerd[1731]: time="2025-02-13T19:58:47.058679045Z" level=info msg="TearDown network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" successfully" Feb 13 19:58:47.059035 containerd[1731]: time="2025-02-13T19:58:47.058747746Z" level=info msg="StopPodSandbox for 
\"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" returns successfully" Feb 13 19:58:47.059315 containerd[1731]: time="2025-02-13T19:58:47.059271150Z" level=info msg="RemovePodSandbox for \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\"" Feb 13 19:58:47.059439 containerd[1731]: time="2025-02-13T19:58:47.059315950Z" level=info msg="Forcibly stopping sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\"" Feb 13 19:58:47.059513 containerd[1731]: time="2025-02-13T19:58:47.059416051Z" level=info msg="TearDown network for sandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" successfully" Feb 13 19:58:47.065040 containerd[1731]: time="2025-02-13T19:58:47.064995396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.065162 containerd[1731]: time="2025-02-13T19:58:47.065069997Z" level=info msg="RemovePodSandbox \"56475913153ef4d3f68f9b5f5ae31a0fef2353b5ef3f9748a7ccaba4140c0688\" returns successfully" Feb 13 19:58:47.065554 containerd[1731]: time="2025-02-13T19:58:47.065517100Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\"" Feb 13 19:58:47.065646 containerd[1731]: time="2025-02-13T19:58:47.065627901Z" level=info msg="TearDown network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" successfully" Feb 13 19:58:47.065690 containerd[1731]: time="2025-02-13T19:58:47.065645101Z" level=info msg="StopPodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" returns successfully" Feb 13 19:58:47.065987 containerd[1731]: time="2025-02-13T19:58:47.065952804Z" level=info msg="RemovePodSandbox for \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\"" Feb 13 19:58:47.066073 containerd[1731]: time="2025-02-13T19:58:47.065989504Z" level=info msg="Forcibly stopping sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\"" Feb 13 19:58:47.066118 containerd[1731]: time="2025-02-13T19:58:47.066066905Z" level=info msg="TearDown network for sandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" successfully" Feb 13 19:58:47.071099 containerd[1731]: time="2025-02-13T19:58:47.071061245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.071186 containerd[1731]: time="2025-02-13T19:58:47.071122246Z" level=info msg="RemovePodSandbox \"9b5180f175a4b89d33e97e74118c5914953d07599d2568dab267c9954cf6e20b\" returns successfully" Feb 13 19:58:47.071618 containerd[1731]: time="2025-02-13T19:58:47.071521649Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\"" Feb 13 19:58:47.071703 containerd[1731]: time="2025-02-13T19:58:47.071619050Z" level=info msg="TearDown network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" successfully" Feb 13 19:58:47.071703 containerd[1731]: time="2025-02-13T19:58:47.071634550Z" level=info msg="StopPodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" returns successfully" Feb 13 19:58:47.072001 containerd[1731]: time="2025-02-13T19:58:47.071968352Z" level=info msg="RemovePodSandbox for \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\"" Feb 13 19:58:47.072001 containerd[1731]: time="2025-02-13T19:58:47.071999553Z" level=info msg="Forcibly stopping sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\"" Feb 13 19:58:47.072130 containerd[1731]: time="2025-02-13T19:58:47.072079553Z" level=info msg="TearDown network for sandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" successfully" Feb 13 19:58:47.082935 containerd[1731]: time="2025-02-13T19:58:47.082897040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.083024 containerd[1731]: time="2025-02-13T19:58:47.082950841Z" level=info msg="RemovePodSandbox \"277410b863b12151755f2457970992171943d05707edaa458cf4df2e911478d6\" returns successfully" Feb 13 19:58:47.083318 containerd[1731]: time="2025-02-13T19:58:47.083294044Z" level=info msg="StopPodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\"" Feb 13 19:58:47.083441 containerd[1731]: time="2025-02-13T19:58:47.083419945Z" level=info msg="TearDown network for sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" successfully" Feb 13 19:58:47.083495 containerd[1731]: time="2025-02-13T19:58:47.083437745Z" level=info msg="StopPodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" returns successfully" Feb 13 19:58:47.083787 containerd[1731]: time="2025-02-13T19:58:47.083740447Z" level=info msg="RemovePodSandbox for \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\"" Feb 13 19:58:47.083866 containerd[1731]: time="2025-02-13T19:58:47.083791148Z" level=info msg="Forcibly stopping sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\"" Feb 13 19:58:47.083933 containerd[1731]: time="2025-02-13T19:58:47.083883348Z" level=info msg="TearDown network for sandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" successfully" Feb 13 19:58:47.091290 containerd[1731]: time="2025-02-13T19:58:47.090945605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.091290 containerd[1731]: time="2025-02-13T19:58:47.090998206Z" level=info msg="RemovePodSandbox \"55dd0d0bbedffea4f6ffe3650045a5edb6dd8a2c0a8ae9548a52cdee9740e2b3\" returns successfully" Feb 13 19:58:47.091873 containerd[1731]: time="2025-02-13T19:58:47.091664111Z" level=info msg="StopPodSandbox for \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\"" Feb 13 19:58:47.091873 containerd[1731]: time="2025-02-13T19:58:47.091770112Z" level=info msg="TearDown network for sandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\" successfully" Feb 13 19:58:47.091873 containerd[1731]: time="2025-02-13T19:58:47.091785712Z" level=info msg="StopPodSandbox for \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\" returns successfully" Feb 13 19:58:47.092341 containerd[1731]: time="2025-02-13T19:58:47.092258916Z" level=info msg="RemovePodSandbox for \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\"" Feb 13 19:58:47.092341 containerd[1731]: time="2025-02-13T19:58:47.092288816Z" level=info msg="Forcibly stopping sandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\"" Feb 13 19:58:47.092480 containerd[1731]: time="2025-02-13T19:58:47.092376317Z" level=info msg="TearDown network for sandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\" successfully" Feb 13 19:58:47.099770 containerd[1731]: time="2025-02-13T19:58:47.099740976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.099860 containerd[1731]: time="2025-02-13T19:58:47.099792876Z" level=info msg="RemovePodSandbox \"6308c41550057b94378215b322bb7c3214dee5b14783095ea08e8e1bfcbc4776\" returns successfully" Feb 13 19:58:47.100182 containerd[1731]: time="2025-02-13T19:58:47.100085679Z" level=info msg="StopPodSandbox for \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\"" Feb 13 19:58:47.100272 containerd[1731]: time="2025-02-13T19:58:47.100184280Z" level=info msg="TearDown network for sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\" successfully" Feb 13 19:58:47.100272 containerd[1731]: time="2025-02-13T19:58:47.100199480Z" level=info msg="StopPodSandbox for \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\" returns successfully" Feb 13 19:58:47.100626 containerd[1731]: time="2025-02-13T19:58:47.100595583Z" level=info msg="RemovePodSandbox for \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\"" Feb 13 19:58:47.100735 containerd[1731]: time="2025-02-13T19:58:47.100625283Z" level=info msg="Forcibly stopping sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\"" Feb 13 19:58:47.100735 containerd[1731]: time="2025-02-13T19:58:47.100703384Z" level=info msg="TearDown network for sandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\" successfully" Feb 13 19:58:47.110221 containerd[1731]: time="2025-02-13T19:58:47.110193660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.110318 containerd[1731]: time="2025-02-13T19:58:47.110238761Z" level=info msg="RemovePodSandbox \"f384ffb38f3889f9124b3ae39a60e11b83da3372b6560211f173f031622414ed\" returns successfully" Feb 13 19:58:47.110586 containerd[1731]: time="2025-02-13T19:58:47.110547963Z" level=info msg="StopPodSandbox for \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\"" Feb 13 19:58:47.110681 containerd[1731]: time="2025-02-13T19:58:47.110661364Z" level=info msg="TearDown network for sandbox \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\" successfully" Feb 13 19:58:47.110724 containerd[1731]: time="2025-02-13T19:58:47.110680164Z" level=info msg="StopPodSandbox for \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\" returns successfully" Feb 13 19:58:47.110975 containerd[1731]: time="2025-02-13T19:58:47.110936066Z" level=info msg="RemovePodSandbox for \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\"" Feb 13 19:58:47.111063 containerd[1731]: time="2025-02-13T19:58:47.111030167Z" level=info msg="Forcibly stopping sandbox \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\"" Feb 13 19:58:47.111163 containerd[1731]: time="2025-02-13T19:58:47.111117068Z" level=info msg="TearDown network for sandbox \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\" successfully" Feb 13 19:58:47.118313 containerd[1731]: time="2025-02-13T19:58:47.118286725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.118440 containerd[1731]: time="2025-02-13T19:58:47.118329426Z" level=info msg="RemovePodSandbox \"a0fdc29e74a418602fe08af2c862c3645c3f50b7f1a8f0af8fe22189edfa1708\" returns successfully" Feb 13 19:58:47.118676 containerd[1731]: time="2025-02-13T19:58:47.118654528Z" level=info msg="StopPodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\"" Feb 13 19:58:47.118852 containerd[1731]: time="2025-02-13T19:58:47.118823330Z" level=info msg="TearDown network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" successfully" Feb 13 19:58:47.118852 containerd[1731]: time="2025-02-13T19:58:47.118841530Z" level=info msg="StopPodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" returns successfully" Feb 13 19:58:47.119188 containerd[1731]: time="2025-02-13T19:58:47.119111732Z" level=info msg="RemovePodSandbox for \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\"" Feb 13 19:58:47.119188 containerd[1731]: time="2025-02-13T19:58:47.119141532Z" level=info msg="Forcibly stopping sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\"" Feb 13 19:58:47.119322 containerd[1731]: time="2025-02-13T19:58:47.119218233Z" level=info msg="TearDown network for sandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" successfully" Feb 13 19:58:47.125560 containerd[1731]: time="2025-02-13T19:58:47.125531484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.125650 containerd[1731]: time="2025-02-13T19:58:47.125576884Z" level=info msg="RemovePodSandbox \"2193f7a5e2f8135d38e53bc172a674d73a9998e2e565b5eec0d83d5d9c64beec\" returns successfully" Feb 13 19:58:47.125889 containerd[1731]: time="2025-02-13T19:58:47.125863786Z" level=info msg="StopPodSandbox for \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\"" Feb 13 19:58:47.126026 containerd[1731]: time="2025-02-13T19:58:47.125963487Z" level=info msg="TearDown network for sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\" successfully" Feb 13 19:58:47.126026 containerd[1731]: time="2025-02-13T19:58:47.125984987Z" level=info msg="StopPodSandbox for \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\" returns successfully" Feb 13 19:58:47.126298 containerd[1731]: time="2025-02-13T19:58:47.126271190Z" level=info msg="RemovePodSandbox for \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\"" Feb 13 19:58:47.126360 containerd[1731]: time="2025-02-13T19:58:47.126298390Z" level=info msg="Forcibly stopping sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\"" Feb 13 19:58:47.126426 containerd[1731]: time="2025-02-13T19:58:47.126370191Z" level=info msg="TearDown network for sandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\" successfully" Feb 13 19:58:47.132515 containerd[1731]: time="2025-02-13T19:58:47.132487840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.132592 containerd[1731]: time="2025-02-13T19:58:47.132531240Z" level=info msg="RemovePodSandbox \"ee2bacb209bd13da3fbe357f2932c244f211ebf429d202bcee8d7337e7139347\" returns successfully" Feb 13 19:58:47.132894 containerd[1731]: time="2025-02-13T19:58:47.132870243Z" level=info msg="StopPodSandbox for \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\"" Feb 13 19:58:47.132983 containerd[1731]: time="2025-02-13T19:58:47.132964944Z" level=info msg="TearDown network for sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\" successfully" Feb 13 19:58:47.133042 containerd[1731]: time="2025-02-13T19:58:47.132980944Z" level=info msg="StopPodSandbox for \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\" returns successfully" Feb 13 19:58:47.133296 containerd[1731]: time="2025-02-13T19:58:47.133271246Z" level=info msg="RemovePodSandbox for \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\"" Feb 13 19:58:47.133403 containerd[1731]: time="2025-02-13T19:58:47.133300346Z" level=info msg="Forcibly stopping sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\"" Feb 13 19:58:47.133457 containerd[1731]: time="2025-02-13T19:58:47.133372547Z" level=info msg="TearDown network for sandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\" successfully" Feb 13 19:58:47.138764 containerd[1731]: time="2025-02-13T19:58:47.138737890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.138859 containerd[1731]: time="2025-02-13T19:58:47.138778890Z" level=info msg="RemovePodSandbox \"73fae93812888b79f558898f7982c8a236bf5be64d9fdd125a56957c6e491aa3\" returns successfully" Feb 13 19:58:47.139135 containerd[1731]: time="2025-02-13T19:58:47.139093193Z" level=info msg="StopPodSandbox for \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\"" Feb 13 19:58:47.139236 containerd[1731]: time="2025-02-13T19:58:47.139212294Z" level=info msg="TearDown network for sandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\" successfully" Feb 13 19:58:47.139236 containerd[1731]: time="2025-02-13T19:58:47.139231594Z" level=info msg="StopPodSandbox for \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\" returns successfully" Feb 13 19:58:47.139566 containerd[1731]: time="2025-02-13T19:58:47.139541297Z" level=info msg="RemovePodSandbox for \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\"" Feb 13 19:58:47.139640 containerd[1731]: time="2025-02-13T19:58:47.139575197Z" level=info msg="Forcibly stopping sandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\"" Feb 13 19:58:47.139692 containerd[1731]: time="2025-02-13T19:58:47.139648297Z" level=info msg="TearDown network for sandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\" successfully" Feb 13 19:58:47.145956 containerd[1731]: time="2025-02-13T19:58:47.145930148Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:58:47.146051 containerd[1731]: time="2025-02-13T19:58:47.145971148Z" level=info msg="RemovePodSandbox \"71f8d97f0e25eccfdd41c32f924426710eb19b830c39f98c5e387b6cbf245480\" returns successfully" Feb 13 19:58:48.051215 kubelet[2433]: E0213 19:58:48.051141 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:49.052409 kubelet[2433]: E0213 19:58:49.052337 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:50.052697 kubelet[2433]: E0213 19:58:50.052636 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:51.053452 kubelet[2433]: E0213 19:58:51.053351 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:51.384574 systemd[1]: Created slice kubepods-besteffort-pod1e066d73_b6bc_4dfb_b4a5_a0155e8a0bb7.slice - libcontainer container kubepods-besteffort-pod1e066d73_b6bc_4dfb_b4a5_a0155e8a0bb7.slice. 
Feb 13 19:58:51.475969 kubelet[2433]: I0213 19:58:51.475860 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-31dc4a3c-5b8b-480b-b54d-3c4169c2ad3a\" (UniqueName: \"kubernetes.io/nfs/1e066d73-b6bc-4dfb-b4a5-a0155e8a0bb7-pvc-31dc4a3c-5b8b-480b-b54d-3c4169c2ad3a\") pod \"test-pod-1\" (UID: \"1e066d73-b6bc-4dfb-b4a5-a0155e8a0bb7\") " pod="default/test-pod-1" Feb 13 19:58:51.475969 kubelet[2433]: I0213 19:58:51.475909 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws97j\" (UniqueName: \"kubernetes.io/projected/1e066d73-b6bc-4dfb-b4a5-a0155e8a0bb7-kube-api-access-ws97j\") pod \"test-pod-1\" (UID: \"1e066d73-b6bc-4dfb-b4a5-a0155e8a0bb7\") " pod="default/test-pod-1" Feb 13 19:58:51.627425 kernel: FS-Cache: Loaded Feb 13 19:58:51.698153 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:58:51.698334 kernel: RPC: Registered udp transport module. Feb 13 19:58:51.698360 kernel: RPC: Registered tcp transport module. Feb 13 19:58:51.701765 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:58:51.701823 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 19:58:51.945201 kernel: NFS: Registering the id_resolver key type Feb 13 19:58:51.945478 kernel: Key type id_resolver registered Feb 13 19:58:51.945503 kernel: Key type id_legacy registered Feb 13 19:58:52.030727 nfsidmap[4194]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.1-a-5a2e75f9ad' Feb 13 19:58:52.048012 nfsidmap[4195]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.1-a-5a2e75f9ad' Feb 13 19:58:52.054630 kubelet[2433]: E0213 19:58:52.054567 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:52.289657 containerd[1731]: time="2025-02-13T19:58:52.289481176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1e066d73-b6bc-4dfb-b4a5-a0155e8a0bb7,Namespace:default,Attempt:0,}" Feb 13 19:58:52.426363 systemd-networkd[1583]: cali5ec59c6bf6e: Link UP Feb 13 19:58:52.427766 systemd-networkd[1583]: cali5ec59c6bf6e: Gained carrier Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.357 [INFO][4197] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.15-k8s-test--pod--1-eth0 default 1e066d73-b6bc-4dfb-b4a5-a0155e8a0bb7 1568 0 2025-02-13 19:58:21 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.15 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.15-k8s-test--pod--1-" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.357 [INFO][4197] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.15-k8s-test--pod--1-eth0" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.383 [INFO][4207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" HandleID="k8s-pod-network.083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" Workload="10.200.8.15-k8s-test--pod--1-eth0" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.393 [INFO][4207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" HandleID="k8s-pod-network.083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" Workload="10.200.8.15-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292bf0), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.15", "pod":"test-pod-1", "timestamp":"2025-02-13 19:58:52.383924437 +0000 UTC"}, Hostname:"10.200.8.15", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.393 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.393 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.393 [INFO][4207] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.15' Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.395 [INFO][4207] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" host="10.200.8.15" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.398 [INFO][4207] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.15" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.403 [INFO][4207] ipam/ipam.go 489: Trying affinity for 192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.405 [INFO][4207] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.407 [INFO][4207] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="10.200.8.15" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.407 [INFO][4207] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" host="10.200.8.15" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.409 [INFO][4207] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6 Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.413 [INFO][4207] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" host="10.200.8.15" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.420 [INFO][4207] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.68/26] block=192.168.44.64/26 
handle="k8s-pod-network.083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" host="10.200.8.15" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.420 [INFO][4207] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.68/26] handle="k8s-pod-network.083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" host="10.200.8.15" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.420 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.421 [INFO][4207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.68/26] IPv6=[] ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" HandleID="k8s-pod-network.083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" Workload="10.200.8.15-k8s-test--pod--1-eth0" Feb 13 19:58:52.440643 containerd[1731]: 2025-02-13 19:58:52.422 [INFO][4197] cni-plugin/k8s.go 386: Populated endpoint ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.15-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.15-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1e066d73-b6bc-4dfb-b4a5-a0155e8a0bb7", ResourceVersion:"1568", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 58, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.200.8.15", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:58:52.443009 containerd[1731]: 2025-02-13 19:58:52.422 [INFO][4197] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.68/32] ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.15-k8s-test--pod--1-eth0" Feb 13 19:58:52.443009 containerd[1731]: 2025-02-13 19:58:52.422 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.15-k8s-test--pod--1-eth0" Feb 13 19:58:52.443009 containerd[1731]: 2025-02-13 19:58:52.428 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.15-k8s-test--pod--1-eth0" Feb 13 19:58:52.443009 containerd[1731]: 2025-02-13 19:58:52.429 [INFO][4197] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.15-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.15-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1e066d73-b6bc-4dfb-b4a5-a0155e8a0bb7", ResourceVersion:"1568", Generation:0, 
CreationTimestamp:time.Date(2025, time.February, 13, 19, 58, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.15", ContainerID:"083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"96:88:43:94:5e:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:58:52.443009 containerd[1731]: 2025-02-13 19:58:52.438 [INFO][4197] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.15-k8s-test--pod--1-eth0" Feb 13 19:58:52.482970 containerd[1731]: time="2025-02-13T19:58:52.482463131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:58:52.482970 containerd[1731]: time="2025-02-13T19:58:52.482652132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:58:52.482970 containerd[1731]: time="2025-02-13T19:58:52.482680832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:52.482970 containerd[1731]: time="2025-02-13T19:58:52.482785533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:52.506561 systemd[1]: Started cri-containerd-083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6.scope - libcontainer container 083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6. Feb 13 19:58:52.550169 containerd[1731]: time="2025-02-13T19:58:52.549961274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1e066d73-b6bc-4dfb-b4a5-a0155e8a0bb7,Namespace:default,Attempt:0,} returns sandbox id \"083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6\"" Feb 13 19:58:52.551885 containerd[1731]: time="2025-02-13T19:58:52.551838089Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:58:52.936555 containerd[1731]: time="2025-02-13T19:58:52.936374683Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:52.939043 containerd[1731]: time="2025-02-13T19:58:52.938979604Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:58:52.942069 containerd[1731]: time="2025-02-13T19:58:52.942023928Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 390.140438ms" Feb 13 19:58:52.942194 containerd[1731]: time="2025-02-13T19:58:52.942069329Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 
19:58:52.945031 containerd[1731]: time="2025-02-13T19:58:52.944994252Z" level=info msg="CreateContainer within sandbox \"083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:58:52.976480 containerd[1731]: time="2025-02-13T19:58:52.976432404Z" level=info msg="CreateContainer within sandbox \"083ef8e8dd9d9431dfbde581f7f7274d17ca3cd122972c496b4e7ba7792fd2d6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"cd99d1624c3b3384d85eb89d3c97a58fb41a29cda4bc4dbddf477dfe7f867ea7\"" Feb 13 19:58:52.977325 containerd[1731]: time="2025-02-13T19:58:52.977130809Z" level=info msg="StartContainer for \"cd99d1624c3b3384d85eb89d3c97a58fb41a29cda4bc4dbddf477dfe7f867ea7\"" Feb 13 19:58:53.012564 systemd[1]: Started cri-containerd-cd99d1624c3b3384d85eb89d3c97a58fb41a29cda4bc4dbddf477dfe7f867ea7.scope - libcontainer container cd99d1624c3b3384d85eb89d3c97a58fb41a29cda4bc4dbddf477dfe7f867ea7. Feb 13 19:58:53.043592 containerd[1731]: time="2025-02-13T19:58:53.043543740Z" level=info msg="StartContainer for \"cd99d1624c3b3384d85eb89d3c97a58fb41a29cda4bc4dbddf477dfe7f867ea7\" returns successfully" Feb 13 19:58:53.055266 kubelet[2433]: E0213 19:58:53.055219 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:53.903694 systemd-networkd[1583]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 19:58:54.055769 kubelet[2433]: E0213 19:58:54.055704 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:55.056756 kubelet[2433]: E0213 19:58:55.056621 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:56.057221 kubelet[2433]: E0213 19:58:56.057149 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
19:58:57.058371 kubelet[2433]: E0213 19:58:57.058298 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:58.059449 kubelet[2433]: E0213 19:58:58.059349 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:59.059923 kubelet[2433]: E0213 19:58:59.059848 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:00.060462 kubelet[2433]: E0213 19:59:00.060365 2433 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"