Feb 13 15:39:05.064574 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025 Feb 13 15:39:05.064610 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:39:05.064624 kernel: BIOS-provided physical RAM map: Feb 13 15:39:05.064635 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:39:05.064644 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 13 15:39:05.064654 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 13 15:39:05.064666 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 13 15:39:05.064680 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 13 15:39:05.064690 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 13 15:39:05.064701 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 13 15:39:05.064783 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 13 15:39:05.064795 kernel: printk: bootconsole [earlyser0] enabled Feb 13 15:39:05.064806 kernel: NX (Execute Disable) protection: active Feb 13 15:39:05.064817 kernel: APIC: Static calls initialized Feb 13 15:39:05.064833 kernel: efi: EFI v2.7 by Microsoft Feb 13 15:39:05.064846 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 RNG=0x3ffd1018 Feb 13 15:39:05.064858 kernel: random: crng init done Feb 13 15:39:05.064869 kernel: secureboot: Secure boot disabled Feb 13 15:39:05.064881 kernel: SMBIOS 3.1.0 present. 
Feb 13 15:39:05.064893 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Feb 13 15:39:05.064904 kernel: Hypervisor detected: Microsoft Hyper-V Feb 13 15:39:05.064916 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 13 15:39:05.064932 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Feb 13 15:39:05.064944 kernel: Hyper-V: Nested features: 0x1e0101 Feb 13 15:39:05.064959 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 13 15:39:05.064970 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 13 15:39:05.064982 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 15:39:05.064997 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 15:39:05.065010 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 13 15:39:05.065022 kernel: tsc: Detected 2593.905 MHz processor Feb 13 15:39:05.065034 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:39:05.065046 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:39:05.065058 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 13 15:39:05.065074 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 15:39:05.065086 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:39:05.065099 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 13 15:39:05.065111 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 13 15:39:05.065123 kernel: Using GB pages for direct mapping Feb 13 15:39:05.065135 kernel: ACPI: Early table checksum verification disabled Feb 13 15:39:05.065148 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 13 15:39:05.065165 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065184 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065203 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 13 15:39:05.065229 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 13 15:39:05.065251 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065265 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065279 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065296 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065309 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065323 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065337 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065351 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 13 15:39:05.065364 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 13 15:39:05.065378 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 13 15:39:05.065392 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 13 15:39:05.065405 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Feb 13 15:39:05.065421 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 13 15:39:05.065435 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 13 15:39:05.065448 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 13 15:39:05.065462 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 13 15:39:05.065475 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 13 15:39:05.065489 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 15:39:05.065503 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 15:39:05.065516 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 13 15:39:05.065530 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 13 15:39:05.065546 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 13 15:39:05.065560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 13 15:39:05.065573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 13 15:39:05.065587 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 13 15:39:05.065601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 13 15:39:05.065615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 13 15:39:05.065628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 13 15:39:05.065642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 13 15:39:05.065658 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 13 15:39:05.065672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 13 15:39:05.065686 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 13 15:39:05.065699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 13 15:39:05.065737 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 13 15:39:05.065751 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 13 15:39:05.065765 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 13 15:39:05.065779 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 13 15:39:05.065793 kernel: Zone ranges: Feb 13 15:39:05.065809 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:39:05.065823 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 15:39:05.065836 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 15:39:05.065850 kernel: Movable zone start for each node Feb 13 15:39:05.065864 kernel: Early memory node ranges Feb 13 15:39:05.065877 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 13 15:39:05.065891 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 13 15:39:05.065905 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 13 15:39:05.065919 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 15:39:05.065935 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 13 15:39:05.065949 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:39:05.065962 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 15:39:05.065976 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 13 15:39:05.065989 kernel: ACPI: 
PM-Timer IO Port: 0x408 Feb 13 15:39:05.066003 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 13 15:39:05.066017 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:39:05.066030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:39:05.066044 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:39:05.066060 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 13 15:39:05.066074 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 15:39:05.066087 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 13 15:39:05.066101 kernel: Booting paravirtualized kernel on Hyper-V Feb 13 15:39:05.066116 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:39:05.066130 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 15:39:05.066144 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 15:39:05.066157 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 15:39:05.066171 kernel: pcpu-alloc: [0] 0 1 Feb 13 15:39:05.066186 kernel: Hyper-V: PV spinlocks enabled Feb 13 15:39:05.066200 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:39:05.066216 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:39:05.066230 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:39:05.066243 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 15:39:05.066257 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:39:05.066271 kernel: Fallback order for Node 0: 0 Feb 13 15:39:05.066284 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 13 15:39:05.066301 kernel: Policy zone: Normal Feb 13 15:39:05.066324 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:39:05.066339 kernel: software IO TLB: area num 2. Feb 13 15:39:05.066356 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved) Feb 13 15:39:05.066371 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:39:05.066385 kernel: ftrace: allocating 37920 entries in 149 pages Feb 13 15:39:05.066400 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:39:05.066414 kernel: Dynamic Preempt: voluntary Feb 13 15:39:05.066429 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:39:05.066445 kernel: rcu: RCU event tracing is enabled. Feb 13 15:39:05.066460 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:39:05.066477 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:39:05.066492 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:39:05.066507 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:39:05.066522 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:39:05.066536 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:39:05.066551 kernel: Using NULL legacy PIC Feb 13 15:39:05.066568 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 13 15:39:05.066582 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:39:05.066597 kernel: Console: colour dummy device 80x25 Feb 13 15:39:05.066611 kernel: printk: console [tty1] enabled Feb 13 15:39:05.066626 kernel: printk: console [ttyS0] enabled Feb 13 15:39:05.066640 kernel: printk: bootconsole [earlyser0] disabled Feb 13 15:39:05.066655 kernel: ACPI: Core revision 20230628 Feb 13 15:39:05.066669 kernel: Failed to register legacy timer interrupt Feb 13 15:39:05.066684 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:39:05.066701 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 15:39:05.066724 kernel: Hyper-V: Using IPI hypercalls Feb 13 15:39:05.066739 kernel: APIC: send_IPI() replaced with hv_send_ipi() Feb 13 15:39:05.066754 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Feb 13 15:39:05.066769 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Feb 13 15:39:05.066784 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Feb 13 15:39:05.066798 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Feb 13 15:39:05.066813 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Feb 13 15:39:05.066828 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Feb 13 15:39:05.066845 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 15:39:05.066858 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 15:39:05.066871 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:39:05.066897 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:39:05.066925 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:39:05.066947 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:39:05.066959 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Feb 13 15:39:05.066973 kernel: RETBleed: Vulnerable Feb 13 15:39:05.066986 kernel: Speculative Store Bypass: Vulnerable Feb 13 15:39:05.067000 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 15:39:05.067017 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 15:39:05.067030 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:39:05.067042 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:39:05.067055 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:39:05.067070 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 15:39:05.067085 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 15:39:05.067098 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 15:39:05.067110 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:39:05.067124 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 13 15:39:05.067136 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 13 15:39:05.067148 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 13 15:39:05.067172 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 13 15:39:05.067186 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:39:05.067198 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:39:05.067210 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:39:05.067223 kernel: landlock: Up and running. Feb 13 15:39:05.067238 kernel: SELinux: Initializing. Feb 13 15:39:05.067249 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:39:05.067262 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:39:05.067276 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 15:39:05.067289 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:39:05.067303 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:39:05.067322 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:39:05.067335 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 15:39:05.067350 kernel: signal: max sigframe size: 3632 Feb 13 15:39:05.067365 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:39:05.067381 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:39:05.067396 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 15:39:05.067411 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:39:05.067426 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:39:05.067440 kernel: .... node #0, CPUs: #1 Feb 13 15:39:05.067458 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 13 15:39:05.067474 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 13 15:39:05.067490 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:39:05.067504 kernel: smpboot: Max logical packages: 1 Feb 13 15:39:05.067520 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 13 15:39:05.067535 kernel: devtmpfs: initialized Feb 13 15:39:05.067550 kernel: x86/mm: Memory block size: 128MB Feb 13 15:39:05.067565 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 13 15:39:05.067583 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:39:05.067598 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:39:05.067613 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:39:05.067628 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:39:05.067644 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:39:05.067659 kernel: audit: type=2000 audit(1739461144.027:1): state=initialized audit_enabled=0 res=1 Feb 13 15:39:05.067674 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:39:05.067687 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:39:05.067700 kernel: cpuidle: using governor menu Feb 13 15:39:05.067729 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:39:05.067743 kernel: dca service started, version 1.12.1 Feb 13 15:39:05.067762 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Feb 13 15:39:05.067776 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 15:39:05.067791 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:39:05.067805 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:39:05.067819 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:39:05.067833 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:39:05.067847 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:39:05.067866 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:39:05.067880 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:39:05.067893 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:39:05.067907 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:39:05.067921 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:39:05.067935 kernel: ACPI: Interpreter enabled Feb 13 15:39:05.067950 kernel: ACPI: PM: (supports S0 S5) Feb 13 15:39:05.067964 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:39:05.067977 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:39:05.067995 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 15:39:05.068009 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 13 15:39:05.068023 kernel: iommu: Default domain type: Translated Feb 13 15:39:05.068037 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:39:05.068051 kernel: efivars: Registered efivars operations Feb 13 15:39:05.068065 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:39:05.068079 kernel: PCI: System does not support PCI Feb 13 15:39:05.068092 kernel: vgaarb: loaded Feb 13 15:39:05.068107 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 13 15:39:05.068124 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:39:05.068138 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:39:05.068152 kernel: 
pnp: PnP ACPI init Feb 13 15:39:05.068166 kernel: pnp: PnP ACPI: found 3 devices Feb 13 15:39:05.068180 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:39:05.068195 kernel: NET: Registered PF_INET protocol family Feb 13 15:39:05.068208 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 15:39:05.068223 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 15:39:05.068237 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:39:05.068253 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:39:05.068268 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 15:39:05.068281 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 15:39:05.068296 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:39:05.068310 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:39:05.068324 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:39:05.068338 kernel: NET: Registered PF_XDP protocol family Feb 13 15:39:05.068352 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:39:05.068366 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 15:39:05.068383 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Feb 13 15:39:05.068397 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 15:39:05.068411 kernel: Initialise system trusted keyrings Feb 13 15:39:05.068425 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 15:39:05.068438 kernel: Key type asymmetric registered Feb 13 15:39:05.068453 kernel: Asymmetric key parser 'x509' registered Feb 13 15:39:05.068466 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:39:05.068480 kernel: io scheduler mq-deadline registered Feb 13 15:39:05.068494 kernel: io scheduler kyber registered Feb 13 15:39:05.068510 kernel: io scheduler bfq registered Feb 13 15:39:05.068524 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:39:05.068539 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:39:05.068552 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:39:05.068566 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 15:39:05.068580 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 15:39:05.068799 kernel: rtc_cmos 00:02: registered as rtc0 Feb 13 15:39:05.068940 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T15:39:04 UTC (1739461144) Feb 13 15:39:05.069064 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 13 15:39:05.069083 kernel: intel_pstate: CPU model not supported Feb 13 15:39:05.069098 kernel: efifb: probing for efifb Feb 13 15:39:05.069112 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 15:39:05.069126 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 15:39:05.069140 kernel: efifb: scrolling: redraw Feb 13 15:39:05.069155 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 15:39:05.069169 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:39:05.069187 kernel: fb0: EFI VGA frame buffer device Feb 13 15:39:05.069201 kernel: pstore: Using crash dump compression: deflate Feb 13 15:39:05.069215 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 15:39:05.069230 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:39:05.069244 kernel: Segment Routing with IPv6 Feb 13 15:39:05.069258 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:39:05.069272 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:39:05.069286 kernel: Key type dns_resolver registered Feb 13 15:39:05.069300 kernel: IPI shorthand broadcast: enabled Feb 13 15:39:05.069315 kernel: sched_clock: Marking stable (813040000, 42382700)->(1053276100, -197853400) Feb 13 15:39:05.069332 kernel: registered taskstats version 1 Feb 13 15:39:05.069347 kernel: Loading compiled-in X.509 certificates Feb 13 15:39:05.069361 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0' Feb 13 15:39:05.069374 kernel: Key type .fscrypt registered Feb 13 15:39:05.069389 kernel: Key type fscrypt-provisioning registered Feb 13 15:39:05.069403 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:39:05.069417 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:39:05.069431 kernel: ima: No architecture policies found Feb 13 15:39:05.069447 kernel: clk: Disabling unused clocks Feb 13 15:39:05.069462 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 15:39:05.069476 kernel: Write protecting the kernel read-only data: 36864k Feb 13 15:39:05.069490 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 15:39:05.069505 kernel: Run /init as init process Feb 13 15:39:05.069519 kernel: with arguments: Feb 13 15:39:05.069532 kernel: /init Feb 13 15:39:05.069546 kernel: with environment: Feb 13 15:39:05.069559 kernel: HOME=/ Feb 13 15:39:05.069573 kernel: TERM=linux Feb 13 15:39:05.069589 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:39:05.069606 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:39:05.069623 systemd[1]: Detected virtualization microsoft. Feb 13 15:39:05.069638 systemd[1]: Detected architecture x86-64. Feb 13 15:39:05.069652 systemd[1]: Running in initrd. Feb 13 15:39:05.069667 systemd[1]: No hostname configured, using default hostname. Feb 13 15:39:05.069680 systemd[1]: Hostname set to . Feb 13 15:39:05.069699 systemd[1]: Initializing machine ID from random generator. 
Feb 13 15:39:05.069730 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:39:05.069744 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:39:05.069757 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:39:05.069770 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:39:05.069786 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:39:05.069801 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:39:05.069816 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:39:05.069836 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:39:05.069851 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:39:05.069867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:39:05.069882 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:39:05.069897 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:39:05.069910 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:39:05.069935 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:39:05.069952 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:39:05.069967 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:39:05.069982 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:39:05.070000 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:39:05.070014 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:39:05.070030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:39:05.070046 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:39:05.070062 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:39:05.070082 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:39:05.070098 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:39:05.070114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:39:05.070131 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:39:05.070146 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:39:05.070162 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:39:05.070178 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:39:05.070218 systemd-journald[177]: Collecting audit messages is disabled. Feb 13 15:39:05.070257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:05.070274 systemd-journald[177]: Journal started Feb 13 15:39:05.070306 systemd-journald[177]: Runtime Journal (/run/log/journal/ebd10bee0ef849e99b2328bbbbcfdfd4) is 8.0M, max 158.8M, 150.8M free. Feb 13 15:39:05.067197 systemd-modules-load[178]: Inserted module 'overlay' Feb 13 15:39:05.085832 systemd[1]: Started systemd-journald.service - Journal Service. 
Feb 13 15:39:05.083373 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:39:05.090988 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:39:05.098994 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:39:05.110090 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:39:05.114770 kernel: Bridge firewalling registered Feb 13 15:39:05.114392 systemd-modules-load[178]: Inserted module 'br_netfilter' Feb 13 15:39:05.117948 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:39:05.125233 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:39:05.129754 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:39:05.134811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:05.142780 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:39:05.146319 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:39:05.158936 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:39:05.162845 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:39:05.163519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:39:05.182975 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:39:05.191626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:39:05.200899 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:39:05.206667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:39:05.218897 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:39:05.234469 dracut-cmdline[216]: dracut-dracut-053 Feb 13 15:39:05.239287 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:39:05.267133 systemd-resolved[209]: Positive Trust Anchors: Feb 13 15:39:05.267155 systemd-resolved[209]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:39:05.267209 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:39:05.292023 systemd-resolved[209]: Defaulting to hostname 'linux'. Feb 13 15:39:05.295403 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:39:05.298199 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:39:05.325733 kernel: SCSI subsystem initialized Feb 13 15:39:05.335732 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:39:05.346736 kernel: iscsi: registered transport (tcp) Feb 13 15:39:05.369034 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:39:05.369156 kernel: QLogic iSCSI HBA Driver Feb 13 15:39:05.405230 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:39:05.414914 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:39:05.444759 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:39:05.444864 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:39:05.447860 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:39:05.487736 kernel: raid6: avx512x4 gen() 18531 MB/s Feb 13 15:39:05.506724 kernel: raid6: avx512x2 gen() 18536 MB/s Feb 13 15:39:05.525719 kernel: raid6: avx512x1 gen() 18470 MB/s Feb 13 15:39:05.544723 kernel: raid6: avx2x4 gen() 18498 MB/s Feb 13 15:39:05.564723 kernel: raid6: avx2x2 gen() 18462 MB/s Feb 13 15:39:05.585176 kernel: raid6: avx2x1 gen() 13920 MB/s Feb 13 15:39:05.585235 kernel: raid6: using algorithm avx512x2 gen() 18536 MB/s Feb 13 15:39:05.606406 kernel: raid6: .... xor() 30460 MB/s, rmw enabled Feb 13 15:39:05.606446 kernel: raid6: using avx512x2 recovery algorithm Feb 13 15:39:05.628738 kernel: xor: automatically using best checksumming function avx Feb 13 15:39:05.774737 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:39:05.784776 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:39:05.794947 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:39:05.809018 systemd-udevd[398]: Using default interface naming scheme 'v255'. Feb 13 15:39:05.813510 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:39:05.828869 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:39:05.842782 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Feb 13 15:39:05.871910 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:39:05.880869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:39:05.921575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:39:05.934971 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Feb 13 15:39:05.968867 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:39:05.976455 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:39:05.983023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:39:05.988662 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:39:05.998925 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:39:06.017731 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:39:06.021855 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:39:06.024699 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:39:06.032966 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:39:06.039790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:39:06.039979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:06.046295 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:06.065096 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:39:06.065122 kernel: AES CTR mode by8 optimization enabled Feb 13 15:39:06.061186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:06.068733 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:39:06.075152 kernel: hv_vmbus: Vmbus version:5.2 Feb 13 15:39:06.093404 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:39:06.115297 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 15:39:06.093706 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:06.103972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:06.124415 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 15:39:06.124452 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:39:06.124465 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 15:39:06.139375 kernel: PTP clock support registered Feb 13 15:39:06.151570 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 15:39:06.151645 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 15:39:06.155862 kernel: scsi host1: storvsc_host_t Feb 13 15:39:06.155952 kernel: scsi host0: storvsc_host_t Feb 13 15:39:06.164747 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 15:39:06.164813 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 15:39:06.173748 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 15:39:06.173794 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 15:39:06.173829 kernel: hv_vmbus: registering driver hv_utils Feb 13 15:39:06.181526 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 15:39:06.181587 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 15:39:07.106205 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 15:39:07.106387 systemd-resolved[209]: Clock change detected. Flushing caches. 
Feb 13 15:39:07.128896 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 15:39:07.128951 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 15:39:07.128974 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 15:39:07.129604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:07.142127 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:39:07.166852 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 15:39:07.169645 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:39:07.169671 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 15:39:07.180583 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:39:07.194723 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 15:39:07.199270 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 15:39:07.199417 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 15:39:07.199558 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 15:39:07.199680 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 15:39:07.199798 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:39:07.199812 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 15:39:07.301173 kernel: hv_netvsc 6045bde0-b7e2-6045-bde0-b7e26045bde0 eth0: VF slot 1 added Feb 13 15:39:07.309049 kernel: hv_vmbus: registering driver hv_pci Feb 13 15:39:07.313926 kernel: hv_pci 117ae9b3-17ff-49b4-87bc-308f83f37065: PCI VMBus probing: Using version 0x10004 Feb 13 15:39:07.391979 kernel: hv_pci 117ae9b3-17ff-49b4-87bc-308f83f37065: PCI host bridge to bus 17ff:00 Feb 13 15:39:07.392121 kernel: pci_bus 17ff:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 13 15:39:07.392251 kernel: pci_bus 17ff:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 15:39:07.392368 kernel: pci 17ff:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 13 15:39:07.392496 kernel: pci 17ff:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 15:39:07.392611 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (443) Feb 13 15:39:07.392627 kernel: pci 17ff:00:02.0: enabling Extended Tags Feb 13 15:39:07.392746 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (460) Feb 13 15:39:07.392758 kernel: pci 17ff:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 17ff:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 13 15:39:07.392869 kernel: pci_bus 17ff:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 15:39:07.392991 kernel: pci 17ff:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 15:39:07.332268 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 15:39:07.388186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:39:07.407274 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 15:39:07.414345 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 15:39:07.430633 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Feb 13 15:39:07.452068 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:39:07.476947 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:39:07.648929 kernel: mlx5_core 17ff:00:02.0: enabling device (0000 -> 0002) Feb 13 15:39:07.892463 kernel: mlx5_core 17ff:00:02.0: firmware version: 14.30.5000 Feb 13 15:39:07.892714 kernel: hv_netvsc 6045bde0-b7e2-6045-bde0-b7e26045bde0 eth0: VF registering: eth1 Feb 13 15:39:07.892877 kernel: mlx5_core 17ff:00:02.0 eth1: joined to eth0 Feb 13 15:39:07.893087 kernel: mlx5_core 17ff:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 15:39:07.898923 kernel: mlx5_core 17ff:00:02.0 enP6143s1: renamed from eth1 Feb 13 15:39:08.496925 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:39:08.498286 disk-uuid[596]: The operation has completed successfully. Feb 13 15:39:08.584181 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:39:08.584299 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:39:08.600081 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:39:08.605688 sh[691]: Success Feb 13 15:39:08.624208 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 15:39:08.693980 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:39:08.713048 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:39:08.716733 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:39:08.743917 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 15:39:08.743974 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:39:08.749089 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:39:08.749914 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:39:08.754881 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:39:08.813428 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:39:08.820618 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:39:08.830056 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:39:08.836136 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:39:08.853407 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:39:08.853473 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:39:08.855101 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:39:08.865160 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:39:08.875280 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:39:08.881074 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:39:08.888515 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:39:08.898082 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:39:08.965111 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:39:08.974144 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 15:39:09.006403 systemd-networkd[875]: lo: Link UP Feb 13 15:39:09.006414 systemd-networkd[875]: lo: Gained carrier Feb 13 15:39:09.008600 systemd-networkd[875]: Enumeration completed Feb 13 15:39:09.008875 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:39:09.012840 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:39:09.012844 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:39:09.015296 systemd[1]: Reached target network.target - Network. Feb 13 15:39:09.075928 kernel: mlx5_core 17ff:00:02.0 enP6143s1: Link up Feb 13 15:39:09.115096 kernel: hv_netvsc 6045bde0-b7e2-6045-bde0-b7e26045bde0 eth0: Data path switched to VF: enP6143s1 Feb 13 15:39:09.116696 systemd-networkd[875]: enP6143s1: Link UP Feb 13 15:39:09.116817 systemd-networkd[875]: eth0: Link UP Feb 13 15:39:09.121182 systemd-networkd[875]: eth0: Gained carrier Feb 13 15:39:09.121199 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:39:09.132378 systemd-networkd[875]: enP6143s1: Gained carrier Feb 13 15:39:09.139506 ignition[792]: Ignition 2.20.0 Feb 13 15:39:09.139518 ignition[792]: Stage: fetch-offline Feb 13 15:39:09.139559 ignition[792]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:09.139571 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:09.139816 ignition[792]: parsed url from cmdline: "" Feb 13 15:39:09.139822 ignition[792]: no config URL provided Feb 13 15:39:09.139830 ignition[792]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:39:09.139842 ignition[792]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:39:09.139848 ignition[792]: failed to fetch config: resource requires networking Feb 13 15:39:09.141102 ignition[792]: Ignition finished successfully Feb 13 15:39:09.156790 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:39:09.158998 systemd-networkd[875]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 15:39:09.174069 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 15:39:09.185981 ignition[885]: Ignition 2.20.0 Feb 13 15:39:09.185992 ignition[885]: Stage: fetch Feb 13 15:39:09.186209 ignition[885]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:09.186222 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:09.186293 ignition[885]: parsed url from cmdline: "" Feb 13 15:39:09.186296 ignition[885]: no config URL provided Feb 13 15:39:09.186300 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:39:09.186306 ignition[885]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:39:09.187836 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 15:39:09.281149 ignition[885]: GET result: OK Feb 13 15:39:09.281255 ignition[885]: config has been read from IMDS userdata Feb 13 15:39:09.281279 ignition[885]: parsing config with SHA512: 57fcb41024fc28c1f6c968b6ef91f4a964e3b23c1697250f6b11a0341062a62eeffb135f9f7f27f8b43f175fe7819e2d6371a39f03b16d8ace7f4bb6a381439a Feb 13 15:39:09.286243 unknown[885]: fetched base config from "system" Feb 13 15:39:09.286953 unknown[885]: fetched base config from "system" Feb 13 15:39:09.287215 ignition[885]: fetch: fetch complete Feb 13 15:39:09.286959 unknown[885]: fetched user config from "azure" Feb 13 15:39:09.287220 ignition[885]: fetch: fetch passed Feb 13 15:39:09.292303 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:39:09.287279 ignition[885]: Ignition finished successfully Feb 13 15:39:09.304146 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:39:09.318559 ignition[891]: Ignition 2.20.0 Feb 13 15:39:09.318570 ignition[891]: Stage: kargs Feb 13 15:39:09.321515 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:39:09.318795 ignition[891]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:09.318809 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:09.319489 ignition[891]: kargs: kargs passed Feb 13 15:39:09.319531 ignition[891]: Ignition finished successfully Feb 13 15:39:09.337335 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:39:09.350306 ignition[897]: Ignition 2.20.0 Feb 13 15:39:09.350317 ignition[897]: Stage: disks Feb 13 15:39:09.350534 ignition[897]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:09.350548 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:09.356535 ignition[897]: disks: disks passed Feb 13 15:39:09.358726 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:39:09.356581 ignition[897]: Ignition finished successfully Feb 13 15:39:09.361780 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:39:09.370021 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:39:09.378775 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:39:09.381267 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:39:09.388536 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:39:09.394111 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:39:09.418662 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 15:39:09.422791 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Feb 13 15:39:09.432127 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:39:09.521944 kernel: EXT4-fs (sda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 15:39:09.522666 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:39:09.527219 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:39:09.542994 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:39:09.548697 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:39:09.557915 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916) Feb 13 15:39:09.558625 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 15:39:09.569224 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:39:09.569252 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:39:09.569272 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:39:09.572450 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:39:09.573369 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:39:09.592296 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:39:09.598550 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:39:09.599569 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:39:09.606836 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:39:09.750246 coreos-metadata[918]: Feb 13 15:39:09.750 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:39:09.756866 coreos-metadata[918]: Feb 13 15:39:09.756 INFO Fetch successful Feb 13 15:39:09.756866 coreos-metadata[918]: Feb 13 15:39:09.756 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:39:09.771676 coreos-metadata[918]: Feb 13 15:39:09.771 INFO Fetch successful Feb 13 15:39:09.774924 coreos-metadata[918]: Feb 13 15:39:09.774 INFO wrote hostname ci-4152.2.1-a-a4d4c6cb32 to /sysroot/etc/hostname Feb 13 15:39:09.780597 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:39:09.795776 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:39:09.806586 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:39:09.813422 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:39:09.823322 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:39:10.069966 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:39:10.079030 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:39:10.084812 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:39:10.096243 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:39:10.101918 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:39:10.121271 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 15:39:10.132074 ignition[1038]: INFO : Ignition 2.20.0 Feb 13 15:39:10.132074 ignition[1038]: INFO : Stage: mount Feb 13 15:39:10.136130 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:10.136130 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:10.136130 ignition[1038]: INFO : mount: mount passed Feb 13 15:39:10.136130 ignition[1038]: INFO : Ignition finished successfully Feb 13 15:39:10.141565 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:39:10.160043 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:39:10.168585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:39:10.183929 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1048) Feb 13 15:39:10.187916 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:39:10.187949 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:39:10.192291 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:39:10.197928 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:39:10.199223 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:39:10.220832 ignition[1065]: INFO : Ignition 2.20.0 Feb 13 15:39:10.220832 ignition[1065]: INFO : Stage: files Feb 13 15:39:10.224734 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:10.224734 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:10.230182 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:39:10.238519 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:39:10.238519 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:39:10.263545 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 
15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 15:39:10.264096 unknown[1065]: wrote ssh authorized keys file for user: core Feb 13 15:39:10.779783 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 15:39:10.850058 systemd-networkd[875]: eth0: Gained IPv6LL Feb 13 15:39:11.087775 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:39:11.093427 ignition[1065]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:39:11.093427 ignition[1065]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:39:11.102148 ignition[1065]: INFO : files: files passed Feb 13 15:39:11.102148 ignition[1065]: INFO : Ignition finished successfully Feb 13 15:39:11.101828 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:39:11.111483 systemd-networkd[875]: enP6143s1: Gained IPv6LL Feb 13 15:39:11.119098 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:39:11.125049 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:39:11.134135 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:39:11.136056 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:39:11.145112 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:39:11.145112 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:39:11.152974 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:39:11.158679 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:39:11.165010 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:39:11.177033 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:39:11.204257 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:39:11.204374 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:39:11.209578 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:39:11.215021 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:39:11.217448 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:39:11.227058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:39:11.244261 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:39:11.257066 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:39:11.272874 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:39:11.278772 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:39:11.284223 systemd[1]: Stopped target timers.target - Timer Units. 
Feb 13 15:39:11.286631 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:39:11.286749 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:39:11.292002 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:39:11.296217 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:39:11.306119 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:39:11.308927 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:39:11.314218 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:39:11.319611 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:39:11.326957 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:39:11.329989 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:39:11.337379 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:39:11.343337 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:39:11.347257 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:39:11.347419 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:39:11.352552 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:39:11.357917 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:39:11.363218 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:39:11.365714 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:39:11.368992 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:39:11.376640 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:39:11.382005 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:39:11.384755 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:39:11.391307 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:39:11.393621 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:39:11.398465 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:39:11.401025 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:39:11.415213 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:39:11.420412 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:39:11.424649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:39:11.425303 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:39:11.433084 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:39:11.433186 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 15:39:11.446595 ignition[1117]: INFO : Ignition 2.20.0 Feb 13 15:39:11.446595 ignition[1117]: INFO : Stage: umount Feb 13 15:39:11.446595 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:11.446595 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:11.446595 ignition[1117]: INFO : umount: umount passed Feb 13 15:39:11.446595 ignition[1117]: INFO : Ignition finished successfully Feb 13 15:39:11.447886 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:39:11.447987 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:39:11.463419 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:39:11.463742 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:39:11.469502 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:39:11.469560 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:39:11.476412 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:39:11.478496 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:39:11.483097 systemd[1]: Stopped target network.target - Network. Feb 13 15:39:11.493020 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:39:11.493081 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:39:11.495821 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:39:11.498094 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:39:11.505951 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:39:11.509931 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:39:11.512005 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:39:11.514497 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:39:11.516443 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:39:11.526461 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:39:11.526522 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:39:11.531011 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:39:11.531071 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:39:11.535261 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:39:11.541346 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:39:11.552953 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:39:11.558955 systemd-networkd[875]: eth0: DHCPv6 lease lost Feb 13 15:39:11.559084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:39:11.567543 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:39:11.570367 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:39:11.572594 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:39:11.577329 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:39:11.577481 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:39:11.583042 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:39:11.583139 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:39:11.590536 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Feb 13 15:39:11.590600 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:39:11.603062 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:39:11.605366 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:39:11.605417 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:39:11.614128 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:39:11.614185 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:39:11.624266 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:39:11.624325 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:39:11.629199 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:39:11.631558 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:39:11.639783 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:39:11.654578 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:39:11.657295 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:39:11.660444 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:39:11.660487 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:39:11.670614 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:39:11.673148 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:39:11.675692 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:39:11.675746 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:39:11.684923 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:39:11.684969 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:39:11.687679 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:39:11.687720 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:39:11.704142 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:39:11.711034 kernel: hv_netvsc 6045bde0-b7e2-6045-bde0-b7e26045bde0 eth0: Data path switched from VF: enP6143s1 Feb 13 15:39:11.710644 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:39:11.710716 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:39:11.716155 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:39:11.716215 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:39:11.728182 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:39:11.728244 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:39:11.736133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:39:11.736194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:11.744247 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:39:11.746504 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:39:11.749234 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 13 15:39:11.749311 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:39:13.112572 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:39:13.112712 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:39:13.115570 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:39:13.119765 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:39:13.119833 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:39:13.133087 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:39:13.142281 systemd[1]: Switching root. Feb 13 15:39:13.179816 systemd-journald[177]: Journal stopped Feb 13 15:39:05.064574 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025 Feb 13 15:39:05.064610 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:39:05.064624 kernel: BIOS-provided physical RAM map: Feb 13 15:39:05.064635 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:39:05.064644 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 13 15:39:05.064654 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 13 15:39:05.064666 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 13 15:39:05.064680 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 13 15:39:05.064690 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 13 15:39:05.064701 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 13 15:39:05.064783 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 13 15:39:05.064795 kernel: printk: bootconsole [earlyser0] enabled Feb 13 15:39:05.064806 kernel: NX (Execute Disable) protection: active Feb 13 15:39:05.064817 kernel: APIC: Static calls initialized Feb 13 15:39:05.064833 kernel: efi: EFI v2.7 by Microsoft Feb 13 15:39:05.064846 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 RNG=0x3ffd1018 Feb 13 15:39:05.064858 kernel: random: crng init done Feb 13 15:39:05.064869 kernel: secureboot: Secure boot disabled Feb 13 15:39:05.064881 kernel: SMBIOS 3.1.0 present. 
Feb 13 15:39:05.064893 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Feb 13 15:39:05.064904 kernel: Hypervisor detected: Microsoft Hyper-V Feb 13 15:39:05.064916 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 13 15:39:05.064932 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Feb 13 15:39:05.064944 kernel: Hyper-V: Nested features: 0x1e0101 Feb 13 15:39:05.064959 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 13 15:39:05.064970 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 13 15:39:05.064982 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 15:39:05.064997 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 15:39:05.065010 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 13 15:39:05.065022 kernel: tsc: Detected 2593.905 MHz processor Feb 13 15:39:05.065034 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:39:05.065046 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:39:05.065058 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 13 15:39:05.065074 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 15:39:05.065086 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:39:05.065099 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 13 15:39:05.065111 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 13 15:39:05.065123 kernel: Using GB pages for direct mapping Feb 13 15:39:05.065135 kernel: ACPI: Early table checksum verification disabled Feb 13 15:39:05.065148 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 13 15:39:05.065165 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065184 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065203 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 13 15:39:05.065229 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 13 15:39:05.065251 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065265 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065279 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065296 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065309 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065323 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065337 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:39:05.065351 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 13 15:39:05.065364 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 13 15:39:05.065378 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 13 15:39:05.065392 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 13 15:39:05.065405 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Feb 13 15:39:05.065421 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 13 15:39:05.065435 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 13 15:39:05.065448 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 13 15:39:05.065462 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 13 15:39:05.065475 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 13 15:39:05.065489 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 15:39:05.065503 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 15:39:05.065516 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 13 15:39:05.065530 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 13 15:39:05.065546 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 13 15:39:05.065560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 13 15:39:05.065573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 13 15:39:05.065587 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 13 15:39:05.065601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 13 15:39:05.065615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 13 15:39:05.065628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 13 15:39:05.065642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 13 15:39:05.065658 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 13 15:39:05.065672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 13 15:39:05.065686 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 13 15:39:05.065699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 13 15:39:05.065737 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 13 15:39:05.065751 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 13 15:39:05.065765 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 13 15:39:05.065779 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 13 15:39:05.065793 kernel: Zone ranges: Feb 13 15:39:05.065809 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:39:05.065823 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 15:39:05.065836 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 15:39:05.065850 kernel: Movable zone start for each node Feb 13 15:39:05.065864 kernel: Early memory node ranges Feb 13 15:39:05.065877 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 13 15:39:05.065891 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 13 15:39:05.065905 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 13 15:39:05.065919 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 15:39:05.065935 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 13 15:39:05.065949 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:39:05.065962 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 15:39:05.065976 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 13 15:39:05.065989 kernel: ACPI: 
PM-Timer IO Port: 0x408 Feb 13 15:39:05.066003 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 13 15:39:05.066017 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:39:05.066030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:39:05.066044 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:39:05.066060 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 13 15:39:05.066074 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 15:39:05.066087 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 13 15:39:05.066101 kernel: Booting paravirtualized kernel on Hyper-V Feb 13 15:39:05.066116 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:39:05.066130 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 15:39:05.066144 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 15:39:05.066157 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 15:39:05.066171 kernel: pcpu-alloc: [0] 0 1 Feb 13 15:39:05.066186 kernel: Hyper-V: PV spinlocks enabled Feb 13 15:39:05.066200 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:39:05.066216 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:39:05.066230 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:39:05.066243 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 15:39:05.066257 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:39:05.066271 kernel: Fallback order for Node 0: 0 Feb 13 15:39:05.066284 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 13 15:39:05.066301 kernel: Policy zone: Normal Feb 13 15:39:05.066324 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:39:05.066339 kernel: software IO TLB: area num 2. Feb 13 15:39:05.066356 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved) Feb 13 15:39:05.066371 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:39:05.066385 kernel: ftrace: allocating 37920 entries in 149 pages Feb 13 15:39:05.066400 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:39:05.066414 kernel: Dynamic Preempt: voluntary Feb 13 15:39:05.066429 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:39:05.066445 kernel: rcu: RCU event tracing is enabled. Feb 13 15:39:05.066460 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:39:05.066477 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:39:05.066492 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:39:05.066507 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:39:05.066522 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:39:05.066536 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:39:05.066551 kernel: Using NULL legacy PIC Feb 13 15:39:05.066568 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 13 15:39:05.066582 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:39:05.066597 kernel: Console: colour dummy device 80x25 Feb 13 15:39:05.066611 kernel: printk: console [tty1] enabled Feb 13 15:39:05.066626 kernel: printk: console [ttyS0] enabled Feb 13 15:39:05.066640 kernel: printk: bootconsole [earlyser0] disabled Feb 13 15:39:05.066655 kernel: ACPI: Core revision 20230628 Feb 13 15:39:05.066669 kernel: Failed to register legacy timer interrupt Feb 13 15:39:05.066684 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:39:05.066701 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 15:39:05.066724 kernel: Hyper-V: Using IPI hypercalls Feb 13 15:39:05.066739 kernel: APIC: send_IPI() replaced with hv_send_ipi() Feb 13 15:39:05.066754 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Feb 13 15:39:05.066769 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Feb 13 15:39:05.066784 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Feb 13 15:39:05.066798 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Feb 13 15:39:05.066813 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Feb 13 15:39:05.066828 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Feb 13 15:39:05.066845 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 15:39:05.066858 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 15:39:05.066871 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:39:05.066897 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:39:05.066925 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:39:05.066947 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:39:05.066959 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Feb 13 15:39:05.066973 kernel: RETBleed: Vulnerable Feb 13 15:39:05.066986 kernel: Speculative Store Bypass: Vulnerable Feb 13 15:39:05.067000 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 15:39:05.067017 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 15:39:05.067030 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:39:05.067042 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:39:05.067055 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:39:05.067070 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 15:39:05.067085 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 15:39:05.067098 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 15:39:05.067110 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:39:05.067124 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 13 15:39:05.067136 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 13 15:39:05.067148 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 13 15:39:05.067172 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 13 15:39:05.067186 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:39:05.067198 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:39:05.067210 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:39:05.067223 kernel: landlock: Up and running. Feb 13 15:39:05.067238 kernel: SELinux: Initializing. Feb 13 15:39:05.067249 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:39:05.067262 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:39:05.067276 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 15:39:05.067289 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:39:05.067303 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:39:05.067322 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:39:05.067335 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 15:39:05.067350 kernel: signal: max sigframe size: 3632 Feb 13 15:39:05.067365 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:39:05.067381 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:39:05.067396 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 15:39:05.067411 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:39:05.067426 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:39:05.067440 kernel: .... node #0, CPUs: #1 Feb 13 15:39:05.067458 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 13 15:39:05.067474 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 13 15:39:05.067490 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:39:05.067504 kernel: smpboot: Max logical packages: 1 Feb 13 15:39:05.067520 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 13 15:39:05.067535 kernel: devtmpfs: initialized Feb 13 15:39:05.067550 kernel: x86/mm: Memory block size: 128MB Feb 13 15:39:05.067565 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 13 15:39:05.067583 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:39:05.067598 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:39:05.067613 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:39:05.067628 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:39:05.067644 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:39:05.067659 kernel: audit: type=2000 audit(1739461144.027:1): state=initialized audit_enabled=0 res=1 Feb 13 15:39:05.067674 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:39:05.067687 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:39:05.067700 kernel: cpuidle: using governor menu Feb 13 15:39:05.067729 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:39:05.067743 kernel: dca service started, version 1.12.1 Feb 13 15:39:05.067762 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Feb 13 15:39:05.067776 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 15:39:05.067791 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:39:05.067805 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:39:05.067819 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:39:05.067833 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:39:05.067847 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:39:05.067866 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:39:05.067880 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:39:05.067893 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:39:05.067907 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:39:05.067921 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:39:05.067935 kernel: ACPI: Interpreter enabled Feb 13 15:39:05.067950 kernel: ACPI: PM: (supports S0 S5) Feb 13 15:39:05.067964 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:39:05.067977 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:39:05.067995 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 15:39:05.068009 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 13 15:39:05.068023 kernel: iommu: Default domain type: Translated Feb 13 15:39:05.068037 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:39:05.068051 kernel: efivars: Registered efivars operations Feb 13 15:39:05.068065 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:39:05.068079 kernel: PCI: System does not support PCI Feb 13 15:39:05.068092 kernel: vgaarb: loaded Feb 13 15:39:05.068107 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 13 15:39:05.068124 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:39:05.068138 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:39:05.068152 kernel: 
pnp: PnP ACPI init Feb 13 15:39:05.068166 kernel: pnp: PnP ACPI: found 3 devices Feb 13 15:39:05.068180 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:39:05.068195 kernel: NET: Registered PF_INET protocol family Feb 13 15:39:05.068208 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 15:39:05.068223 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 15:39:05.068237 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:39:05.068253 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:39:05.068268 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 15:39:05.068281 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 15:39:05.068296 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:39:05.068310 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:39:05.068324 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:39:05.068338 kernel: NET: Registered PF_XDP protocol family Feb 13 15:39:05.068352 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:39:05.068366 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 15:39:05.068383 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Feb 13 15:39:05.068397 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 15:39:05.068411 kernel: Initialise system trusted keyrings Feb 13 15:39:05.068425 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 15:39:05.068438 kernel: Key type asymmetric registered Feb 13 15:39:05.068453 kernel: Asymmetric key parser 'x509' registered Feb 13 15:39:05.068466 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:39:05.068480 kernel: io scheduler mq-deadline registered Feb 13 15:39:05.068494 kernel: io scheduler kyber registered Feb 13 15:39:05.068510 kernel: io scheduler bfq registered Feb 13 15:39:05.068524 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:39:05.068539 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:39:05.068552 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:39:05.068566 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 15:39:05.068580 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 15:39:05.068799 kernel: rtc_cmos 00:02: registered as rtc0 Feb 13 15:39:05.068940 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T15:39:04 UTC (1739461144) Feb 13 15:39:05.069064 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 13 15:39:05.069083 kernel: intel_pstate: CPU model not supported Feb 13 15:39:05.069098 kernel: efifb: probing for efifb Feb 13 15:39:05.069112 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 15:39:05.069126 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 15:39:05.069140 kernel: efifb: scrolling: redraw Feb 13 15:39:05.069155 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 15:39:05.069169 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:39:05.069187 kernel: fb0: EFI VGA frame buffer device Feb 13 15:39:05.069201 kernel: pstore: Using crash dump compression: deflate Feb 13 15:39:05.069215 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 15:39:05.069230 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:39:05.069244 kernel: Segment Routing with IPv6 Feb 13 15:39:05.069258 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:39:05.069272 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:39:05.069286 kernel: Key type dns_resolver registered Feb 13 15:39:05.069300 kernel: IPI shorthand broadcast: enabled Feb 13 15:39:05.069315 kernel: sched_clock: Marking stable (813040000, 42382700)->(1053276100, -197853400) Feb 13 15:39:05.069332 kernel: registered taskstats version 1 Feb 13 15:39:05.069347 kernel: Loading compiled-in X.509 certificates Feb 13 15:39:05.069361 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0' Feb 13 15:39:05.069374 kernel: Key type .fscrypt registered Feb 13 15:39:05.069389 kernel: Key type fscrypt-provisioning registered Feb 13 15:39:05.069403 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:39:05.069417 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:39:05.069431 kernel: ima: No architecture policies found Feb 13 15:39:05.069447 kernel: clk: Disabling unused clocks Feb 13 15:39:05.069462 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 15:39:05.069476 kernel: Write protecting the kernel read-only data: 36864k Feb 13 15:39:05.069490 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 15:39:05.069505 kernel: Run /init as init process Feb 13 15:39:05.069519 kernel: with arguments: Feb 13 15:39:05.069532 kernel: /init Feb 13 15:39:05.069546 kernel: with environment: Feb 13 15:39:05.069559 kernel: HOME=/ Feb 13 15:39:05.069573 kernel: TERM=linux Feb 13 15:39:05.069589 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:39:05.069606 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:39:05.069623 systemd[1]: Detected virtualization microsoft. Feb 13 15:39:05.069638 systemd[1]: Detected architecture x86-64. Feb 13 15:39:05.069652 systemd[1]: Running in initrd. Feb 13 15:39:05.069667 systemd[1]: No hostname configured, using default hostname. Feb 13 15:39:05.069680 systemd[1]: Hostname set to . Feb 13 15:39:05.069699 systemd[1]: Initializing machine ID from random generator. 
Feb 13 15:39:05.069730 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:39:05.069744 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:39:05.069757 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:39:05.069770 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:39:05.069786 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:39:05.069801 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:39:05.069816 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:39:05.069836 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:39:05.069851 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:39:05.069867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:39:05.069882 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:39:05.069897 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:39:05.069910 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:39:05.069935 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:39:05.069952 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:39:05.069967 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:39:05.069982 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:39:05.070000 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:39:05.070014 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:39:05.070030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:39:05.070046 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:39:05.070062 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:39:05.070082 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:39:05.070098 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:39:05.070114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:39:05.070131 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:39:05.070146 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:39:05.070162 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:39:05.070178 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:39:05.070218 systemd-journald[177]: Collecting audit messages is disabled. Feb 13 15:39:05.070257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:05.070274 systemd-journald[177]: Journal started Feb 13 15:39:05.070306 systemd-journald[177]: Runtime Journal (/run/log/journal/ebd10bee0ef849e99b2328bbbbcfdfd4) is 8.0M, max 158.8M, 150.8M free. Feb 13 15:39:05.067197 systemd-modules-load[178]: Inserted module 'overlay' Feb 13 15:39:05.085832 systemd[1]: Started systemd-journald.service - Journal Service. 
Feb 13 15:39:05.083373 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:39:05.090988 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:39:05.098994 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:39:05.110090 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:39:05.114770 kernel: Bridge firewalling registered Feb 13 15:39:05.114392 systemd-modules-load[178]: Inserted module 'br_netfilter' Feb 13 15:39:05.117948 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:39:05.125233 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:39:05.129754 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:39:05.134811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:05.142780 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:39:05.146319 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:39:05.158936 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:39:05.162845 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:39:05.163519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:39:05.182975 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:39:05.191626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:39:05.200899 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:39:05.206667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:39:05.218897 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:39:05.234469 dracut-cmdline[216]: dracut-dracut-053 Feb 13 15:39:05.239287 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:39:05.267133 systemd-resolved[209]: Positive Trust Anchors: Feb 13 15:39:05.267155 systemd-resolved[209]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:39:05.267209 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:39:05.292023 systemd-resolved[209]: Defaulting to hostname 'linux'. Feb 13 15:39:05.295403 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:39:05.298199 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:39:05.325733 kernel: SCSI subsystem initialized Feb 13 15:39:05.335732 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:39:05.346736 kernel: iscsi: registered transport (tcp) Feb 13 15:39:05.369034 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:39:05.369156 kernel: QLogic iSCSI HBA Driver Feb 13 15:39:05.405230 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:39:05.414914 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:39:05.444759 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:39:05.444864 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:39:05.447860 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:39:05.487736 kernel: raid6: avx512x4 gen() 18531 MB/s Feb 13 15:39:05.506724 kernel: raid6: avx512x2 gen() 18536 MB/s Feb 13 15:39:05.525719 kernel: raid6: avx512x1 gen() 18470 MB/s Feb 13 15:39:05.544723 kernel: raid6: avx2x4 gen() 18498 MB/s Feb 13 15:39:05.564723 kernel: raid6: avx2x2 gen() 18462 MB/s Feb 13 15:39:05.585176 kernel: raid6: avx2x1 gen() 13920 MB/s Feb 13 15:39:05.585235 kernel: raid6: using algorithm avx512x2 gen() 18536 MB/s Feb 13 15:39:05.606406 kernel: raid6: .... xor() 30460 MB/s, rmw enabled Feb 13 15:39:05.606446 kernel: raid6: using avx512x2 recovery algorithm Feb 13 15:39:05.628738 kernel: xor: automatically using best checksumming function avx Feb 13 15:39:05.774737 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:39:05.784776 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:39:05.794947 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:39:05.809018 systemd-udevd[398]: Using default interface naming scheme 'v255'. Feb 13 15:39:05.813510 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:39:05.828869 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:39:05.842782 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Feb 13 15:39:05.871910 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:39:05.880869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:39:05.921575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:39:05.934971 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Feb 13 15:39:05.968867 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:39:05.976455 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:39:05.983023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:39:05.988662 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:39:05.998925 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:39:06.017731 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:39:06.021855 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:39:06.024699 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:39:06.032966 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:39:06.039790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:39:06.039979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:06.046295 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:06.065096 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:39:06.065122 kernel: AES CTR mode by8 optimization enabled Feb 13 15:39:06.061186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:06.068733 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:39:06.075152 kernel: hv_vmbus: Vmbus version:5.2 Feb 13 15:39:06.093404 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:39:06.115297 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 15:39:06.093706 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:06.103972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:06.124415 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 15:39:06.124452 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:39:06.124465 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 15:39:06.139375 kernel: PTP clock support registered Feb 13 15:39:06.151570 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 15:39:06.151645 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 15:39:06.155862 kernel: scsi host1: storvsc_host_t Feb 13 15:39:06.155952 kernel: scsi host0: storvsc_host_t Feb 13 15:39:06.164747 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 15:39:06.164813 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 15:39:06.173748 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 15:39:06.173794 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 15:39:06.173829 kernel: hv_vmbus: registering driver hv_utils Feb 13 15:39:06.181526 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 15:39:06.181587 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 15:39:07.106205 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 15:39:07.106387 systemd-resolved[209]: Clock change detected. Flushing caches. 
Feb 13 15:39:07.128896 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 15:39:07.128951 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 15:39:07.128974 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 15:39:07.129604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:07.142127 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:39:07.166852 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 15:39:07.169645 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:39:07.169671 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 15:39:07.180583 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:39:07.194723 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 15:39:07.199270 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 15:39:07.199417 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 15:39:07.199558 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 15:39:07.199680 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 15:39:07.199798 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:39:07.199812 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 15:39:07.301173 kernel: hv_netvsc 6045bde0-b7e2-6045-bde0-b7e26045bde0 eth0: VF slot 1 added Feb 13 15:39:07.309049 kernel: hv_vmbus: registering driver hv_pci Feb 13 15:39:07.313926 kernel: hv_pci 117ae9b3-17ff-49b4-87bc-308f83f37065: PCI VMBus probing: Using version 0x10004 Feb 13 15:39:07.391979 kernel: hv_pci 117ae9b3-17ff-49b4-87bc-308f83f37065: PCI host bridge to bus 17ff:00 Feb 13 15:39:07.392121 kernel: pci_bus 17ff:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 13 15:39:07.392251 kernel: pci_bus 17ff:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 15:39:07.392368 kernel: pci 17ff:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 13 15:39:07.392496 kernel: pci 17ff:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 15:39:07.392611 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (443) Feb 13 15:39:07.392627 kernel: pci 17ff:00:02.0: enabling Extended Tags Feb 13 15:39:07.392746 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (460) Feb 13 15:39:07.392758 kernel: pci 17ff:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 17ff:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 13 15:39:07.392869 kernel: pci_bus 17ff:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 15:39:07.392991 kernel: pci 17ff:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 15:39:07.332268 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 15:39:07.388186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:39:07.407274 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 15:39:07.414345 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 15:39:07.430633 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Feb 13 15:39:07.452068 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:39:07.476947 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:39:07.648929 kernel: mlx5_core 17ff:00:02.0: enabling device (0000 -> 0002) Feb 13 15:39:07.892463 kernel: mlx5_core 17ff:00:02.0: firmware version: 14.30.5000 Feb 13 15:39:07.892714 kernel: hv_netvsc 6045bde0-b7e2-6045-bde0-b7e26045bde0 eth0: VF registering: eth1 Feb 13 15:39:07.892877 kernel: mlx5_core 17ff:00:02.0 eth1: joined to eth0 Feb 13 15:39:07.893087 kernel: mlx5_core 17ff:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 15:39:07.898923 kernel: mlx5_core 17ff:00:02.0 enP6143s1: renamed from eth1 Feb 13 15:39:08.496925 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:39:08.498286 disk-uuid[596]: The operation has completed successfully. Feb 13 15:39:08.584181 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:39:08.584299 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:39:08.600081 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:39:08.605688 sh[691]: Success Feb 13 15:39:08.624208 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 15:39:08.693980 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:39:08.713048 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:39:08.716733 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:39:08.743917 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 15:39:08.743974 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:39:08.749089 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:39:08.749914 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:39:08.754881 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:39:08.813428 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:39:08.820618 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:39:08.830056 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:39:08.836136 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:39:08.853407 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:39:08.853473 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:39:08.855101 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:39:08.865160 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:39:08.875280 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:39:08.881074 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:39:08.888515 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:39:08.898082 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:39:08.965111 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:39:08.974144 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 15:39:09.006403 systemd-networkd[875]: lo: Link UP Feb 13 15:39:09.006414 systemd-networkd[875]: lo: Gained carrier Feb 13 15:39:09.008600 systemd-networkd[875]: Enumeration completed Feb 13 15:39:09.008875 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:39:09.012840 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:39:09.012844 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:39:09.015296 systemd[1]: Reached target network.target - Network. Feb 13 15:39:09.075928 kernel: mlx5_core 17ff:00:02.0 enP6143s1: Link up Feb 13 15:39:09.115096 kernel: hv_netvsc 6045bde0-b7e2-6045-bde0-b7e26045bde0 eth0: Data path switched to VF: enP6143s1 Feb 13 15:39:09.116696 systemd-networkd[875]: enP6143s1: Link UP Feb 13 15:39:09.116817 systemd-networkd[875]: eth0: Link UP Feb 13 15:39:09.121182 systemd-networkd[875]: eth0: Gained carrier Feb 13 15:39:09.121199 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:39:09.132378 systemd-networkd[875]: enP6143s1: Gained carrier Feb 13 15:39:09.139506 ignition[792]: Ignition 2.20.0 Feb 13 15:39:09.139518 ignition[792]: Stage: fetch-offline Feb 13 15:39:09.139559 ignition[792]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:09.139571 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:09.139816 ignition[792]: parsed url from cmdline: "" Feb 13 15:39:09.139822 ignition[792]: no config URL provided Feb 13 15:39:09.139830 ignition[792]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:39:09.139842 ignition[792]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:39:09.139848 ignition[792]: failed to fetch config: resource requires networking Feb 13 15:39:09.141102 ignition[792]: Ignition finished successfully Feb 13 15:39:09.156790 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:39:09.158998 systemd-networkd[875]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 15:39:09.174069 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 15:39:09.185981 ignition[885]: Ignition 2.20.0 Feb 13 15:39:09.185992 ignition[885]: Stage: fetch Feb 13 15:39:09.186209 ignition[885]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:09.186222 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:09.186293 ignition[885]: parsed url from cmdline: "" Feb 13 15:39:09.186296 ignition[885]: no config URL provided Feb 13 15:39:09.186300 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:39:09.186306 ignition[885]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:39:09.187836 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 15:39:09.281149 ignition[885]: GET result: OK Feb 13 15:39:09.281255 ignition[885]: config has been read from IMDS userdata Feb 13 15:39:09.281279 ignition[885]: parsing config with SHA512: 57fcb41024fc28c1f6c968b6ef91f4a964e3b23c1697250f6b11a0341062a62eeffb135f9f7f27f8b43f175fe7819e2d6371a39f03b16d8ace7f4bb6a381439a Feb 13 15:39:09.286243 unknown[885]: fetched base config from "system" Feb 13 15:39:09.286953 unknown[885]: fetched base config from "system" Feb 13 15:39:09.287215 ignition[885]: fetch: fetch complete Feb 13 15:39:09.286959 unknown[885]: fetched user config from "azure" Feb 13 15:39:09.287220 ignition[885]: fetch: fetch passed Feb 13 15:39:09.292303 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:39:09.287279 ignition[885]: Ignition finished successfully Feb 13 15:39:09.304146 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:39:09.318559 ignition[891]: Ignition 2.20.0 Feb 13 15:39:09.318570 ignition[891]: Stage: kargs Feb 13 15:39:09.321515 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:39:09.318795 ignition[891]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:09.318809 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:09.319489 ignition[891]: kargs: kargs passed Feb 13 15:39:09.319531 ignition[891]: Ignition finished successfully Feb 13 15:39:09.337335 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:39:09.350306 ignition[897]: Ignition 2.20.0 Feb 13 15:39:09.350317 ignition[897]: Stage: disks Feb 13 15:39:09.350534 ignition[897]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:09.350548 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:09.356535 ignition[897]: disks: disks passed Feb 13 15:39:09.358726 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:39:09.356581 ignition[897]: Ignition finished successfully Feb 13 15:39:09.361780 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:39:09.370021 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:39:09.378775 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:39:09.381267 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:39:09.388536 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:39:09.394111 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:39:09.418662 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 15:39:09.422791 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Feb 13 15:39:09.432127 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:39:09.521944 kernel: EXT4-fs (sda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 15:39:09.522666 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:39:09.527219 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:39:09.542994 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:39:09.548697 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:39:09.557915 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916) Feb 13 15:39:09.558625 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 15:39:09.569224 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:39:09.569252 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:39:09.569272 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:39:09.572450 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:39:09.573369 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:39:09.592296 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:39:09.598550 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:39:09.599569 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:39:09.606836 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:39:09.750246 coreos-metadata[918]: Feb 13 15:39:09.750 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:39:09.756866 coreos-metadata[918]: Feb 13 15:39:09.756 INFO Fetch successful Feb 13 15:39:09.756866 coreos-metadata[918]: Feb 13 15:39:09.756 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:39:09.771676 coreos-metadata[918]: Feb 13 15:39:09.771 INFO Fetch successful Feb 13 15:39:09.774924 coreos-metadata[918]: Feb 13 15:39:09.774 INFO wrote hostname ci-4152.2.1-a-a4d4c6cb32 to /sysroot/etc/hostname Feb 13 15:39:09.780597 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:39:09.795776 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:39:09.806586 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:39:09.813422 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:39:09.823322 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:39:10.069966 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:39:10.079030 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:39:10.084812 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:39:10.096243 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:39:10.101918 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:39:10.121271 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 15:39:10.132074 ignition[1038]: INFO : Ignition 2.20.0 Feb 13 15:39:10.132074 ignition[1038]: INFO : Stage: mount Feb 13 15:39:10.136130 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:10.136130 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:10.136130 ignition[1038]: INFO : mount: mount passed Feb 13 15:39:10.136130 ignition[1038]: INFO : Ignition finished successfully Feb 13 15:39:10.141565 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:39:10.160043 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:39:10.168585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:39:10.183929 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1048) Feb 13 15:39:10.187916 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:39:10.187949 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:39:10.192291 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:39:10.197928 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:39:10.199223 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:39:10.220832 ignition[1065]: INFO : Ignition 2.20.0 Feb 13 15:39:10.220832 ignition[1065]: INFO : Stage: files Feb 13 15:39:10.224734 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:10.224734 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:10.230182 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:39:10.238519 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:39:10.238519 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:39:10.263545 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 
15:39:10.267368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 15:39:10.264096 unknown[1065]: wrote ssh authorized keys file for user: core Feb 13 15:39:10.779783 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 15:39:10.850058 systemd-networkd[875]: eth0: Gained IPv6LL Feb 13 15:39:11.087775 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:39:11.093427 ignition[1065]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:39:11.093427 ignition[1065]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:39:11.102148 ignition[1065]: INFO : files: files passed Feb 13 15:39:11.102148 ignition[1065]: INFO : Ignition finished successfully Feb 13 15:39:11.101828 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:39:11.111483 systemd-networkd[875]: enP6143s1: Gained IPv6LL Feb 13 15:39:11.119098 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:39:11.125049 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:39:11.134135 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:39:11.136056 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:39:11.145112 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:39:11.145112 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:39:11.152974 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:39:11.158679 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:39:11.165010 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:39:11.177033 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:39:11.204257 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:39:11.204374 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:39:11.209578 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:39:11.215021 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:39:11.217448 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:39:11.227058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:39:11.244261 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:39:11.257066 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:39:11.272874 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:39:11.278772 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:39:11.284223 systemd[1]: Stopped target timers.target - Timer Units. 
Feb 13 15:39:11.286631 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:39:11.286749 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:39:11.292002 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:39:11.296217 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:39:11.306119 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:39:11.308927 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:39:11.314218 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:39:11.319611 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:39:11.326957 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:39:11.329989 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:39:11.337379 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:39:11.343337 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:39:11.347257 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:39:11.347419 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:39:11.352552 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:39:11.357917 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:39:11.363218 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:39:11.365714 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:39:11.368992 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:39:11.376640 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:39:11.382005 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:39:11.384755 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:39:11.391307 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:39:11.393621 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:39:11.398465 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:39:11.401025 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:39:11.415213 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:39:11.420412 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:39:11.424649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:39:11.425303 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:39:11.433084 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:39:11.433186 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 15:39:11.446595 ignition[1117]: INFO : Ignition 2.20.0 Feb 13 15:39:11.446595 ignition[1117]: INFO : Stage: umount Feb 13 15:39:11.446595 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:39:11.446595 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:39:11.446595 ignition[1117]: INFO : umount: umount passed Feb 13 15:39:11.446595 ignition[1117]: INFO : Ignition finished successfully Feb 13 15:39:11.447886 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:39:11.447987 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:39:11.463419 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:39:11.463742 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:39:11.469502 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:39:11.469560 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:39:11.476412 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:39:11.478496 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:39:11.483097 systemd[1]: Stopped target network.target - Network. Feb 13 15:39:11.493020 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:39:11.493081 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:39:11.495821 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:39:11.498094 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:39:11.505951 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:39:11.509931 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:39:11.512005 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:39:11.514497 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:39:11.516443 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:39:11.526461 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:39:11.526522 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:39:11.531011 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:39:11.531071 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:39:11.535261 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:39:11.541346 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:39:11.552953 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:39:11.558955 systemd-networkd[875]: eth0: DHCPv6 lease lost Feb 13 15:39:11.559084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:39:11.567543 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:39:11.570367 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:39:11.572594 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:39:11.577329 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:39:11.577481 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:39:11.583042 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:39:11.583139 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:39:11.590536 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Feb 13 15:39:11.590600 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:39:11.603062 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:39:11.605366 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:39:11.605417 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:39:11.614128 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:39:11.614185 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:39:11.624266 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:39:11.624325 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:39:11.629199 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:39:11.631558 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:39:11.639783 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:39:11.654578 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:39:11.657295 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:39:11.660444 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:39:11.660487 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:39:11.670614 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:39:11.673148 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:39:11.675692 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:39:11.675746 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:39:11.684923 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:39:11.684969 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:39:11.687679 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:39:11.687720 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:39:11.704142 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:39:11.711034 kernel: hv_netvsc 6045bde0-b7e2-6045-bde0-b7e26045bde0 eth0: Data path switched from VF: enP6143s1 Feb 13 15:39:11.710644 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:39:11.710716 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:39:11.716155 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:39:11.716215 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:39:11.728182 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:39:11.728244 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:39:11.736133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:39:11.736194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:11.744247 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:39:11.746504 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:39:11.749234 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 13 15:39:11.749311 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:39:13.112572 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:39:13.112712 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:39:13.115570 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:39:13.119765 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:39:13.119833 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:39:13.133087 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:39:13.142281 systemd[1]: Switching root. Feb 13 15:39:13.179816 systemd-journald[177]: Journal stopped Feb 13 15:39:17.516839 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Feb 13 15:39:17.516883 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:39:17.516923 kernel: SELinux: policy capability open_perms=1 Feb 13 15:39:17.516939 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:39:17.516951 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:39:17.516965 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:39:17.516980 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:39:17.516998 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:39:17.517012 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:39:17.517026 kernel: audit: type=1403 audit(1739461156.092:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:39:17.517041 systemd[1]: Successfully loaded SELinux policy in 76.330ms. Feb 13 15:39:17.517057 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.344ms. Feb 13 15:39:17.517073 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:39:17.517089 systemd[1]: Detected virtualization microsoft. Feb 13 15:39:17.517110 systemd[1]: Detected architecture x86-64. Feb 13 15:39:17.517127 systemd[1]: Detected first boot. Feb 13 15:39:17.517144 systemd[1]: Hostname set to ci-4152.2.1-a-a4d4c6cb32. Feb 13 15:39:17.517163 systemd[1]: Initializing machine ID from random generator. Feb 13 15:39:17.517179 zram_generator::config[1159]: No configuration found. Feb 13 15:39:17.517199 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:39:17.517215 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:39:17.517231 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:39:17.517246 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:39:17.517263 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:39:17.517279 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:39:17.517296 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:39:17.517316 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:39:17.517332 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:39:17.517349 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Feb 13 15:39:17.517365 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:39:17.517381 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:39:17.517398 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:39:17.517415 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:39:17.517431 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:39:17.517450 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:39:17.517467 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:39:17.517484 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:39:17.517501 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:39:17.517519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:39:17.517537 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:39:17.517558 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:39:17.517576 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:39:17.517597 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:39:17.517614 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:39:17.517631 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:39:17.517649 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:39:17.517666 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:39:17.517683 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:39:17.517700 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:39:17.517720 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:39:17.517738 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:39:17.517757 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:39:17.517775 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:39:17.517792 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:39:17.517813 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:39:17.517831 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:39:17.517849 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:17.517867 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:39:17.517885 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:39:17.517920 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:39:17.517937 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:39:17.517962 systemd[1]: Reached target machines.target - Containers. 
Feb 13 15:39:17.517983 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:39:17.517999 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:39:17.518016 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:39:17.518034 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:39:17.518052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:39:17.518070 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:39:17.518086 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:39:17.518102 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:39:17.518120 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:39:17.518142 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:39:17.518161 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:39:17.518180 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:39:17.518197 kernel: fuse: init (API version 7.39) Feb 13 15:39:17.518214 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:39:17.518233 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:39:17.518251 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:39:17.518269 kernel: loop: module loaded Feb 13 15:39:17.518289 kernel: ACPI: bus type drm_connector registered Feb 13 15:39:17.518307 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:39:17.518326 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:39:17.518374 systemd-journald[1262]: Collecting audit messages is disabled. Feb 13 15:39:17.518411 systemd-journald[1262]: Journal started Feb 13 15:39:17.518444 systemd-journald[1262]: Runtime Journal (/run/log/journal/c0cc0e3899a64e7ab244acd251ca717a) is 8.0M, max 158.8M, 150.8M free. Feb 13 15:39:17.519351 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:39:16.963615 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:39:17.006895 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:39:17.007307 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:39:17.536941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:39:17.542434 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:39:17.542504 systemd[1]: Stopped verity-setup.service. Feb 13 15:39:17.551938 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:17.560477 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:39:17.561479 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:39:17.564419 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:39:17.567551 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:39:17.570466 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Feb 13 15:39:17.573538 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:39:17.576742 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:39:17.579663 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:39:17.583236 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:39:17.587084 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:39:17.587399 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:39:17.590887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:39:17.591317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:39:17.594610 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:39:17.595012 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:39:17.598485 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:39:17.598765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:39:17.602364 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:39:17.602596 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:39:17.605801 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:39:17.606079 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:39:17.609156 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:39:17.612266 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:39:17.615679 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:39:17.618662 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:39:17.630889 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:39:17.639099 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:39:17.645279 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:39:17.648265 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:39:17.648315 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:39:17.651793 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:39:17.659063 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:39:17.662803 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:39:17.667064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:39:17.682114 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:39:17.690270 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:39:17.693157 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:39:17.698357 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Feb 13 15:39:17.701389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:39:17.705408 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:39:17.716055 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:39:17.722858 systemd-journald[1262]: Time spent on flushing to /var/log/journal/c0cc0e3899a64e7ab244acd251ca717a is 69.333ms for 939 entries. Feb 13 15:39:17.722858 systemd-journald[1262]: System Journal (/var/log/journal/c0cc0e3899a64e7ab244acd251ca717a) is 8.0M, max 2.6G, 2.6G free. Feb 13 15:39:21.347684 systemd-journald[1262]: Received client request to flush runtime journal. Feb 13 15:39:21.347783 kernel: loop0: detected capacity change from 0 to 211296 Feb 13 15:39:21.347813 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:39:21.347833 kernel: loop1: detected capacity change from 0 to 138184 Feb 13 15:39:21.347930 kernel: loop2: detected capacity change from 0 to 140992 Feb 13 15:39:17.729152 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:39:17.735284 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:39:17.743234 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:39:17.749382 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:39:17.752479 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:39:17.758818 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:39:17.765399 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:39:17.779088 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:39:17.812094 udevadm[1298]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:39:17.832415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:39:18.364267 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Feb 13 15:39:18.364286 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Feb 13 15:39:18.371055 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:39:18.380141 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:39:19.356596 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:39:19.365195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:39:19.383674 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Feb 13 15:39:19.383688 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Feb 13 15:39:19.387552 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:39:21.349614 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:39:21.594689 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:39:21.595399 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Feb 13 15:39:21.669528 kernel: loop3: detected capacity change from 0 to 28272 Feb 13 15:39:21.817556 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:39:21.825130 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:39:21.836927 kernel: loop4: detected capacity change from 0 to 211296 Feb 13 15:39:21.849166 kernel: loop5: detected capacity change from 0 to 138184 Feb 13 15:39:21.867585 kernel: loop6: detected capacity change from 0 to 140992 Feb 13 15:39:21.870603 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Feb 13 15:39:21.893173 kernel: loop7: detected capacity change from 0 to 28272 Feb 13 15:39:21.897886 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 15:39:21.899271 (sd-merge)[1325]: Merged extensions into '/usr'. Feb 13 15:39:21.902963 systemd[1]: Reloading requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:39:21.902979 systemd[1]: Reloading... Feb 13 15:39:21.994952 zram_generator::config[1349]: No configuration found. Feb 13 15:39:22.234735 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:39:22.256926 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 15:39:22.262243 kernel: hv_vmbus: registering driver hv_balloon Feb 13 15:39:22.273821 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 15:39:22.273897 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 15:39:22.273949 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 15:39:22.279976 kernel: Console: switching to colour dummy device 80x25 Feb 13 15:39:22.286924 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:39:22.359150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:22.523101 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:39:22.523227 systemd[1]: Reloading finished in 619 ms. Feb 13 15:39:22.566898 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:39:22.572387 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:39:22.632951 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1372) Feb 13 15:39:22.693329 systemd[1]: Starting ensure-sysext.service... Feb 13 15:39:22.703121 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:39:22.709096 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:39:22.718297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:22.734778 systemd[1]: Reloading requested from client PID 1462 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:39:22.734796 systemd[1]: Reloading... Feb 13 15:39:22.828558 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Feb 13 15:39:22.886038 systemd-tmpfiles[1477]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:39:22.886578 systemd-tmpfiles[1477]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Feb 13 15:39:22.901959 systemd-tmpfiles[1477]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:39:22.904369 systemd-tmpfiles[1477]: ACLs are not supported, ignoring. Feb 13 15:39:22.905145 systemd-tmpfiles[1477]: ACLs are not supported, ignoring. Feb 13 15:39:22.920204 zram_generator::config[1528]: No configuration found. Feb 13 15:39:22.925817 systemd-tmpfiles[1477]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:39:22.925831 systemd-tmpfiles[1477]: Skipping /boot Feb 13 15:39:22.961871 systemd-tmpfiles[1477]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:39:22.962962 systemd-tmpfiles[1477]: Skipping /boot Feb 13 15:39:23.130102 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:23.209095 ldconfig[1291]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:39:23.219184 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:39:23.223686 systemd[1]: Reloading finished in 488 ms. Feb 13 15:39:23.242184 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:39:23.249415 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:39:23.252895 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:23.283422 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:39:23.290397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:23.295181 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:39:23.299119 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:39:23.302318 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:39:23.308999 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:39:23.313583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:39:23.326154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:39:23.335235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:39:23.338049 lvm[1606]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:39:23.338756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:39:23.346257 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:39:23.361584 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:39:23.370400 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:39:23.383955 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:39:23.396536 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:39:23.402050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 15:39:23.402260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:23.405741 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:23.415300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:23.421452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:23.425958 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:39:23.431306 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:39:23.431506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:39:23.434880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:39:23.435112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:39:23.441514 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:39:23.441972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:39:23.447214 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:39:23.463186 augenrules[1639]: No rules Feb 13 15:39:23.465833 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:39:23.466800 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:39:23.487219 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:39:23.491189 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:23.502210 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:39:23.503496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:39:23.508009 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:39:23.520252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:39:23.524241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:39:23.532310 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:39:23.544207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:39:23.548052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:39:23.548345 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:39:23.554262 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:23.556747 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:39:23.565896 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:39:23.571469 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:39:23.574618 lvm[1652]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:39:23.575966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:39:23.580428 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 15:39:23.581059 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:39:23.586530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:39:23.586700 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:39:23.605639 systemd[1]: Finished ensure-sysext.service. Feb 13 15:39:23.611449 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:39:23.611929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:39:23.627704 augenrules[1650]: /sbin/augenrules: No change Feb 13 15:39:23.620821 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:39:23.624572 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:39:23.624743 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:39:23.632170 augenrules[1685]: No rules Feb 13 15:39:23.633654 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:39:23.634596 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:39:23.641813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:39:23.644157 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:39:23.649783 systemd-networkd[1476]: lo: Link UP Feb 13 15:39:23.649797 systemd-networkd[1476]: lo: Gained carrier Feb 13 15:39:23.653420 systemd-networkd[1476]: Enumeration completed Feb 13 15:39:23.654694 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:39:23.654703 systemd-networkd[1476]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:39:23.655038 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:39:23.658455 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:39:23.671023 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:39:23.684327 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:39:23.688157 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:39:23.698459 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:39:23.710085 systemd-resolved[1618]: Positive Trust Anchors: Feb 13 15:39:23.710098 systemd-resolved[1618]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:39:23.710136 systemd-resolved[1618]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:39:23.711930 kernel: mlx5_core 17ff:00:02.0 enP6143s1: Link up Feb 13 15:39:23.718165 systemd-resolved[1618]: Using system hostname 'ci-4152.2.1-a-a4d4c6cb32'. Feb 13 15:39:23.735918 kernel: hv_netvsc 6045bde0-b7e2-6045-bde0-b7e26045bde0 eth0: Data path switched to VF: enP6143s1 Feb 13 15:39:23.738485 systemd-networkd[1476]: enP6143s1: Link UP Feb 13 15:39:23.738707 systemd-networkd[1476]: eth0: Link UP Feb 13 15:39:23.738716 systemd-networkd[1476]: eth0: Gained carrier Feb 13 15:39:23.738756 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:39:23.742268 systemd-networkd[1476]: enP6143s1: Gained carrier Feb 13 15:39:23.743459 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:39:23.746370 systemd[1]: Reached target network.target - Network. Feb 13 15:39:23.748588 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:39:23.751716 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:39:23.754399 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:39:23.757248 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:39:23.760486 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:39:23.764786 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:39:23.768817 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:39:23.772018 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:39:23.772060 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:39:23.774211 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:39:23.777391 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:39:23.781673 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:39:23.793348 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:39:23.796621 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:39:23.799289 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:39:23.801565 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:39:23.803646 systemd-networkd[1476]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 15:39:23.803844 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 15:39:23.803876 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:39:23.814017 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 15:39:23.818041 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:39:23.832050 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:39:23.839087 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:39:23.849996 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:39:23.855080 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:39:23.858123 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:39:23.858174 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 15:39:23.864892 jq[1705]: false Feb 13 15:39:23.865110 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 15:39:23.871292 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 15:39:23.877141 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:39:23.884565 (chronyd)[1701]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 15:39:23.890173 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:39:23.892678 KVP[1710]: KVP starting; pid is:1710 Feb 13 15:39:23.898928 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:39:23.900975 chronyd[1716]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 15:39:23.910393 chronyd[1716]: Timezone right/UTC failed leap second check, ignoring Feb 13 15:39:23.910817 chronyd[1716]: Loaded seccomp filter (level 2) Feb 13 15:39:23.913302 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:39:23.916493 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:39:23.917147 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:39:23.918084 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:39:23.923710 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:39:23.927202 dbus-daemon[1704]: [system] SELinux support is enabled Feb 13 15:39:23.929854 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:39:23.943358 kernel: hv_utils: KVP IC version 4.0 Feb 13 15:39:23.942062 KVP[1710]: KVP LIC Version: 3.1 Feb 13 15:39:23.943386 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 15:39:23.954363 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:39:23.955325 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:39:23.955703 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 13 15:39:23.956945 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:39:23.981531 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:39:23.981840 jq[1720]: true Feb 13 15:39:23.982300 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:39:23.982348 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:39:23.987596 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:39:23.987628 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:39:23.997554 extend-filesystems[1709]: Found loop4 Feb 13 15:39:23.997554 extend-filesystems[1709]: Found loop5 Feb 13 15:39:23.997554 extend-filesystems[1709]: Found loop6 Feb 13 15:39:23.997554 extend-filesystems[1709]: Found loop7 Feb 13 15:39:23.997554 extend-filesystems[1709]: Found sda Feb 13 15:39:23.997554 extend-filesystems[1709]: Found sda1 Feb 13 15:39:23.997554 extend-filesystems[1709]: Found sda2 Feb 13 15:39:23.997554 extend-filesystems[1709]: Found sda3 Feb 13 15:39:23.997554 extend-filesystems[1709]: Found usr Feb 13 15:39:23.997554 extend-filesystems[1709]: Found sda4 Feb 13 15:39:23.997554 extend-filesystems[1709]: Found sda6 Feb 13 15:39:23.997554 extend-filesystems[1709]: Found sda7 Feb 13 15:39:23.997554 extend-filesystems[1709]: Found sda9 Feb 13 15:39:23.997554 extend-filesystems[1709]: Checking size of /dev/sda9 Feb 13 15:39:24.094665 update_engine[1719]: I20250213 15:39:24.059687 1719 main.cc:92] Flatcar Update Engine starting Feb 13 15:39:24.094665 update_engine[1719]: I20250213 15:39:24.087270 1719 update_check_scheduler.cc:74] Next update check in 6m34s Feb 13 15:39:24.095027 coreos-metadata[1703]: Feb 13 15:39:24.058 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:39:24.095027 coreos-metadata[1703]: Feb 13 15:39:24.062 INFO Fetch successful Feb 13 15:39:24.095027 coreos-metadata[1703]: Feb 13 15:39:24.064 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 15:39:24.095027 coreos-metadata[1703]: Feb 13 15:39:24.071 INFO Fetch successful Feb 13 15:39:24.095027 coreos-metadata[1703]: Feb 13 15:39:24.083 INFO Fetching http://168.63.129.16/machine/dbde177d-81e6-49fe-9ff9-cad4989e55fb/2002151a%2D398b%2D4c39%2D9859%2D94bc5b456517.%5Fci%2D4152.2.1%2Da%2Da4d4c6cb32?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 15:39:24.095027 coreos-metadata[1703]: Feb 13 15:39:24.092 INFO Fetch successful Feb 13 15:39:24.095027 coreos-metadata[1703]: Feb 13 15:39:24.092 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:39:24.055314 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:39:24.095651 extend-filesystems[1709]: Old size kept for /dev/sda9 Feb 13 15:39:24.095651 extend-filesystems[1709]: Found sr0 Feb 13 15:39:24.055581 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 15:39:24.099990 jq[1736]: true Feb 13 15:39:24.061010 systemd-logind[1717]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:39:24.061631 systemd-logind[1717]: New seat seat0. Feb 13 15:39:24.062701 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:39:24.064009 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:39:24.080578 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:39:24.102160 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:39:24.111214 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:39:24.114602 coreos-metadata[1703]: Feb 13 15:39:24.114 INFO Fetch successful Feb 13 15:39:24.200007 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:39:24.207930 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1372) Feb 13 15:39:24.210796 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:39:24.277745 bash[1774]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:39:24.279419 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:39:24.292636 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:39:24.406813 locksmithd[1754]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:39:24.584740 containerd[1726]: time="2025-02-13T15:39:24.583720600Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:39:24.611605 sshd_keygen[1744]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:39:24.630502 containerd[1726]: time="2025-02-13T15:39:24.630457600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632249400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632284000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632304500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632465700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632485800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632559400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632575400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632775900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632795800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632813800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:24.633671 containerd[1726]: time="2025-02-13T15:39:24.632826600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:24.634133 containerd[1726]: time="2025-02-13T15:39:24.632938200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:24.634133 containerd[1726]: time="2025-02-13T15:39:24.633164000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:24.634133 containerd[1726]: time="2025-02-13T15:39:24.633316800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:24.634133 containerd[1726]: time="2025-02-13T15:39:24.633342600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:39:24.634133 containerd[1726]: time="2025-02-13T15:39:24.633459100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:39:24.634133 containerd[1726]: time="2025-02-13T15:39:24.633533000Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:39:24.640168 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:39:24.648249 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:39:24.651480 containerd[1726]: time="2025-02-13T15:39:24.651444000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:39:24.651557 containerd[1726]: time="2025-02-13T15:39:24.651508200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:39:24.651557 containerd[1726]: time="2025-02-13T15:39:24.651532800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:39:24.651633 containerd[1726]: time="2025-02-13T15:39:24.651554300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:39:24.651633 containerd[1726]: time="2025-02-13T15:39:24.651575000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:39:24.651767 containerd[1726]: time="2025-02-13T15:39:24.651741400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 15:39:24.652680 containerd[1726]: time="2025-02-13T15:39:24.652638600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:39:24.652837 containerd[1726]: time="2025-02-13T15:39:24.652812500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:39:24.652911 containerd[1726]: time="2025-02-13T15:39:24.652850300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:39:24.652911 containerd[1726]: time="2025-02-13T15:39:24.652881300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:39:24.652989 containerd[1726]: time="2025-02-13T15:39:24.652923100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:39:24.652989 containerd[1726]: time="2025-02-13T15:39:24.652948700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:39:24.652989 containerd[1726]: time="2025-02-13T15:39:24.652974400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:39:24.653092 containerd[1726]: time="2025-02-13T15:39:24.653000300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:39:24.653092 containerd[1726]: time="2025-02-13T15:39:24.653027100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:39:24.653092 containerd[1726]: time="2025-02-13T15:39:24.653053200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:39:24.653092 containerd[1726]: time="2025-02-13T15:39:24.653078100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:39:24.653223 containerd[1726]: time="2025-02-13T15:39:24.653101300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:39:24.653223 containerd[1726]: time="2025-02-13T15:39:24.653211300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653298 containerd[1726]: time="2025-02-13T15:39:24.653240400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653298 containerd[1726]: time="2025-02-13T15:39:24.653264300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653298 containerd[1726]: time="2025-02-13T15:39:24.653289200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653410 containerd[1726]: time="2025-02-13T15:39:24.653315600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653410 containerd[1726]: time="2025-02-13T15:39:24.653340200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653410 containerd[1726]: time="2025-02-13T15:39:24.653360100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 15:39:24.653410 containerd[1726]: time="2025-02-13T15:39:24.653383100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653541 containerd[1726]: time="2025-02-13T15:39:24.653409400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653541 containerd[1726]: time="2025-02-13T15:39:24.653436600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653541 containerd[1726]: time="2025-02-13T15:39:24.653463400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653654 containerd[1726]: time="2025-02-13T15:39:24.653486400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653654 containerd[1726]: time="2025-02-13T15:39:24.653569400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653654 containerd[1726]: time="2025-02-13T15:39:24.653594800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:39:24.653654 containerd[1726]: time="2025-02-13T15:39:24.653630400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653786 containerd[1726]: time="2025-02-13T15:39:24.653656900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.653786 containerd[1726]: time="2025-02-13T15:39:24.653678800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:39:24.657682 containerd[1726]: time="2025-02-13T15:39:24.657012800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:39:24.657682 containerd[1726]: time="2025-02-13T15:39:24.657061200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:39:24.657682 containerd[1726]: time="2025-02-13T15:39:24.657087300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:39:24.657682 containerd[1726]: time="2025-02-13T15:39:24.657112200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:39:24.657682 containerd[1726]: time="2025-02-13T15:39:24.657131000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:39:24.657682 containerd[1726]: time="2025-02-13T15:39:24.657154000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:39:24.657682 containerd[1726]: time="2025-02-13T15:39:24.657171000Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:39:24.657682 containerd[1726]: time="2025-02-13T15:39:24.657192900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:39:24.658019 containerd[1726]: time="2025-02-13T15:39:24.657641000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:39:24.658019 containerd[1726]: time="2025-02-13T15:39:24.657714600Z" level=info msg="Connect containerd service" Feb 13 15:39:24.658019 containerd[1726]: time="2025-02-13T15:39:24.657781700Z" level=info msg="using legacy CRI server" Feb 13 15:39:24.658019 containerd[1726]: time="2025-02-13T15:39:24.657793800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:39:24.659313 containerd[1726]: time="2025-02-13T15:39:24.658520700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:39:24.660716 containerd[1726]: time="2025-02-13T15:39:24.660207200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:39:24.660668 
systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:39:24.660981 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:39:24.662364 containerd[1726]: time="2025-02-13T15:39:24.662344500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:39:24.662778 containerd[1726]: time="2025-02-13T15:39:24.662761900Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:39:24.662864 containerd[1726]: time="2025-02-13T15:39:24.662697000Z" level=info msg="Start subscribing containerd event" Feb 13 15:39:24.662939 containerd[1726]: time="2025-02-13T15:39:24.662929100Z" level=info msg="Start recovering state" Feb 13 15:39:24.663039 containerd[1726]: time="2025-02-13T15:39:24.663030000Z" level=info msg="Start event monitor" Feb 13 15:39:24.663103 containerd[1726]: time="2025-02-13T15:39:24.663094100Z" level=info msg="Start snapshots syncer" Feb 13 15:39:24.663154 containerd[1726]: time="2025-02-13T15:39:24.663145400Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:39:24.663193 containerd[1726]: time="2025-02-13T15:39:24.663186500Z" level=info msg="Start streaming server" Feb 13 15:39:24.663293 containerd[1726]: time="2025-02-13T15:39:24.663282100Z" level=info msg="containerd successfully booted in 0.080759s" Feb 13 15:39:24.664896 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:39:24.675796 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:39:24.689984 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:39:24.701271 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:39:24.705295 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:39:24.709062 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:39:25.058273 systemd-networkd[1476]: eth0: Gained IPv6LL Feb 13 15:39:25.061896 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:39:25.065978 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:39:25.075121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:25.080266 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:39:25.089139 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 15:39:25.118200 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 15:39:25.141132 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:39:25.443553 systemd-networkd[1476]: enP6143s1: Gained IPv6LL Feb 13 15:39:25.821119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:25.825415 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:39:25.831143 systemd[1]: Startup finished in 598ms (firmware) + 8.182s (loader) + 951ms (kernel) + 10.401s (initrd) + 9.813s (userspace) = 29.948s. Feb 13 15:39:25.847557 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:26.001163 login[1846]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 15:39:26.002863 login[1847]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 15:39:26.016492 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Feb 13 15:39:26.023305 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:39:26.030419 systemd-logind[1717]: New session 1 of user core. Feb 13 15:39:26.039457 systemd-logind[1717]: New session 2 of user core. Feb 13 15:39:26.059050 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:39:26.069023 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:39:26.072345 waagent[1860]: 2025-02-13T15:39:26.071457Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 15:39:26.075454 waagent[1860]: 2025-02-13T15:39:26.075250Z INFO Daemon Daemon OS: flatcar 4152.2.1 Feb 13 15:39:26.079854 waagent[1860]: 2025-02-13T15:39:26.077534Z INFO Daemon Daemon Python: 3.11.10 Feb 13 15:39:26.079974 waagent[1860]: 2025-02-13T15:39:26.079895Z INFO Daemon Daemon Run daemon Feb 13 15:39:26.082820 waagent[1860]: 2025-02-13T15:39:26.082447Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.1' Feb 13 15:39:26.082658 (systemd)[1882]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:39:26.086541 waagent[1860]: 2025-02-13T15:39:26.086472Z INFO Daemon Daemon Using waagent for provisioning Feb 13 15:39:26.089879 waagent[1860]: 2025-02-13T15:39:26.089098Z INFO Daemon Daemon Activate resource disk Feb 13 15:39:26.093931 waagent[1860]: 2025-02-13T15:39:26.091288Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 15:39:26.103940 waagent[1860]: 2025-02-13T15:39:26.103122Z INFO Daemon Daemon Found device: None Feb 13 15:39:26.105953 waagent[1860]: 2025-02-13T15:39:26.105543Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 15:39:26.109302 waagent[1860]: 2025-02-13T15:39:26.109239Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 15:39:26.114578 waagent[1860]: 2025-02-13T15:39:26.114479Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:39:26.117244 waagent[1860]: 2025-02-13T15:39:26.117068Z INFO Daemon Daemon Running default provisioning handler Feb 13 15:39:26.133825 waagent[1860]: 2025-02-13T15:39:26.132874Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 15:39:26.140832 waagent[1860]: 2025-02-13T15:39:26.140775Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 15:39:26.143710 waagent[1860]: 2025-02-13T15:39:26.143660Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 15:39:26.144485 waagent[1860]: 2025-02-13T15:39:26.144448Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 15:39:26.225928 waagent[1860]: 2025-02-13T15:39:26.224166Z INFO Daemon Daemon Successfully mounted dvd Feb 13 15:39:26.254335 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 15:39:26.258349 waagent[1860]: 2025-02-13T15:39:26.258274Z INFO Daemon Daemon Detect protocol endpoint Feb 13 15:39:26.262592 waagent[1860]: 2025-02-13T15:39:26.262459Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:39:26.267886 waagent[1860]: 2025-02-13T15:39:26.267237Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 15:39:26.273290 waagent[1860]: 2025-02-13T15:39:26.272202Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 15:39:26.278714 waagent[1860]: 2025-02-13T15:39:26.278352Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 15:39:26.283310 waagent[1860]: 2025-02-13T15:39:26.282220Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 15:39:26.317914 waagent[1860]: 2025-02-13T15:39:26.315928Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 15:39:26.320915 waagent[1860]: 2025-02-13T15:39:26.319711Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 15:39:26.325501 waagent[1860]: 2025-02-13T15:39:26.322477Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 15:39:26.363253 systemd[1882]: Queued start job for default target default.target. Feb 13 15:39:26.370452 systemd[1882]: Created slice app.slice - User Application Slice. Feb 13 15:39:26.370492 systemd[1882]: Reached target paths.target - Paths. Feb 13 15:39:26.370509 systemd[1882]: Reached target timers.target - Timers. Feb 13 15:39:26.380017 systemd[1882]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:39:26.390252 systemd[1882]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:39:26.390320 systemd[1882]: Reached target sockets.target - Sockets. Feb 13 15:39:26.390339 systemd[1882]: Reached target basic.target - Basic System. Feb 13 15:39:26.390378 systemd[1882]: Reached target default.target - Main User Target. Feb 13 15:39:26.390407 systemd[1882]: Startup finished in 290ms. Feb 13 15:39:26.390514 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:39:26.398488 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:39:26.399724 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:39:26.501416 waagent[1860]: 2025-02-13T15:39:26.500606Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 15:39:26.503739 waagent[1860]: 2025-02-13T15:39:26.503649Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 15:39:26.513297 waagent[1860]: 2025-02-13T15:39:26.512578Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:39:26.530177 waagent[1860]: 2025-02-13T15:39:26.530124Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 15:39:26.533804 waagent[1860]: 2025-02-13T15:39:26.533667Z INFO Daemon Feb 13 15:39:26.536235 waagent[1860]: 2025-02-13T15:39:26.535551Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 84b692bc-8f9f-4906-bfe7-ad1b80c2dd54 eTag: 5466184864976828164 source: Fabric] Feb 13 15:39:26.541859 waagent[1860]: 2025-02-13T15:39:26.541722Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Feb 13 15:39:26.545806 waagent[1860]: 2025-02-13T15:39:26.545591Z INFO Daemon Feb 13 15:39:26.548871 waagent[1860]: 2025-02-13T15:39:26.546997Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:39:26.556393 waagent[1860]: 2025-02-13T15:39:26.556348Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 15:39:26.655070 waagent[1860]: 2025-02-13T15:39:26.654884Z INFO Daemon Downloaded certificate {'thumbprint': 'EF7F6A15D1C0AF66DA10950DF571BE997DE8F102', 'hasPrivateKey': False} Feb 13 15:39:26.661662 waagent[1860]: 2025-02-13T15:39:26.661598Z INFO Daemon Downloaded certificate {'thumbprint': 'CAD0D713841DD47634B437AF08E222A66BE3F0CA', 'hasPrivateKey': True} Feb 13 15:39:26.666980 waagent[1860]: 2025-02-13T15:39:26.666840Z INFO Daemon Fetch goal state completed Feb 13 15:39:26.678197 waagent[1860]: 2025-02-13T15:39:26.677368Z INFO Daemon Daemon Starting provisioning Feb 13 15:39:26.680105 waagent[1860]: 2025-02-13T15:39:26.680026Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 15:39:26.682313 waagent[1860]: 2025-02-13T15:39:26.682177Z INFO Daemon Daemon Set hostname [ci-4152.2.1-a-a4d4c6cb32] Feb 13 15:39:26.690725 waagent[1860]: 2025-02-13T15:39:26.690436Z INFO Daemon Daemon Publish hostname [ci-4152.2.1-a-a4d4c6cb32] Feb 13 15:39:26.694086 waagent[1860]: 2025-02-13T15:39:26.693518Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 15:39:26.696919 waagent[1860]: 2025-02-13T15:39:26.696664Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 15:39:26.715390 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:39:26.715400 systemd-networkd[1476]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:39:26.715455 systemd-networkd[1476]: eth0: DHCP lease lost Feb 13 15:39:26.717988 waagent[1860]: 2025-02-13T15:39:26.717355Z INFO Daemon Daemon Create user account if not exists Feb 13 15:39:26.721068 waagent[1860]: 2025-02-13T15:39:26.720397Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 15:39:26.723312 waagent[1860]: 2025-02-13T15:39:26.723235Z INFO Daemon Daemon Configure sudoer Feb 13 15:39:26.725148 systemd-networkd[1476]: eth0: DHCPv6 lease lost Feb 13 15:39:26.728121 waagent[1860]: 2025-02-13T15:39:26.725728Z INFO Daemon Daemon Configure sshd Feb 13 15:39:26.728167 kubelet[1872]: E0213 15:39:26.728083 1872 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:26.729681 waagent[1860]: 2025-02-13T15:39:26.729622Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 15:39:26.737002 waagent[1860]: 2025-02-13T15:39:26.730616Z INFO Daemon Daemon Deploy ssh public key. Feb 13 15:39:26.741318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:26.741503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:26.763970 systemd-networkd[1476]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 15:39:36.992098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 13 15:39:37.000487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:37.681693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:37.692276 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:37.759437 kubelet[1952]: E0213 15:39:37.759368 1952 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:37.764158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:37.764352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:47.716680 chronyd[1716]: Selected source PHC0 Feb 13 15:39:47.875776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:39:47.881140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:48.234295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:48.249261 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:48.521318 kubelet[1968]: E0213 15:39:48.521185 1968 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:48.524097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:48.524297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:56.809000 waagent[1860]: 2025-02-13T15:39:56.808924Z INFO Daemon Daemon Provisioning complete Feb 13 15:39:56.823160 waagent[1860]: 2025-02-13T15:39:56.823087Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 15:39:56.830412 waagent[1860]: 2025-02-13T15:39:56.824766Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 13 15:39:56.830412 waagent[1860]: 2025-02-13T15:39:56.825715Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 15:39:56.955819 waagent[1976]: 2025-02-13T15:39:56.955704Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 15:39:56.956316 waagent[1976]: 2025-02-13T15:39:56.955898Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.1 Feb 13 15:39:56.956316 waagent[1976]: 2025-02-13T15:39:56.956011Z INFO ExtHandler ExtHandler Python: 3.11.10 Feb 13 15:39:56.972627 waagent[1976]: 2025-02-13T15:39:56.972539Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 15:39:56.972846 waagent[1976]: 2025-02-13T15:39:56.972795Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:39:56.972963 waagent[1976]: 2025-02-13T15:39:56.972898Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:39:56.981229 waagent[1976]: 2025-02-13T15:39:56.981168Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:39:56.991444 waagent[1976]: 2025-02-13T15:39:56.991395Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 15:39:56.991953 waagent[1976]: 2025-02-13T15:39:56.991885Z INFO ExtHandler Feb 13 15:39:56.992043 waagent[1976]: 2025-02-13T15:39:56.992008Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 961aaeaa-c77b-48e0-a55b-da6977b09888 eTag: 5466184864976828164 source: Fabric] Feb 13 15:39:56.992374 waagent[1976]: 2025-02-13T15:39:56.992326Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 13 15:39:56.992981 waagent[1976]: 2025-02-13T15:39:56.992928Z INFO ExtHandler Feb 13 15:39:56.993057 waagent[1976]: 2025-02-13T15:39:56.993026Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:39:56.996974 waagent[1976]: 2025-02-13T15:39:56.996932Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 15:39:57.080698 waagent[1976]: 2025-02-13T15:39:57.080540Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EF7F6A15D1C0AF66DA10950DF571BE997DE8F102', 'hasPrivateKey': False} Feb 13 15:39:57.081133 waagent[1976]: 2025-02-13T15:39:57.081075Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CAD0D713841DD47634B437AF08E222A66BE3F0CA', 'hasPrivateKey': True} Feb 13 15:39:57.081595 waagent[1976]: 2025-02-13T15:39:57.081544Z INFO ExtHandler Fetch goal state completed Feb 13 15:39:57.096676 waagent[1976]: 2025-02-13T15:39:57.096610Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1976 Feb 13 15:39:57.096840 waagent[1976]: 2025-02-13T15:39:57.096792Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 15:39:57.098465 waagent[1976]: 2025-02-13T15:39:57.098407Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 15:39:57.098865 waagent[1976]: 2025-02-13T15:39:57.098814Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 15:39:57.108800 waagent[1976]: 2025-02-13T15:39:57.108756Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 15:39:57.109035 waagent[1976]: 2025-02-13T15:39:57.108988Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 15:39:57.116021 waagent[1976]: 2025-02-13T15:39:57.115974Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 15:39:57.123423 systemd[1]: Reloading requested from client PID 1991 ('systemctl') (unit waagent.service)... Feb 13 15:39:57.123440 systemd[1]: Reloading... Feb 13 15:39:57.220938 zram_generator::config[2028]: No configuration found. Feb 13 15:39:57.335997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:57.419864 systemd[1]: Reloading finished in 295 ms. Feb 13 15:39:57.449991 waagent[1976]: 2025-02-13T15:39:57.445861Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 15:39:57.455664 systemd[1]: Reloading requested from client PID 2082 ('systemctl') (unit waagent.service)... Feb 13 15:39:57.455681 systemd[1]: Reloading... Feb 13 15:39:57.546945 zram_generator::config[2119]: No configuration found. Feb 13 15:39:57.665758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:57.749332 systemd[1]: Reloading finished in 293 ms. Feb 13 15:39:57.779923 waagent[1976]: 2025-02-13T15:39:57.777491Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 15:39:57.779923 waagent[1976]: 2025-02-13T15:39:57.777749Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 15:39:57.894637 waagent[1976]: 2025-02-13T15:39:57.894535Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 15:39:57.895324 waagent[1976]: 2025-02-13T15:39:57.895256Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 15:39:57.896187 waagent[1976]: 2025-02-13T15:39:57.896127Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 15:39:57.896323 waagent[1976]: 2025-02-13T15:39:57.896274Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:39:57.896787 waagent[1976]: 2025-02-13T15:39:57.896730Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 15:39:57.897028 waagent[1976]: 2025-02-13T15:39:57.896898Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:39:57.897028 waagent[1976]: 2025-02-13T15:39:57.896976Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:39:57.897144 waagent[1976]: 2025-02-13T15:39:57.897090Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:39:57.897320 waagent[1976]: 2025-02-13T15:39:57.897267Z INFO EnvHandler ExtHandler Configure routes Feb 13 15:39:57.897488 waagent[1976]: 2025-02-13T15:39:57.897441Z INFO EnvHandler ExtHandler Gateway:None Feb 13 15:39:57.897792 waagent[1976]: 2025-02-13T15:39:57.897741Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 13 15:39:57.897970 waagent[1976]: 2025-02-13T15:39:57.897889Z INFO EnvHandler ExtHandler Routes:None Feb 13 15:39:57.898777 waagent[1976]: 2025-02-13T15:39:57.898725Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 15:39:57.899103 waagent[1976]: 2025-02-13T15:39:57.899040Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 15:39:57.899278 waagent[1976]: 2025-02-13T15:39:57.899222Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 15:39:57.899278 waagent[1976]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 15:39:57.899278 waagent[1976]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 15:39:57.899278 waagent[1976]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 15:39:57.899278 waagent[1976]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:39:57.899278 waagent[1976]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:39:57.899278 waagent[1976]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:39:57.899688 waagent[1976]: 2025-02-13T15:39:57.899522Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 15:39:57.899872 waagent[1976]: 2025-02-13T15:39:57.899831Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 13 15:39:57.900552 waagent[1976]: 2025-02-13T15:39:57.900506Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 15:39:57.907924 waagent[1976]: 2025-02-13T15:39:57.907249Z INFO ExtHandler ExtHandler Feb 13 15:39:57.907924 waagent[1976]: 2025-02-13T15:39:57.907361Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b09f3eac-5ee6-47af-bf44-2ab1856120a9 correlation 945a21d2-748e-4cc9-a287-3789cc7c7bb0 created: 2025-02-13T15:38:44.710899Z] Feb 13 15:39:57.907924 waagent[1976]: 2025-02-13T15:39:57.907862Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Feb 13 15:39:57.909443 waagent[1976]: 2025-02-13T15:39:57.909384Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Feb 13 15:39:57.924119 waagent[1976]: 2025-02-13T15:39:57.923190Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 15:39:57.924119 waagent[1976]: Executing ['ip', '-a', '-o', 'link']: Feb 13 15:39:57.924119 waagent[1976]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 15:39:57.924119 waagent[1976]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e0:b7:e2 brd ff:ff:ff:ff:ff:ff Feb 13 15:39:57.924119 waagent[1976]: 3: enP6143s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e0:b7:e2 brd ff:ff:ff:ff:ff:ff\ altname enP6143p0s2 Feb 13 15:39:57.924119 waagent[1976]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 15:39:57.924119 waagent[1976]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 15:39:57.924119 waagent[1976]: 2: eth0 inet 10.200.8.18/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 15:39:57.924119 waagent[1976]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 15:39:57.924119 waagent[1976]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 15:39:57.924119 waagent[1976]: 2: eth0 inet6 fe80::6245:bdff:fee0:b7e2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:39:57.924119 waagent[1976]: 3: enP6143s1 inet6 fe80::6245:bdff:fee0:b7e2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:39:57.953680 waagent[1976]: 2025-02-13T15:39:57.953589Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 88ED1B7E-3B0C-4AF0-B1B4-EF424869E884;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 15:39:57.967321 waagent[1976]: 2025-02-13T15:39:57.967223Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
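The interface dump above shows eth0 (and its accelerated sibling enP6143s1) with MAC 60:45:bd:e0:b7:e2 and link-local address fe80::6245:bdff:fee0:b7e2; that address is simply the EUI-64 form of the MAC, i.e. the universal/local bit of the first octet flipped and ff:fe inserted in the middle. An illustrative derivation:

```python
#!/usr/bin/env python3
"""Sketch: derive the EUI-64 link-local IPv6 address from a MAC."""
import ipaddress

def mac_to_link_local(mac: str) -> ipaddress.IPv6Address:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                   # flip universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]      # insert ff:fe
    suffix = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Address((0xFE80 << 112) | suffix)

if __name__ == "__main__":
    print(mac_to_link_local("60:45:bd:e0:b7:e2"))  # fe80::6245:bdff:fee0:b7e2
```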
Current Firewall rules: Feb 13 15:39:57.967321 waagent[1976]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:39:57.967321 waagent[1976]: pkts bytes target prot opt in out source destination Feb 13 15:39:57.967321 waagent[1976]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:39:57.967321 waagent[1976]: pkts bytes target prot opt in out source destination Feb 13 15:39:57.967321 waagent[1976]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:39:57.967321 waagent[1976]: pkts bytes target prot opt in out source destination Feb 13 15:39:57.967321 waagent[1976]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:39:57.967321 waagent[1976]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:39:57.967321 waagent[1976]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:39:57.971005 waagent[1976]: 2025-02-13T15:39:57.970946Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 15:39:57.971005 waagent[1976]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:39:57.971005 waagent[1976]: pkts bytes target prot opt in out source destination Feb 13 15:39:57.971005 waagent[1976]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:39:57.971005 waagent[1976]: pkts bytes target prot opt in out source destination Feb 13 15:39:57.971005 waagent[1976]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:39:57.971005 waagent[1976]: pkts bytes target prot opt in out source destination Feb 13 15:39:57.971005 waagent[1976]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:39:57.971005 waagent[1976]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:39:57.971005 waagent[1976]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:39:57.971392 waagent[1976]: 2025-02-13T15:39:57.971268Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 15:39:58.625541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:39:58.633146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:58.989760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:59.000259 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:59.338855 kubelet[2215]: E0213 15:39:59.338687 2215 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:59.341808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:59.342047 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:40:09.375650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:40:09.382146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:40:09.587607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
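The two OUTPUT-chain listings above show the fabric rules after setup: DNS (tcp/53) to 168.63.129.16 is allowed, traffic owned by UID 0 is allowed, and any other new or invalid connection to the WireServer is dropped. The sketch below applies equivalent rules with plain iptables calls; the exact match flags are an approximation of the listing, not the agent's command line.

```python
#!/usr/bin/env python3
"""Sketch: rules equivalent to the OUTPUT-chain listing above (illustrative)."""
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    # Append in order: the DROP rule must come after the two ACCEPTs.
    subprocess.run(["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER] + rule,
                   check=True)
```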
Feb 13 15:40:09.599274 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:40:09.706266 update_engine[1719]: I20250213 15:40:09.706050 1719 update_attempter.cc:509] Updating boot flags... Feb 13 15:40:09.976663 kubelet[2232]: E0213 15:40:09.976486 2232 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:40:09.979743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:40:09.980000 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:40:10.063010 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2256) Feb 13 15:40:10.409590 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 13 15:40:20.125528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:40:20.135146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:40:20.598837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:40:20.609231 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:40:20.874770 kubelet[2312]: E0213 15:40:20.874621 2312 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:40:20.877793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:40:20.878039 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:40:21.985403 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:40:21.986699 systemd[1]: Started sshd@0-10.200.8.18:22-10.200.16.10:33896.service - OpenSSH per-connection server daemon (10.200.16.10:33896). Feb 13 15:40:24.576695 sshd[2321]: Accepted publickey for core from 10.200.16.10 port 33896 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:40:24.578459 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:24.584138 systemd-logind[1717]: New session 3 of user core. Feb 13 15:40:24.591095 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:40:25.137142 systemd[1]: Started sshd@1-10.200.8.18:22-10.200.16.10:33902.service - OpenSSH per-connection server daemon (10.200.16.10:33902). Feb 13 15:40:25.835134 sshd[2326]: Accepted publickey for core from 10.200.16.10 port 33902 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:40:25.836587 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:25.840651 systemd-logind[1717]: New session 4 of user core. Feb 13 15:40:25.848076 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 15:40:26.473177 sshd[2328]: Connection closed by 10.200.16.10 port 33902 Feb 13 15:40:26.474095 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:26.477253 systemd[1]: sshd@1-10.200.8.18:22-10.200.16.10:33902.service: Deactivated successfully. Feb 13 15:40:26.479239 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:40:26.480699 systemd-logind[1717]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:40:26.481779 systemd-logind[1717]: Removed session 4. Feb 13 15:40:26.665288 systemd[1]: Started sshd@2-10.200.8.18:22-10.200.16.10:33914.service - OpenSSH per-connection server daemon (10.200.16.10:33914). Feb 13 15:40:27.327628 sshd[2333]: Accepted publickey for core from 10.200.16.10 port 33914 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:40:27.330053 sshd-session[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:27.335418 systemd-logind[1717]: New session 5 of user core. Feb 13 15:40:27.342074 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:40:27.781583 sshd[2335]: Connection closed by 10.200.16.10 port 33914 Feb 13 15:40:27.782629 sshd-session[2333]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:27.786510 systemd[1]: sshd@2-10.200.8.18:22-10.200.16.10:33914.service: Deactivated successfully. Feb 13 15:40:27.788406 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:40:27.789200 systemd-logind[1717]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:40:27.790166 systemd-logind[1717]: Removed session 5. Feb 13 15:40:27.900257 systemd[1]: Started sshd@3-10.200.8.18:22-10.200.16.10:33920.service - OpenSSH per-connection server daemon (10.200.16.10:33920). Feb 13 15:40:28.537358 sshd[2340]: Accepted publickey for core from 10.200.16.10 port 33920 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:40:28.539091 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:28.544309 systemd-logind[1717]: New session 6 of user core. Feb 13 15:40:28.552071 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:40:28.994856 sshd[2342]: Connection closed by 10.200.16.10 port 33920 Feb 13 15:40:28.995747 sshd-session[2340]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:29.000181 systemd[1]: sshd@3-10.200.8.18:22-10.200.16.10:33920.service: Deactivated successfully. Feb 13 15:40:29.002022 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:40:29.002684 systemd-logind[1717]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:40:29.003590 systemd-logind[1717]: Removed session 6. Feb 13 15:40:29.114485 systemd[1]: Started sshd@4-10.200.8.18:22-10.200.16.10:52130.service - OpenSSH per-connection server daemon (10.200.16.10:52130). Feb 13 15:40:29.833960 sshd[2347]: Accepted publickey for core from 10.200.16.10 port 52130 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:40:29.835677 sshd-session[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:29.840137 systemd-logind[1717]: New session 7 of user core. Feb 13 15:40:29.847051 systemd[1]: Started session-7.scope - Session 7 of User core. 
Feb 13 15:40:30.215281 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:40:30.215651 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:40:30.231490 sudo[2350]: pam_unix(sudo:session): session closed for user root Feb 13 15:40:30.333926 sshd[2349]: Connection closed by 10.200.16.10 port 52130 Feb 13 15:40:30.335224 sshd-session[2347]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:30.338788 systemd[1]: sshd@4-10.200.8.18:22-10.200.16.10:52130.service: Deactivated successfully. Feb 13 15:40:30.340801 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:40:30.342264 systemd-logind[1717]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:40:30.343465 systemd-logind[1717]: Removed session 7. Feb 13 15:40:30.447275 systemd[1]: Started sshd@5-10.200.8.18:22-10.200.16.10:52138.service - OpenSSH per-connection server daemon (10.200.16.10:52138). Feb 13 15:40:30.980237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:40:30.986141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:40:31.101971 sshd[2355]: Accepted publickey for core from 10.200.16.10 port 52138 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:40:31.103632 sshd-session[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:31.108776 systemd-logind[1717]: New session 8 of user core. Feb 13 15:40:31.113343 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:40:31.344556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:40:31.349462 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:40:31.452442 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:40:31.452799 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:40:31.456076 sudo[2373]: pam_unix(sudo:session): session closed for user root Feb 13 15:40:31.461034 sudo[2372]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:40:31.461376 sudo[2372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:40:31.479322 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:40:31.505078 augenrules[2395]: No rules Feb 13 15:40:31.506507 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:40:31.506798 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:40:31.508770 sudo[2372]: pam_unix(sudo:session): session closed for user root Feb 13 15:40:31.573543 kubelet[2366]: E0213 15:40:31.573479 2366 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:40:31.576941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:40:31.577123 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:40:31.654105 sshd[2360]: Connection closed by 10.200.16.10 port 52138 Feb 13 15:40:31.654947 sshd-session[2355]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:31.659039 systemd[1]: sshd@5-10.200.8.18:22-10.200.16.10:52138.service: Deactivated successfully. Feb 13 15:40:31.660769 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:40:31.661501 systemd-logind[1717]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:40:31.662502 systemd-logind[1717]: Removed session 8. Feb 13 15:40:31.785001 systemd[1]: Started sshd@6-10.200.8.18:22-10.200.16.10:52140.service - OpenSSH per-connection server daemon (10.200.16.10:52140). Feb 13 15:40:32.425761 sshd[2405]: Accepted publickey for core from 10.200.16.10 port 52140 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:40:32.428661 sshd-session[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:32.434400 systemd-logind[1717]: New session 9 of user core. Feb 13 15:40:32.444075 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:40:32.774955 sudo[2408]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:40:32.775322 sudo[2408]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:40:33.441955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:40:33.455207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:40:33.483632 systemd[1]: Reloading requested from client PID 2446 ('systemctl') (unit session-9.scope)... Feb 13 15:40:33.483649 systemd[1]: Reloading... Feb 13 15:40:33.589938 zram_generator::config[2485]: No configuration found. Feb 13 15:40:33.724132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:40:33.815872 systemd[1]: Reloading finished in 331 ms. Feb 13 15:40:33.862384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:40:33.868014 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:40:33.870118 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:40:33.870358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:40:33.876346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:40:34.080548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:40:34.086853 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:40:34.706597 kubelet[2557]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:40:34.706597 kubelet[2557]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:40:34.706597 kubelet[2557]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:40:34.707189 kubelet[2557]: I0213 15:40:34.706658 2557 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:40:34.994479 kubelet[2557]: I0213 15:40:34.994354 2557 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:40:34.994479 kubelet[2557]: I0213 15:40:34.994386 2557 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:40:34.995026 kubelet[2557]: I0213 15:40:34.994703 2557 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:40:35.013998 kubelet[2557]: I0213 15:40:35.013298 2557 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:40:35.028060 kubelet[2557]: I0213 15:40:35.028028 2557 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:40:35.028400 kubelet[2557]: I0213 15:40:35.028289 2557 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:40:35.028534 kubelet[2557]: I0213 15:40:35.028505 2557 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:40:35.029349 kubelet[2557]: I0213 15:40:35.029288 2557 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:40:35.029349 kubelet[2557]: I0213 15:40:35.029321 2557 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:40:35.029485 kubelet[2557]: I0213 15:40:35.029459 2557 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:40:35.029596 kubelet[2557]: I0213 15:40:35.029582 2557 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:40:35.029652 kubelet[2557]: I0213 15:40:35.029604 2557 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:40:35.029652 kubelet[2557]: I0213 15:40:35.029639 2557 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:40:35.029720 kubelet[2557]: I0213 15:40:35.029660 2557 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:40:35.030480 kubelet[2557]: E0213 
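The container-manager nodeConfig dump above includes the hard-eviction thresholds the kubelet starts with: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, and imagefs.available < 15%. A small illustrative helper that evaluates such a threshold against observed capacity (this is not kubelet's eviction manager):

```python
#!/usr/bin/env python3
"""Sketch: evaluate the hard-eviction thresholds from the nodeConfig dump above."""

THRESHOLDS = {
    "memory.available": ("quantity", 100 * 1024 * 1024),  # 100Mi in bytes
    "nodefs.available": ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
}

def should_evict(signal: str, available: float, capacity: float) -> bool:
    kind, value = THRESHOLDS[signal]
    limit = value if kind == "quantity" else value * capacity
    return available < limit

if __name__ == "__main__":
    # Example: 8 GiB node with 500 MiB of free memory -> no memory eviction yet.
    print(should_evict("memory.available", 500 * 1024**2, 8 * 1024**3))
```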
15:40:35.030163 2557 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:35.030480 kubelet[2557]: E0213 15:40:35.030425 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:35.031037 kubelet[2557]: I0213 15:40:35.031014 2557 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:40:35.034633 kubelet[2557]: I0213 15:40:35.034285 2557 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:40:35.034633 kubelet[2557]: W0213 15:40:35.034357 2557 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:40:35.035130 kubelet[2557]: I0213 15:40:35.035024 2557 server.go:1256] "Started kubelet" Feb 13 15:40:35.036675 kubelet[2557]: I0213 15:40:35.036325 2557 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:40:35.042602 kubelet[2557]: W0213 15:40:35.040985 2557 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 15:40:35.042602 kubelet[2557]: E0213 15:40:35.041025 2557 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 15:40:35.042602 kubelet[2557]: W0213 15:40:35.041199 2557 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.200.8.18" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 15:40:35.042602 kubelet[2557]: E0213 15:40:35.041221 2557 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.18" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 15:40:35.045138 kubelet[2557]: I0213 15:40:35.045113 2557 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:40:35.046208 kubelet[2557]: I0213 15:40:35.045943 2557 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:40:35.047930 kubelet[2557]: E0213 15:40:35.047290 2557 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.8.18.1823cec6f6d2b643 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.8.18,UID:10.200.8.18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.8.18,},FirstTimestamp:2025-02-13 15:40:35.034994243 +0000 UTC m=+0.940966293,LastTimestamp:2025-02-13 15:40:35.034994243 +0000 UTC m=+0.940966293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.8.18,}" Feb 13 15:40:35.047930 kubelet[2557]: I0213 15:40:35.047348 2557 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 
Feb 13 15:40:35.047930 kubelet[2557]: I0213 15:40:35.047583 2557 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:40:35.050269 kubelet[2557]: I0213 15:40:35.049542 2557 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:40:35.050269 kubelet[2557]: I0213 15:40:35.049706 2557 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:40:35.050269 kubelet[2557]: I0213 15:40:35.049765 2557 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:40:35.051952 kubelet[2557]: E0213 15:40:35.051916 2557 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:40:35.052091 kubelet[2557]: I0213 15:40:35.052064 2557 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:40:35.052197 kubelet[2557]: I0213 15:40:35.052173 2557 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:40:35.054670 kubelet[2557]: W0213 15:40:35.054649 2557 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 15:40:35.054812 kubelet[2557]: E0213 15:40:35.054799 2557 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 15:40:35.056646 kubelet[2557]: I0213 15:40:35.056623 2557 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:40:35.076700 kubelet[2557]: I0213 15:40:35.076545 2557 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:40:35.078797 kubelet[2557]: I0213 15:40:35.078769 2557 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:40:35.078797 kubelet[2557]: I0213 15:40:35.078794 2557 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:40:35.078972 kubelet[2557]: I0213 15:40:35.078811 2557 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:40:35.080385 kubelet[2557]: I0213 15:40:35.080150 2557 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:40:35.080385 kubelet[2557]: I0213 15:40:35.080180 2557 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:40:35.080385 kubelet[2557]: I0213 15:40:35.080200 2557 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:40:35.080385 kubelet[2557]: E0213 15:40:35.080312 2557 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:40:35.080594 kubelet[2557]: E0213 15:40:35.080477 2557 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.18\" not found" node="10.200.8.18" Feb 13 15:40:35.088569 kubelet[2557]: I0213 15:40:35.088543 2557 policy_none.go:49] "None policy: Start" Feb 13 15:40:35.089202 kubelet[2557]: I0213 15:40:35.089181 2557 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:40:35.089202 kubelet[2557]: I0213 15:40:35.089207 2557 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:40:35.103121 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:40:35.118005 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:40:35.121036 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:40:35.129730 kubelet[2557]: I0213 15:40:35.129699 2557 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:40:35.130441 kubelet[2557]: I0213 15:40:35.130334 2557 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:40:35.132660 kubelet[2557]: E0213 15:40:35.132335 2557 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.18\" not found" Feb 13 15:40:35.151783 kubelet[2557]: I0213 15:40:35.151303 2557 kubelet_node_status.go:73] "Attempting to register node" node="10.200.8.18" Feb 13 15:40:35.159363 kubelet[2557]: I0213 15:40:35.159329 2557 kubelet_node_status.go:76] "Successfully registered node" node="10.200.8.18" Feb 13 15:40:35.174877 kubelet[2557]: E0213 15:40:35.174840 2557 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.18\" not found" Feb 13 15:40:35.275539 kubelet[2557]: E0213 15:40:35.275359 2557 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.18\" not found" Feb 13 15:40:35.376114 kubelet[2557]: E0213 15:40:35.376048 2557 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.18\" not found" Feb 13 15:40:35.476791 kubelet[2557]: E0213 15:40:35.476731 2557 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.18\" not found" Feb 13 15:40:35.577882 kubelet[2557]: E0213 15:40:35.577715 2557 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.18\" not found" Feb 13 15:40:35.609386 sudo[2408]: pam_unix(sudo:session): session closed for user root Feb 13 15:40:35.678518 kubelet[2557]: E0213 15:40:35.678449 2557 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.18\" not found" Feb 13 15:40:35.715926 sshd[2407]: Connection closed by 10.200.16.10 port 52140 Feb 13 15:40:35.716776 sshd-session[2405]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:35.720413 
systemd[1]: sshd@6-10.200.8.18:22-10.200.16.10:52140.service: Deactivated successfully. Feb 13 15:40:35.723060 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:40:35.724762 systemd-logind[1717]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:40:35.726088 systemd-logind[1717]: Removed session 9. Feb 13 15:40:35.779110 kubelet[2557]: E0213 15:40:35.779053 2557 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.18\" not found" Feb 13 15:40:35.879948 kubelet[2557]: E0213 15:40:35.879888 2557 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.18\" not found" Feb 13 15:40:35.981138 kubelet[2557]: I0213 15:40:35.981102 2557 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 15:40:35.981671 containerd[1726]: time="2025-02-13T15:40:35.981625003Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:40:35.982407 kubelet[2557]: I0213 15:40:35.981975 2557 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 15:40:35.997103 kubelet[2557]: I0213 15:40:35.997045 2557 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 15:40:35.997340 kubelet[2557]: W0213 15:40:35.997268 2557 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Feb 13 15:40:35.997398 kubelet[2557]: W0213 15:40:35.997271 2557 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Feb 13 15:40:35.997398 kubelet[2557]: W0213 15:40:35.997292 2557 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Feb 13 15:40:35.997398 kubelet[2557]: W0213 15:40:35.997314 2557 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Feb 13 15:40:36.030677 kubelet[2557]: I0213 15:40:36.030545 2557 apiserver.go:52] "Watching apiserver" Feb 13 15:40:36.030677 kubelet[2557]: E0213 15:40:36.030588 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:36.041057 kubelet[2557]: I0213 15:40:36.041018 2557 topology_manager.go:215] "Topology Admit Handler" podUID="dfb53059-2a78-4c47-8e41-134006fedfef" podNamespace="calico-system" podName="calico-node-hzzdf" Feb 13 15:40:36.041549 kubelet[2557]: I0213 15:40:36.041198 2557 topology_manager.go:215] "Topology Admit Handler" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" podNamespace="calico-system" podName="csi-node-driver-wl2j2" Feb 13 15:40:36.041549 kubelet[2557]: I0213 15:40:36.041288 2557 topology_manager.go:215] "Topology Admit Handler" podUID="fad40d8c-8728-474d-999a-02bcbaa56762" podNamespace="kube-system" 
podName="kube-proxy-rr46t" Feb 13 15:40:36.042010 kubelet[2557]: E0213 15:40:36.041878 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:40:36.052741 kubelet[2557]: I0213 15:40:36.052536 2557 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:40:36.053353 systemd[1]: Created slice kubepods-besteffort-podfad40d8c_8728_474d_999a_02bcbaa56762.slice - libcontainer container kubepods-besteffort-podfad40d8c_8728_474d_999a_02bcbaa56762.slice. Feb 13 15:40:36.056223 kubelet[2557]: I0213 15:40:36.056201 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dfb53059-2a78-4c47-8e41-134006fedfef-node-certs\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.056570 kubelet[2557]: I0213 15:40:36.056240 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dfb53059-2a78-4c47-8e41-134006fedfef-var-run-calico\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.056570 kubelet[2557]: I0213 15:40:36.056269 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dfb53059-2a78-4c47-8e41-134006fedfef-cni-bin-dir\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.056570 kubelet[2557]: I0213 15:40:36.056304 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/357ee354-ebda-4e13-a2f3-9c1549b2abf5-varrun\") pod \"csi-node-driver-wl2j2\" (UID: \"357ee354-ebda-4e13-a2f3-9c1549b2abf5\") " pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:36.056570 kubelet[2557]: I0213 15:40:36.056328 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/357ee354-ebda-4e13-a2f3-9c1549b2abf5-registration-dir\") pod \"csi-node-driver-wl2j2\" (UID: \"357ee354-ebda-4e13-a2f3-9c1549b2abf5\") " pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:36.056570 kubelet[2557]: I0213 15:40:36.056352 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ml7v\" (UniqueName: \"kubernetes.io/projected/fad40d8c-8728-474d-999a-02bcbaa56762-kube-api-access-9ml7v\") pod \"kube-proxy-rr46t\" (UID: \"fad40d8c-8728-474d-999a-02bcbaa56762\") " pod="kube-system/kube-proxy-rr46t" Feb 13 15:40:36.056778 kubelet[2557]: I0213 15:40:36.056378 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfb53059-2a78-4c47-8e41-134006fedfef-xtables-lock\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.056778 kubelet[2557]: I0213 15:40:36.056400 2557 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fad40d8c-8728-474d-999a-02bcbaa56762-xtables-lock\") pod \"kube-proxy-rr46t\" (UID: \"fad40d8c-8728-474d-999a-02bcbaa56762\") " pod="kube-system/kube-proxy-rr46t" Feb 13 15:40:36.056778 kubelet[2557]: I0213 15:40:36.056417 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb53059-2a78-4c47-8e41-134006fedfef-tigera-ca-bundle\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.056778 kubelet[2557]: I0213 15:40:36.056435 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/357ee354-ebda-4e13-a2f3-9c1549b2abf5-kubelet-dir\") pod \"csi-node-driver-wl2j2\" (UID: \"357ee354-ebda-4e13-a2f3-9c1549b2abf5\") " pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:36.056778 kubelet[2557]: I0213 15:40:36.056459 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/357ee354-ebda-4e13-a2f3-9c1549b2abf5-socket-dir\") pod \"csi-node-driver-wl2j2\" (UID: \"357ee354-ebda-4e13-a2f3-9c1549b2abf5\") " pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:36.057064 kubelet[2557]: I0213 15:40:36.056493 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fad40d8c-8728-474d-999a-02bcbaa56762-kube-proxy\") pod \"kube-proxy-rr46t\" (UID: \"fad40d8c-8728-474d-999a-02bcbaa56762\") " pod="kube-system/kube-proxy-rr46t" Feb 13 15:40:36.057064 kubelet[2557]: I0213 15:40:36.056534 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dfb53059-2a78-4c47-8e41-134006fedfef-flexvol-driver-host\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.057064 kubelet[2557]: I0213 15:40:36.056563 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dfb53059-2a78-4c47-8e41-134006fedfef-policysync\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.057064 kubelet[2557]: I0213 15:40:36.056595 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dfb53059-2a78-4c47-8e41-134006fedfef-var-lib-calico\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.057064 kubelet[2557]: I0213 15:40:36.056648 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dfb53059-2a78-4c47-8e41-134006fedfef-cni-net-dir\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.057211 kubelet[2557]: I0213 15:40:36.056678 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dfb53059-2a78-4c47-8e41-134006fedfef-cni-log-dir\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.057211 kubelet[2557]: I0213 15:40:36.056708 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pwcl\" (UniqueName: \"kubernetes.io/projected/dfb53059-2a78-4c47-8e41-134006fedfef-kube-api-access-6pwcl\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.057211 kubelet[2557]: I0213 15:40:36.056764 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfjmp\" (UniqueName: \"kubernetes.io/projected/357ee354-ebda-4e13-a2f3-9c1549b2abf5-kube-api-access-zfjmp\") pod \"csi-node-driver-wl2j2\" (UID: \"357ee354-ebda-4e13-a2f3-9c1549b2abf5\") " pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:36.057211 kubelet[2557]: I0213 15:40:36.056792 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad40d8c-8728-474d-999a-02bcbaa56762-lib-modules\") pod \"kube-proxy-rr46t\" (UID: \"fad40d8c-8728-474d-999a-02bcbaa56762\") " pod="kube-system/kube-proxy-rr46t" Feb 13 15:40:36.057211 kubelet[2557]: I0213 15:40:36.056823 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfb53059-2a78-4c47-8e41-134006fedfef-lib-modules\") pod \"calico-node-hzzdf\" (UID: \"dfb53059-2a78-4c47-8e41-134006fedfef\") " pod="calico-system/calico-node-hzzdf" Feb 13 15:40:36.065425 systemd[1]: Created slice kubepods-besteffort-poddfb53059_2a78_4c47_8e41_134006fedfef.slice - libcontainer container kubepods-besteffort-poddfb53059_2a78_4c47_8e41_134006fedfef.slice. Feb 13 15:40:36.162028 kubelet[2557]: E0213 15:40:36.161638 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.162028 kubelet[2557]: W0213 15:40:36.161670 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.162028 kubelet[2557]: E0213 15:40:36.161699 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.164933 kubelet[2557]: E0213 15:40:36.164034 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.164933 kubelet[2557]: W0213 15:40:36.164056 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.164933 kubelet[2557]: E0213 15:40:36.164102 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:40:36.164933 kubelet[2557]: E0213 15:40:36.164368 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.164933 kubelet[2557]: W0213 15:40:36.164379 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.164933 kubelet[2557]: E0213 15:40:36.164406 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.164933 kubelet[2557]: E0213 15:40:36.164653 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.164933 kubelet[2557]: W0213 15:40:36.164672 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.164933 kubelet[2557]: E0213 15:40:36.164690 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.166647 kubelet[2557]: E0213 15:40:36.166629 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.166771 kubelet[2557]: W0213 15:40:36.166756 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.166851 kubelet[2557]: E0213 15:40:36.166841 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.167147 kubelet[2557]: E0213 15:40:36.167131 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.167257 kubelet[2557]: W0213 15:40:36.167243 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.167343 kubelet[2557]: E0213 15:40:36.167332 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.167611 kubelet[2557]: E0213 15:40:36.167597 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.167875 kubelet[2557]: W0213 15:40:36.167696 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.167875 kubelet[2557]: E0213 15:40:36.167718 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:40:36.168132 kubelet[2557]: E0213 15:40:36.168119 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.168215 kubelet[2557]: W0213 15:40:36.168196 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.168289 kubelet[2557]: E0213 15:40:36.168280 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.168551 kubelet[2557]: E0213 15:40:36.168537 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.168736 kubelet[2557]: W0213 15:40:36.168641 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.168736 kubelet[2557]: E0213 15:40:36.168663 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.169071 kubelet[2557]: E0213 15:40:36.168964 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.169071 kubelet[2557]: W0213 15:40:36.168978 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.169071 kubelet[2557]: E0213 15:40:36.168997 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.169356 kubelet[2557]: E0213 15:40:36.169342 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.169607 kubelet[2557]: W0213 15:40:36.169457 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.169607 kubelet[2557]: E0213 15:40:36.169479 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.169841 kubelet[2557]: E0213 15:40:36.169828 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.169925 kubelet[2557]: W0213 15:40:36.169913 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.170103 kubelet[2557]: E0213 15:40:36.169993 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:40:36.170293 kubelet[2557]: E0213 15:40:36.170280 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.170469 kubelet[2557]: W0213 15:40:36.170366 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.170469 kubelet[2557]: E0213 15:40:36.170390 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.170783 kubelet[2557]: E0213 15:40:36.170678 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.170783 kubelet[2557]: W0213 15:40:36.170691 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.170783 kubelet[2557]: E0213 15:40:36.170708 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.171225 kubelet[2557]: E0213 15:40:36.171055 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.171225 kubelet[2557]: W0213 15:40:36.171069 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.171225 kubelet[2557]: E0213 15:40:36.171087 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.172945 kubelet[2557]: E0213 15:40:36.171507 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.172945 kubelet[2557]: W0213 15:40:36.171521 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.172945 kubelet[2557]: E0213 15:40:36.171548 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.173406 kubelet[2557]: E0213 15:40:36.173386 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.173496 kubelet[2557]: W0213 15:40:36.173484 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.173569 kubelet[2557]: E0213 15:40:36.173560 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:40:36.173809 kubelet[2557]: E0213 15:40:36.173797 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.173889 kubelet[2557]: W0213 15:40:36.173878 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.174004 kubelet[2557]: E0213 15:40:36.173994 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.174244 kubelet[2557]: E0213 15:40:36.174232 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.174328 kubelet[2557]: W0213 15:40:36.174318 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.174398 kubelet[2557]: E0213 15:40:36.174391 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.174669 kubelet[2557]: E0213 15:40:36.174658 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.174749 kubelet[2557]: W0213 15:40:36.174739 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.174817 kubelet[2557]: E0213 15:40:36.174809 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.180468 kubelet[2557]: E0213 15:40:36.180453 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.180577 kubelet[2557]: W0213 15:40:36.180562 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.180662 kubelet[2557]: E0213 15:40:36.180652 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:40:36.183701 kubelet[2557]: E0213 15:40:36.183679 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.183701 kubelet[2557]: W0213 15:40:36.183698 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.192032 kubelet[2557]: E0213 15:40:36.192003 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.192120 kubelet[2557]: W0213 15:40:36.192038 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.192120 kubelet[2557]: E0213 15:40:36.192061 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.193262 kubelet[2557]: E0213 15:40:36.193244 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.193331 kubelet[2557]: W0213 15:40:36.193277 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.193331 kubelet[2557]: E0213 15:40:36.193297 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.195929 kubelet[2557]: E0213 15:40:36.195299 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.195929 kubelet[2557]: W0213 15:40:36.195314 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.195929 kubelet[2557]: E0213 15:40:36.195331 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.195929 kubelet[2557]: E0213 15:40:36.195588 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.195929 kubelet[2557]: W0213 15:40:36.195598 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.195929 kubelet[2557]: E0213 15:40:36.195614 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:40:36.195929 kubelet[2557]: E0213 15:40:36.195828 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.195929 kubelet[2557]: W0213 15:40:36.195837 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.195929 kubelet[2557]: E0213 15:40:36.195853 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.198089 kubelet[2557]: E0213 15:40:36.196619 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:40:36.198089 kubelet[2557]: W0213 15:40:36.196633 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:40:36.198089 kubelet[2557]: E0213 15:40:36.196652 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.198089 kubelet[2557]: E0213 15:40:36.196695 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:40:36.364195 containerd[1726]: time="2025-02-13T15:40:36.364143674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rr46t,Uid:fad40d8c-8728-474d-999a-02bcbaa56762,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:36.368806 containerd[1726]: time="2025-02-13T15:40:36.368763800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzzdf,Uid:dfb53059-2a78-4c47-8e41-134006fedfef,Namespace:calico-system,Attempt:0,}" Feb 13 15:40:36.972858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223651064.mount: Deactivated successfully. 
Feb 13 15:40:37.001624 containerd[1726]: time="2025-02-13T15:40:37.001557122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:40:37.012005 containerd[1726]: time="2025-02-13T15:40:37.011895405Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 15:40:37.014483 containerd[1726]: time="2025-02-13T15:40:37.014439274Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:40:37.018683 containerd[1726]: time="2025-02-13T15:40:37.018639989Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:40:37.020152 containerd[1726]: time="2025-02-13T15:40:37.020099229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:40:37.024621 containerd[1726]: time="2025-02-13T15:40:37.024566851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:40:37.025969 containerd[1726]: time="2025-02-13T15:40:37.025377474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 661.082996ms" Feb 13 15:40:37.029145 containerd[1726]: time="2025-02-13T15:40:37.029111376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.221173ms" Feb 13 15:40:37.031618 kubelet[2557]: E0213 15:40:37.031588 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:37.237125 containerd[1726]: time="2025-02-13T15:40:37.236742659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:37.238420 containerd[1726]: time="2025-02-13T15:40:37.237805688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:37.238420 containerd[1726]: time="2025-02-13T15:40:37.237849190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:37.238420 containerd[1726]: time="2025-02-13T15:40:37.238016094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:37.239595 containerd[1726]: time="2025-02-13T15:40:37.235503425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:37.239595 containerd[1726]: time="2025-02-13T15:40:37.239358831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:37.239595 containerd[1726]: time="2025-02-13T15:40:37.239377431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:37.239595 containerd[1726]: time="2025-02-13T15:40:37.239456434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:37.361768 systemd[1]: run-containerd-runc-k8s.io-692d4cdd1b798127c64f22c4b9ce239774963166ff33bc09bb7940901783ac76-runc.UfsMra.mount: Deactivated successfully. Feb 13 15:40:37.372082 systemd[1]: Started cri-containerd-17529e362279cfa53e8ccf3310169ad96f4dd3224b7070667b87e2d1816c8098.scope - libcontainer container 17529e362279cfa53e8ccf3310169ad96f4dd3224b7070667b87e2d1816c8098. Feb 13 15:40:37.375886 systemd[1]: Started cri-containerd-692d4cdd1b798127c64f22c4b9ce239774963166ff33bc09bb7940901783ac76.scope - libcontainer container 692d4cdd1b798127c64f22c4b9ce239774963166ff33bc09bb7940901783ac76. Feb 13 15:40:37.413001 containerd[1726]: time="2025-02-13T15:40:37.412871680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzzdf,Uid:dfb53059-2a78-4c47-8e41-134006fedfef,Namespace:calico-system,Attempt:0,} returns sandbox id \"17529e362279cfa53e8ccf3310169ad96f4dd3224b7070667b87e2d1816c8098\"" Feb 13 15:40:37.416882 containerd[1726]: time="2025-02-13T15:40:37.416696485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:40:37.419010 containerd[1726]: time="2025-02-13T15:40:37.418972747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rr46t,Uid:fad40d8c-8728-474d-999a-02bcbaa56762,Namespace:kube-system,Attempt:0,} returns sandbox id \"692d4cdd1b798127c64f22c4b9ce239774963166ff33bc09bb7940901783ac76\"" Feb 13 15:40:38.032176 kubelet[2557]: E0213 15:40:38.032114 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:38.080730 kubelet[2557]: E0213 15:40:38.080617 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:40:38.852673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955039405.mount: Deactivated successfully. 
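The kubelet error "Unable to read config path ... /etc/kubernetes/manifests", which recurs roughly once a second for the rest of the log, only means the static-pod manifest directory has not been created; as the message itself says, the path is ignored. A small sketch of that behaviour, using a hypothetical readStaticPodDir helper rather than the kubelet's real file source:

```go
package main

import (
	"fmt"
	"os"
)

// readStaticPodDir mimics the logged behaviour: a missing manifest directory
// is reported and then ignored instead of being treated as fatal.
func readStaticPodDir(dir string) ([]string, error) {
	if _, err := os.Stat(dir); os.IsNotExist(err) {
		fmt.Printf("Unable to read config path %q: path does not exist, ignoring\n", dir)
		return nil, nil
	}
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var manifests []string
	for _, e := range entries {
		manifests = append(manifests, e.Name())
	}
	return manifests, nil
}

func main() {
	manifests, _ := readStaticPodDir("/etc/kubernetes/manifests")
	fmt.Println("static pod manifests:", manifests)
}
```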
Feb 13 15:40:38.990413 containerd[1726]: time="2025-02-13T15:40:38.990352961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:38.994223 containerd[1726]: time="2025-02-13T15:40:38.994155565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 15:40:38.996698 containerd[1726]: time="2025-02-13T15:40:38.996647533Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:39.000684 containerd[1726]: time="2025-02-13T15:40:39.000646643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:39.001747 containerd[1726]: time="2025-02-13T15:40:39.001291760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.584533373s" Feb 13 15:40:39.001747 containerd[1726]: time="2025-02-13T15:40:39.001331061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 15:40:39.002617 containerd[1726]: time="2025-02-13T15:40:39.002594196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:40:39.003594 containerd[1726]: time="2025-02-13T15:40:39.003563822Z" level=info msg="CreateContainer within sandbox \"17529e362279cfa53e8ccf3310169ad96f4dd3224b7070667b87e2d1816c8098\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:40:39.032716 kubelet[2557]: E0213 15:40:39.032676 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:39.049557 containerd[1726]: time="2025-02-13T15:40:39.049451578Z" level=info msg="CreateContainer within sandbox \"17529e362279cfa53e8ccf3310169ad96f4dd3224b7070667b87e2d1816c8098\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7467f250df03af0ba034a4057a7088cf44976e4704535e1129263d31ea9ab3e7\"" Feb 13 15:40:39.051717 containerd[1726]: time="2025-02-13T15:40:39.051595837Z" level=info msg="StartContainer for \"7467f250df03af0ba034a4057a7088cf44976e4704535e1129263d31ea9ab3e7\"" Feb 13 15:40:39.087062 systemd[1]: Started cri-containerd-7467f250df03af0ba034a4057a7088cf44976e4704535e1129263d31ea9ab3e7.scope - libcontainer container 7467f250df03af0ba034a4057a7088cf44976e4704535e1129263d31ea9ab3e7. Feb 13 15:40:39.124492 containerd[1726]: time="2025-02-13T15:40:39.124365429Z" level=info msg="StartContainer for \"7467f250df03af0ba034a4057a7088cf44976e4704535e1129263d31ea9ab3e7\" returns successfully" Feb 13 15:40:39.134027 systemd[1]: cri-containerd-7467f250df03af0ba034a4057a7088cf44976e4704535e1129263d31ea9ab3e7.scope: Deactivated successfully. 
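The reported pull time for pod2daemon-flexvol ("1.584533373s") lines up with the gap between the PullImage request at 15:40:37.416696485Z and the Pulled event at 15:40:39.001291760Z. A quick sketch recomputing it from those two timestamps (containerd measures the duration internally, so this is only an approximation of the logged figure):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the PullImage and Pulled log entries above.
	started, err1 := time.Parse(time.RFC3339Nano, "2025-02-13T15:40:37.416696485Z")
	pulled, err2 := time.Parse(time.RFC3339Nano, "2025-02-13T15:40:39.001291760Z")
	if err1 != nil || err2 != nil {
		panic("bad timestamp")
	}
	// Prints 1.584595275s, within a fraction of a millisecond of the logged value.
	fmt.Println(pulled.Sub(started))
}
```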
Feb 13 15:40:39.358685 containerd[1726]: time="2025-02-13T15:40:39.358587740Z" level=info msg="shim disconnected" id=7467f250df03af0ba034a4057a7088cf44976e4704535e1129263d31ea9ab3e7 namespace=k8s.io Feb 13 15:40:39.358685 containerd[1726]: time="2025-02-13T15:40:39.358669243Z" level=warning msg="cleaning up after shim disconnected" id=7467f250df03af0ba034a4057a7088cf44976e4704535e1129263d31ea9ab3e7 namespace=k8s.io Feb 13 15:40:39.358685 containerd[1726]: time="2025-02-13T15:40:39.358686743Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:40:39.817439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7467f250df03af0ba034a4057a7088cf44976e4704535e1129263d31ea9ab3e7-rootfs.mount: Deactivated successfully. Feb 13 15:40:40.033431 kubelet[2557]: E0213 15:40:40.033349 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:40.081068 kubelet[2557]: E0213 15:40:40.080951 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:40:40.339889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131401405.mount: Deactivated successfully. Feb 13 15:40:40.800009 containerd[1726]: time="2025-02-13T15:40:40.799951195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:40.803664 containerd[1726]: time="2025-02-13T15:40:40.803606295Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620600" Feb 13 15:40:40.806926 containerd[1726]: time="2025-02-13T15:40:40.806853784Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:40.810661 containerd[1726]: time="2025-02-13T15:40:40.810608786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:40.811451 containerd[1726]: time="2025-02-13T15:40:40.811246804Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.808542205s" Feb 13 15:40:40.811451 containerd[1726]: time="2025-02-13T15:40:40.811281905Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 15:40:40.812485 containerd[1726]: time="2025-02-13T15:40:40.812302333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:40:40.813262 containerd[1726]: time="2025-02-13T15:40:40.813235258Z" level=info msg="CreateContainer within sandbox \"692d4cdd1b798127c64f22c4b9ce239774963166ff33bc09bb7940901783ac76\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:40:40.893243 containerd[1726]: time="2025-02-13T15:40:40.893187747Z" 
level=info msg="CreateContainer within sandbox \"692d4cdd1b798127c64f22c4b9ce239774963166ff33bc09bb7940901783ac76\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7576f31387a65fd444af4e5e829bf41884e1f99d5e7355be64170f5cd1cd2b4\"" Feb 13 15:40:40.893919 containerd[1726]: time="2025-02-13T15:40:40.893825364Z" level=info msg="StartContainer for \"e7576f31387a65fd444af4e5e829bf41884e1f99d5e7355be64170f5cd1cd2b4\"" Feb 13 15:40:40.930072 systemd[1]: Started cri-containerd-e7576f31387a65fd444af4e5e829bf41884e1f99d5e7355be64170f5cd1cd2b4.scope - libcontainer container e7576f31387a65fd444af4e5e829bf41884e1f99d5e7355be64170f5cd1cd2b4. Feb 13 15:40:40.959544 containerd[1726]: time="2025-02-13T15:40:40.959501062Z" level=info msg="StartContainer for \"e7576f31387a65fd444af4e5e829bf41884e1f99d5e7355be64170f5cd1cd2b4\" returns successfully" Feb 13 15:40:41.033582 kubelet[2557]: E0213 15:40:41.033520 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:41.119412 kubelet[2557]: I0213 15:40:41.118782 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rr46t" podStartSLOduration=2.7271247819999997 podStartE2EDuration="6.118735821s" podCreationTimestamp="2025-02-13 15:40:35 +0000 UTC" firstStartedPulling="2025-02-13 15:40:37.42017128 +0000 UTC m=+3.326143330" lastFinishedPulling="2025-02-13 15:40:40.811782319 +0000 UTC m=+6.717754369" observedRunningTime="2025-02-13 15:40:41.118527415 +0000 UTC m=+7.024499465" watchObservedRunningTime="2025-02-13 15:40:41.118735821 +0000 UTC m=+7.024707971" Feb 13 15:40:42.034407 kubelet[2557]: E0213 15:40:42.034373 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:42.081202 kubelet[2557]: E0213 15:40:42.081142 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:40:43.035507 kubelet[2557]: E0213 15:40:43.035435 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:44.035717 kubelet[2557]: E0213 15:40:44.035602 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:44.082756 kubelet[2557]: E0213 15:40:44.082114 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:40:44.833063 containerd[1726]: time="2025-02-13T15:40:44.833001426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:44.835465 containerd[1726]: time="2025-02-13T15:40:44.835393295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 15:40:44.838725 containerd[1726]: time="2025-02-13T15:40:44.838657889Z" level=info msg="ImageCreate event 
name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:44.843704 containerd[1726]: time="2025-02-13T15:40:44.843647833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:44.844891 containerd[1726]: time="2025-02-13T15:40:44.844374054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.032036721s" Feb 13 15:40:44.844891 containerd[1726]: time="2025-02-13T15:40:44.844411055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 15:40:44.846712 containerd[1726]: time="2025-02-13T15:40:44.846679921Z" level=info msg="CreateContainer within sandbox \"17529e362279cfa53e8ccf3310169ad96f4dd3224b7070667b87e2d1816c8098\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:40:44.897871 containerd[1726]: time="2025-02-13T15:40:44.897817897Z" level=info msg="CreateContainer within sandbox \"17529e362279cfa53e8ccf3310169ad96f4dd3224b7070667b87e2d1816c8098\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fa7a7dcb17cac34fdf15d6afc5a89132a373d04a73797718fa083d4ecea8deaa\"" Feb 13 15:40:44.898581 containerd[1726]: time="2025-02-13T15:40:44.898447515Z" level=info msg="StartContainer for \"fa7a7dcb17cac34fdf15d6afc5a89132a373d04a73797718fa083d4ecea8deaa\"" Feb 13 15:40:44.928686 systemd[1]: run-containerd-runc-k8s.io-fa7a7dcb17cac34fdf15d6afc5a89132a373d04a73797718fa083d4ecea8deaa-runc.f7JK77.mount: Deactivated successfully. Feb 13 15:40:44.936053 systemd[1]: Started cri-containerd-fa7a7dcb17cac34fdf15d6afc5a89132a373d04a73797718fa083d4ecea8deaa.scope - libcontainer container fa7a7dcb17cac34fdf15d6afc5a89132a373d04a73797718fa083d4ecea8deaa. 
Feb 13 15:40:44.967182 containerd[1726]: time="2025-02-13T15:40:44.966296974Z" level=info msg="StartContainer for \"fa7a7dcb17cac34fdf15d6afc5a89132a373d04a73797718fa083d4ecea8deaa\" returns successfully" Feb 13 15:40:45.036265 kubelet[2557]: E0213 15:40:45.036207 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:46.036747 kubelet[2557]: E0213 15:40:46.036690 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:46.081264 kubelet[2557]: E0213 15:40:46.081185 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:40:46.331474 containerd[1726]: time="2025-02-13T15:40:46.331090471Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:40:46.333291 systemd[1]: cri-containerd-fa7a7dcb17cac34fdf15d6afc5a89132a373d04a73797718fa083d4ecea8deaa.scope: Deactivated successfully. Feb 13 15:40:46.337962 kubelet[2557]: I0213 15:40:46.337653 2557 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:40:46.356463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa7a7dcb17cac34fdf15d6afc5a89132a373d04a73797718fa083d4ecea8deaa-rootfs.mount: Deactivated successfully. Feb 13 15:40:47.037607 kubelet[2557]: E0213 15:40:47.037533 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:48.038061 kubelet[2557]: E0213 15:40:48.037925 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:48.085867 systemd[1]: Created slice kubepods-besteffort-pod357ee354_ebda_4e13_a2f3_9c1549b2abf5.slice - libcontainer container kubepods-besteffort-pod357ee354_ebda_4e13_a2f3_9c1549b2abf5.slice. 
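The containerd reload error at 15:40:46 is also expected at this stage: the filesystem event was for calico-kubeconfig, which is not itself a network config, and no *.conf, *.conflist or *.json file exists yet in /etc/cni/net.d, so the CNI plugin stays uninitialized. A rough sketch of that kind of directory check (an illustration, not containerd's actual config loader):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether the directory contains any file that looks like
// a CNI network configuration. Files such as calico-kubeconfig do not count.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/cni/net.d")
	if err != nil || !ok {
		fmt.Println("cni config load failed: no network config found in /etc/cni/net.d")
	}
}
```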
Feb 13 15:40:48.088453 containerd[1726]: time="2025-02-13T15:40:48.088414299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:0,}" Feb 13 15:40:48.765311 containerd[1726]: time="2025-02-13T15:40:48.765232737Z" level=info msg="shim disconnected" id=fa7a7dcb17cac34fdf15d6afc5a89132a373d04a73797718fa083d4ecea8deaa namespace=k8s.io Feb 13 15:40:48.765311 containerd[1726]: time="2025-02-13T15:40:48.765295539Z" level=warning msg="cleaning up after shim disconnected" id=fa7a7dcb17cac34fdf15d6afc5a89132a373d04a73797718fa083d4ecea8deaa namespace=k8s.io Feb 13 15:40:48.765311 containerd[1726]: time="2025-02-13T15:40:48.765309439Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:40:48.839245 containerd[1726]: time="2025-02-13T15:40:48.839186672Z" level=error msg="Failed to destroy network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:48.839996 containerd[1726]: time="2025-02-13T15:40:48.839551482Z" level=error msg="encountered an error cleaning up failed sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:48.839996 containerd[1726]: time="2025-02-13T15:40:48.839645585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:48.841725 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1-shm.mount: Deactivated successfully. 
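Every sandbox failure from here on bottoms out in the same condition: the Calico CNI plugin needs the node name that calico/node writes to /var/lib/calico/nodename, and that file cannot exist until the calico-node container is running. A hedged sketch of that lookup, mirroring the logged message rather than Calico's exact code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFromFile reads the node name that calico/node is expected to have
// written; if the file is missing, pod networking cannot be set up yet.
func nodenameFromFile(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("%v: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if _, err := nodenameFromFile("/var/lib/calico/nodename"); err != nil {
		fmt.Println(`plugin type="calico" failed (add):`, err)
	}
}
```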
Feb 13 15:40:48.842561 kubelet[2557]: E0213 15:40:48.841864 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:48.842561 kubelet[2557]: E0213 15:40:48.841960 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:48.842561 kubelet[2557]: E0213 15:40:48.841990 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:48.842725 kubelet[2557]: E0213 15:40:48.842078 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:40:49.039064 kubelet[2557]: E0213 15:40:49.038881 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:49.122941 kubelet[2557]: I0213 15:40:49.122880 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1" Feb 13 15:40:49.124063 containerd[1726]: time="2025-02-13T15:40:49.123718185Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:40:49.124063 containerd[1726]: time="2025-02-13T15:40:49.124045595Z" level=info msg="Ensure that sandbox 89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1 in task-service has been cleanup successfully" Feb 13 15:40:49.127680 containerd[1726]: time="2025-02-13T15:40:49.125989151Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:40:49.127680 containerd[1726]: time="2025-02-13T15:40:49.126020252Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:40:49.127680 containerd[1726]: time="2025-02-13T15:40:49.126578868Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:1,}" Feb 13 15:40:49.126826 systemd[1]: run-netns-cni\x2d8ce3438b\x2dc6fd\x2da15a\x2d79f4\x2d859f13442271.mount: Deactivated successfully. Feb 13 15:40:49.130407 containerd[1726]: time="2025-02-13T15:40:49.130281175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:40:49.244439 containerd[1726]: time="2025-02-13T15:40:49.244378868Z" level=error msg="Failed to destroy network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:49.244744 containerd[1726]: time="2025-02-13T15:40:49.244713978Z" level=error msg="encountered an error cleaning up failed sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:49.244838 containerd[1726]: time="2025-02-13T15:40:49.244786480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:49.245143 kubelet[2557]: E0213 15:40:49.245098 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:49.245278 kubelet[2557]: E0213 15:40:49.245175 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:49.245278 kubelet[2557]: E0213 15:40:49.245205 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:49.245416 kubelet[2557]: E0213 15:40:49.245283 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:40:49.784568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a-shm.mount: Deactivated successfully. Feb 13 15:40:50.039710 kubelet[2557]: E0213 15:40:50.039550 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:50.132565 kubelet[2557]: I0213 15:40:50.132521 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a" Feb 13 15:40:50.133664 containerd[1726]: time="2025-02-13T15:40:50.133542136Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:40:50.134258 containerd[1726]: time="2025-02-13T15:40:50.133965748Z" level=info msg="Ensure that sandbox d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a in task-service has been cleanup successfully" Feb 13 15:40:50.136064 containerd[1726]: time="2025-02-13T15:40:50.136000007Z" level=info msg="TearDown network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" successfully" Feb 13 15:40:50.136064 containerd[1726]: time="2025-02-13T15:40:50.136046108Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" returns successfully" Feb 13 15:40:50.136455 containerd[1726]: time="2025-02-13T15:40:50.136405718Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:40:50.136587 containerd[1726]: time="2025-02-13T15:40:50.136533222Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:40:50.136587 containerd[1726]: time="2025-02-13T15:40:50.136553123Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:40:50.137856 containerd[1726]: time="2025-02-13T15:40:50.137822359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:2,}" Feb 13 15:40:50.138516 systemd[1]: run-netns-cni\x2dd5ad5c42\x2d0b44\x2dc6f8\x2d0db4\x2de4ce8ddaf6bc.mount: Deactivated successfully. Feb 13 15:40:50.366195 containerd[1726]: time="2025-02-13T15:40:50.365584634Z" level=error msg="Failed to destroy network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:50.367823 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6-shm.mount: Deactivated successfully. 
Feb 13 15:40:50.368251 containerd[1726]: time="2025-02-13T15:40:50.368202610Z" level=error msg="encountered an error cleaning up failed sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:50.368351 containerd[1726]: time="2025-02-13T15:40:50.368307013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:50.368991 kubelet[2557]: E0213 15:40:50.368613 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:50.368991 kubelet[2557]: E0213 15:40:50.368696 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:50.368991 kubelet[2557]: E0213 15:40:50.368724 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:50.370465 kubelet[2557]: E0213 15:40:50.370429 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:40:50.874508 kubelet[2557]: I0213 15:40:50.874451 2557 topology_manager.go:215] "Topology Admit Handler" podUID="7eb39159-e5bd-483e-be22-523627c9b8b6" podNamespace="default" podName="nginx-deployment-6d5f899847-2dgjq" Feb 13 15:40:50.881777 systemd[1]: Created slice 
kubepods-besteffort-pod7eb39159_e5bd_483e_be22_523627c9b8b6.slice - libcontainer container kubepods-besteffort-pod7eb39159_e5bd_483e_be22_523627c9b8b6.slice. Feb 13 15:40:51.040629 kubelet[2557]: E0213 15:40:51.040557 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:51.058838 kubelet[2557]: I0213 15:40:51.058785 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzvgh\" (UniqueName: \"kubernetes.io/projected/7eb39159-e5bd-483e-be22-523627c9b8b6-kube-api-access-jzvgh\") pod \"nginx-deployment-6d5f899847-2dgjq\" (UID: \"7eb39159-e5bd-483e-be22-523627c9b8b6\") " pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:40:51.136356 kubelet[2557]: I0213 15:40:51.136308 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6" Feb 13 15:40:51.137258 containerd[1726]: time="2025-02-13T15:40:51.137217409Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" Feb 13 15:40:51.137851 containerd[1726]: time="2025-02-13T15:40:51.137513117Z" level=info msg="Ensure that sandbox e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6 in task-service has been cleanup successfully" Feb 13 15:40:51.139960 containerd[1726]: time="2025-02-13T15:40:51.138115535Z" level=info msg="TearDown network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" successfully" Feb 13 15:40:51.139960 containerd[1726]: time="2025-02-13T15:40:51.138141935Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" returns successfully" Feb 13 15:40:51.140218 containerd[1726]: time="2025-02-13T15:40:51.140192195Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:40:51.140318 containerd[1726]: time="2025-02-13T15:40:51.140298398Z" level=info msg="TearDown network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" successfully" Feb 13 15:40:51.140367 containerd[1726]: time="2025-02-13T15:40:51.140319698Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" returns successfully" Feb 13 15:40:51.140753 containerd[1726]: time="2025-02-13T15:40:51.140723710Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:40:51.140842 containerd[1726]: time="2025-02-13T15:40:51.140820313Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:40:51.140884 containerd[1726]: time="2025-02-13T15:40:51.140844613Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:40:51.141370 containerd[1726]: time="2025-02-13T15:40:51.141341128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:3,}" Feb 13 15:40:51.141970 systemd[1]: run-netns-cni\x2d497653fb\x2d4d56\x2dc163\x2d1147\x2d5c5a892099d8.mount: Deactivated successfully. 
Feb 13 15:40:51.213621 containerd[1726]: time="2025-02-13T15:40:51.213569213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:0,}" Feb 13 15:40:51.704947 containerd[1726]: time="2025-02-13T15:40:51.704876795Z" level=error msg="Failed to destroy network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:51.705514 containerd[1726]: time="2025-02-13T15:40:51.705478413Z" level=error msg="encountered an error cleaning up failed sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:51.705691 containerd[1726]: time="2025-02-13T15:40:51.705667318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:51.706688 kubelet[2557]: E0213 15:40:51.706101 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:51.706688 kubelet[2557]: E0213 15:40:51.706174 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:51.706688 kubelet[2557]: E0213 15:40:51.706212 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:40:51.706928 kubelet[2557]: E0213 15:40:51.706346 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:40:51.793501 containerd[1726]: time="2025-02-13T15:40:51.793435452Z" level=error msg="Failed to destroy network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:51.794795 containerd[1726]: time="2025-02-13T15:40:51.793883165Z" level=error msg="encountered an error cleaning up failed sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:51.794795 containerd[1726]: time="2025-02-13T15:40:51.794002768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:51.795124 kubelet[2557]: E0213 15:40:51.794310 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:40:51.795124 kubelet[2557]: E0213 15:40:51.794377 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:40:51.795124 kubelet[2557]: E0213 15:40:51.794407 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:40:51.795290 kubelet[2557]: E0213 15:40:51.794481 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dgjq_default(7eb39159-e5bd-483e-be22-523627c9b8b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-6d5f899847-2dgjq_default(7eb39159-e5bd-483e-be22-523627c9b8b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dgjq" podUID="7eb39159-e5bd-483e-be22-523627c9b8b6" Feb 13 15:40:52.040965 kubelet[2557]: E0213 15:40:52.040777 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:52.141772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358-shm.mount: Deactivated successfully. Feb 13 15:40:52.142176 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0-shm.mount: Deactivated successfully. Feb 13 15:40:52.143952 kubelet[2557]: I0213 15:40:52.143563 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0" Feb 13 15:40:52.145069 kubelet[2557]: I0213 15:40:52.145037 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358" Feb 13 15:40:52.146523 containerd[1726]: time="2025-02-13T15:40:52.146101551Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\"" Feb 13 15:40:52.146523 containerd[1726]: time="2025-02-13T15:40:52.146365759Z" level=info msg="Ensure that sandbox a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358 in task-service has been cleanup successfully" Feb 13 15:40:52.147573 containerd[1726]: time="2025-02-13T15:40:52.147196684Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\"" Feb 13 15:40:52.147573 containerd[1726]: time="2025-02-13T15:40:52.147453391Z" level=info msg="Ensure that sandbox 4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0 in task-service has been cleanup successfully" Feb 13 15:40:52.147804 containerd[1726]: time="2025-02-13T15:40:52.147758701Z" level=info msg="TearDown network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" successfully" Feb 13 15:40:52.147898 containerd[1726]: time="2025-02-13T15:40:52.147880204Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" returns successfully" Feb 13 15:40:52.148765 containerd[1726]: time="2025-02-13T15:40:52.148488223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:1,}" Feb 13 15:40:52.149067 containerd[1726]: time="2025-02-13T15:40:52.149041239Z" level=info msg="TearDown network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" successfully" Feb 13 15:40:52.149168 containerd[1726]: time="2025-02-13T15:40:52.149151943Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" returns successfully" Feb 13 15:40:52.149668 containerd[1726]: time="2025-02-13T15:40:52.149603656Z" level=info msg="StopPodSandbox for 
\"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" Feb 13 15:40:52.149934 containerd[1726]: time="2025-02-13T15:40:52.149843564Z" level=info msg="TearDown network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" successfully" Feb 13 15:40:52.149934 containerd[1726]: time="2025-02-13T15:40:52.149864064Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" returns successfully" Feb 13 15:40:52.150672 containerd[1726]: time="2025-02-13T15:40:52.150490583Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:40:52.150672 containerd[1726]: time="2025-02-13T15:40:52.150594886Z" level=info msg="TearDown network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" successfully" Feb 13 15:40:52.150672 containerd[1726]: time="2025-02-13T15:40:52.150611487Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" returns successfully" Feb 13 15:40:52.151815 containerd[1726]: time="2025-02-13T15:40:52.151549015Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:40:52.151815 containerd[1726]: time="2025-02-13T15:40:52.151641318Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:40:52.151815 containerd[1726]: time="2025-02-13T15:40:52.151655318Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:40:52.152340 systemd[1]: run-netns-cni\x2dbebb1b54\x2d7f53\x2d2bb9\x2da763\x2d1ca92da1137c.mount: Deactivated successfully. Feb 13 15:40:52.152714 systemd[1]: run-netns-cni\x2df62dc431\x2dc3de\x2dd95d\x2d46c1\x2d321b75675559.mount: Deactivated successfully. 
Feb 13 15:40:52.153341 containerd[1726]: time="2025-02-13T15:40:52.152946657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:4,}" Feb 13 15:40:53.041648 kubelet[2557]: E0213 15:40:53.041578 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:54.042590 kubelet[2557]: E0213 15:40:54.042520 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:55.029961 kubelet[2557]: E0213 15:40:55.029875 2557 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:55.043201 kubelet[2557]: E0213 15:40:55.043141 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:56.043451 kubelet[2557]: E0213 15:40:56.043384 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:57.044362 kubelet[2557]: E0213 15:40:57.044292 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:58.044888 kubelet[2557]: E0213 15:40:58.044821 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:40:59.045331 kubelet[2557]: E0213 15:40:59.045211 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:00.046087 kubelet[2557]: E0213 15:41:00.045964 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:00.191231 containerd[1726]: time="2025-02-13T15:41:00.190819532Z" level=error msg="Failed to destroy network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:00.192481 containerd[1726]: time="2025-02-13T15:41:00.192219971Z" level=error msg="encountered an error cleaning up failed sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:00.192481 containerd[1726]: time="2025-02-13T15:41:00.192309873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:00.192769 kubelet[2557]: E0213 15:41:00.192648 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:00.192769 kubelet[2557]: E0213 15:41:00.192737 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:41:00.192769 kubelet[2557]: E0213 15:41:00.192766 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:41:00.192962 kubelet[2557]: E0213 15:41:00.192839 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:41:00.238714 containerd[1726]: time="2025-02-13T15:41:00.238655867Z" level=error msg="Failed to destroy network for sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:00.239322 containerd[1726]: time="2025-02-13T15:41:00.239198482Z" level=error msg="encountered an error cleaning up failed sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:00.239322 containerd[1726]: time="2025-02-13T15:41:00.239300485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:00.239653 kubelet[2557]: E0213 15:41:00.239621 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:00.239719 kubelet[2557]: E0213 15:41:00.239693 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:41:00.239761 kubelet[2557]: E0213 15:41:00.239722 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:41:00.240013 kubelet[2557]: E0213 15:41:00.239803 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dgjq_default(7eb39159-e5bd-483e-be22-523627c9b8b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dgjq_default(7eb39159-e5bd-483e-be22-523627c9b8b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dgjq" podUID="7eb39159-e5bd-483e-be22-523627c9b8b6" Feb 13 15:41:00.615643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f-shm.mount: Deactivated successfully. Feb 13 15:41:00.616276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2-shm.mount: Deactivated successfully. 
Feb 13 15:41:01.047005 kubelet[2557]: E0213 15:41:01.046868 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:01.166811 kubelet[2557]: I0213 15:41:01.165940 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2" Feb 13 15:41:01.167768 containerd[1726]: time="2025-02-13T15:41:01.167160885Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\"" Feb 13 15:41:01.167768 containerd[1726]: time="2025-02-13T15:41:01.167421992Z" level=info msg="Ensure that sandbox 87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2 in task-service has been cleanup successfully" Feb 13 15:41:01.168001 containerd[1726]: time="2025-02-13T15:41:01.167975208Z" level=info msg="TearDown network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" successfully" Feb 13 15:41:01.168092 containerd[1726]: time="2025-02-13T15:41:01.168076111Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" returns successfully" Feb 13 15:41:01.168521 containerd[1726]: time="2025-02-13T15:41:01.168493622Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\"" Feb 13 15:41:01.168710 containerd[1726]: time="2025-02-13T15:41:01.168691828Z" level=info msg="TearDown network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" successfully" Feb 13 15:41:01.168805 containerd[1726]: time="2025-02-13T15:41:01.168790330Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" returns successfully" Feb 13 15:41:01.169496 containerd[1726]: time="2025-02-13T15:41:01.169470849Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" Feb 13 15:41:01.169673 containerd[1726]: time="2025-02-13T15:41:01.169655955Z" level=info msg="TearDown network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" successfully" Feb 13 15:41:01.169744 containerd[1726]: time="2025-02-13T15:41:01.169730257Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" returns successfully" Feb 13 15:41:01.170254 containerd[1726]: time="2025-02-13T15:41:01.170229771Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:41:01.170433 containerd[1726]: time="2025-02-13T15:41:01.170414976Z" level=info msg="TearDown network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" successfully" Feb 13 15:41:01.170520 containerd[1726]: time="2025-02-13T15:41:01.170505978Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" returns successfully" Feb 13 15:41:01.170784 kubelet[2557]: I0213 15:41:01.170766 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f" Feb 13 15:41:01.172141 containerd[1726]: time="2025-02-13T15:41:01.171348702Z" level=info msg="StopPodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\"" Feb 13 15:41:01.172141 containerd[1726]: time="2025-02-13T15:41:01.171642910Z" level=info msg="Ensure that sandbox 
1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f in task-service has been cleanup successfully" Feb 13 15:41:01.172715 containerd[1726]: time="2025-02-13T15:41:01.172674439Z" level=info msg="TearDown network for sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" successfully" Feb 13 15:41:01.172833 containerd[1726]: time="2025-02-13T15:41:01.172817843Z" level=info msg="StopPodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" returns successfully" Feb 13 15:41:01.173218 containerd[1726]: time="2025-02-13T15:41:01.173198053Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\"" Feb 13 15:41:01.173423 containerd[1726]: time="2025-02-13T15:41:01.173405159Z" level=info msg="TearDown network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" successfully" Feb 13 15:41:01.173498 containerd[1726]: time="2025-02-13T15:41:01.173485062Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" returns successfully" Feb 13 15:41:01.173646 containerd[1726]: time="2025-02-13T15:41:01.173630166Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:41:01.173779 containerd[1726]: time="2025-02-13T15:41:01.173763869Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:41:01.173853 containerd[1726]: time="2025-02-13T15:41:01.173840671Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:41:01.174646 containerd[1726]: time="2025-02-13T15:41:01.174617193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:5,}" Feb 13 15:41:01.176200 systemd[1]: run-netns-cni\x2d6b6340f4\x2d1b5f\x2d0d2f\x2dea68\x2d53259b0548ed.mount: Deactivated successfully. Feb 13 15:41:01.177765 containerd[1726]: time="2025-02-13T15:41:01.176875056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:2,}" Feb 13 15:41:01.180781 systemd[1]: run-netns-cni\x2d7163daf8\x2d22c1\x2d7265\x2d5c39\x2dabeb1338b04c.mount: Deactivated successfully. 
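The cleanup cycle just above repeats on every retry: kubelet has containerd StopPodSandbox and tear down the network for each previously failed sandbox ID before issuing a fresh RunPodSandbox with the Attempt counter bumped (Attempt:5 for csi-node-driver-wl2j2, Attempt:2 for the nginx deployment pod at this point). A rough Go sketch of that loop follows; the names runPodSandbox and failedSandboxes are entirely hypothetical and the error string is copied from the log only for illustration.

    package main

    import (
        "errors"
        "fmt"
    )

    // runPodSandbox stands in for the CRI call that keeps failing above; it is
    // purely hypothetical and fails until the CNI prerequisite is satisfied.
    func runPodSandbox(attempt int) (string, error) {
        id := fmt.Sprintf("sandbox-attempt-%d", attempt)
        return id, errors.New("plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory")
    }

    func main() {
        var failedSandboxes []string
        for attempt := 1; attempt <= 3; attempt++ {
            // Each retry first tears down every sandbox left over from earlier attempts,
            // which is the chain of StopPodSandbox/TearDown messages seen in the log.
            for _, id := range failedSandboxes {
                fmt.Printf("StopPodSandbox %q / TearDown network\n", id)
            }
            id, err := runPodSandbox(attempt)
            if err != nil {
                fmt.Printf("RunPodSandbox attempt %d failed: %v\n", attempt, err)
                failedSandboxes = append(failedSandboxes, id)
            }
        }
    }

The list of sandboxes to tear down grows by one on each cycle, which is why the teardown chains in the log get longer as the Attempt numbers climb.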
Feb 13 15:41:01.879767 containerd[1726]: time="2025-02-13T15:41:01.879703275Z" level=error msg="Failed to destroy network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:01.884693 containerd[1726]: time="2025-02-13T15:41:01.884639612Z" level=error msg="encountered an error cleaning up failed sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:01.884822 containerd[1726]: time="2025-02-13T15:41:01.884741315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:01.886458 kubelet[2557]: E0213 15:41:01.885203 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:01.886458 kubelet[2557]: E0213 15:41:01.885279 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:41:01.886458 kubelet[2557]: E0213 15:41:01.885308 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:41:01.886665 kubelet[2557]: E0213 15:41:01.885388 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:41:01.886936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379-shm.mount: Deactivated successfully. Feb 13 15:41:01.959571 containerd[1726]: time="2025-02-13T15:41:01.958878985Z" level=error msg="Failed to destroy network for sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:01.959571 containerd[1726]: time="2025-02-13T15:41:01.959543503Z" level=error msg="encountered an error cleaning up failed sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:01.959784 containerd[1726]: time="2025-02-13T15:41:01.959620705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:01.960579 kubelet[2557]: E0213 15:41:01.960104 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:01.960579 kubelet[2557]: E0213 15:41:01.960175 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:41:01.960579 kubelet[2557]: E0213 15:41:01.960210 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:41:01.960847 kubelet[2557]: E0213 15:41:01.960284 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dgjq_default(7eb39159-e5bd-483e-be22-523627c9b8b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dgjq_default(7eb39159-e5bd-483e-be22-523627c9b8b6)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dgjq" podUID="7eb39159-e5bd-483e-be22-523627c9b8b6" Feb 13 15:41:02.047117 kubelet[2557]: E0213 15:41:02.047071 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:02.179698 kubelet[2557]: I0213 15:41:02.178551 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379" Feb 13 15:41:02.179849 containerd[1726]: time="2025-02-13T15:41:02.179690848Z" level=info msg="StopPodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\"" Feb 13 15:41:02.179979 containerd[1726]: time="2025-02-13T15:41:02.179956656Z" level=info msg="Ensure that sandbox adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379 in task-service has been cleanup successfully" Feb 13 15:41:02.182484 containerd[1726]: time="2025-02-13T15:41:02.180753478Z" level=info msg="TearDown network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" successfully" Feb 13 15:41:02.182484 containerd[1726]: time="2025-02-13T15:41:02.180993385Z" level=info msg="StopPodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" returns successfully" Feb 13 15:41:02.182784 containerd[1726]: time="2025-02-13T15:41:02.182762734Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\"" Feb 13 15:41:02.182970 containerd[1726]: time="2025-02-13T15:41:02.182950139Z" level=info msg="TearDown network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" successfully" Feb 13 15:41:02.183062 containerd[1726]: time="2025-02-13T15:41:02.183048942Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" returns successfully" Feb 13 15:41:02.183667 containerd[1726]: time="2025-02-13T15:41:02.183644759Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\"" Feb 13 15:41:02.183847 containerd[1726]: time="2025-02-13T15:41:02.183828664Z" level=info msg="TearDown network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" successfully" Feb 13 15:41:02.184477 containerd[1726]: time="2025-02-13T15:41:02.183932867Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" returns successfully" Feb 13 15:41:02.184965 kubelet[2557]: I0213 15:41:02.184768 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925" Feb 13 15:41:02.185762 containerd[1726]: time="2025-02-13T15:41:02.185717316Z" level=info msg="StopPodSandbox for \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\"" Feb 13 15:41:02.186087 containerd[1726]: time="2025-02-13T15:41:02.186064426Z" level=info msg="Ensure that sandbox d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925 in task-service has been cleanup successfully" Feb 13 15:41:02.186314 containerd[1726]: time="2025-02-13T15:41:02.186296433Z" level=info msg="TearDown network for sandbox 
\"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\" successfully" Feb 13 15:41:02.186402 containerd[1726]: time="2025-02-13T15:41:02.186387735Z" level=info msg="StopPodSandbox for \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\" returns successfully" Feb 13 15:41:02.186582 containerd[1726]: time="2025-02-13T15:41:02.186562840Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" Feb 13 15:41:02.186746 containerd[1726]: time="2025-02-13T15:41:02.186728245Z" level=info msg="TearDown network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" successfully" Feb 13 15:41:02.186825 containerd[1726]: time="2025-02-13T15:41:02.186811747Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" returns successfully" Feb 13 15:41:02.187717 containerd[1726]: time="2025-02-13T15:41:02.187187657Z" level=info msg="StopPodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\"" Feb 13 15:41:02.187717 containerd[1726]: time="2025-02-13T15:41:02.187277260Z" level=info msg="TearDown network for sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" successfully" Feb 13 15:41:02.187717 containerd[1726]: time="2025-02-13T15:41:02.187291960Z" level=info msg="StopPodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" returns successfully" Feb 13 15:41:02.187717 containerd[1726]: time="2025-02-13T15:41:02.187387763Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:41:02.187717 containerd[1726]: time="2025-02-13T15:41:02.187454565Z" level=info msg="TearDown network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" successfully" Feb 13 15:41:02.187717 containerd[1726]: time="2025-02-13T15:41:02.187465865Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" returns successfully" Feb 13 15:41:02.188229 containerd[1726]: time="2025-02-13T15:41:02.188209986Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:41:02.188378 containerd[1726]: time="2025-02-13T15:41:02.188361090Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:41:02.188456 containerd[1726]: time="2025-02-13T15:41:02.188440492Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:41:02.188586 containerd[1726]: time="2025-02-13T15:41:02.188570196Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\"" Feb 13 15:41:02.188763 containerd[1726]: time="2025-02-13T15:41:02.188707400Z" level=info msg="TearDown network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" successfully" Feb 13 15:41:02.188833 containerd[1726]: time="2025-02-13T15:41:02.188819703Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" returns successfully" Feb 13 15:41:02.189464 containerd[1726]: time="2025-02-13T15:41:02.189440320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:6,}" Feb 13 15:41:02.190848 containerd[1726]: 
time="2025-02-13T15:41:02.190821459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:3,}" Feb 13 15:41:02.774674 systemd[1]: run-netns-cni\x2dff283b82\x2db068\x2def9a\x2dc8e0\x2d626a85f4a217.mount: Deactivated successfully. Feb 13 15:41:02.774786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925-shm.mount: Deactivated successfully. Feb 13 15:41:02.774871 systemd[1]: run-netns-cni\x2d0d35d4a9\x2dd90c\x2d85b8\x2dcce1\x2d47b8d6d4e75c.mount: Deactivated successfully. Feb 13 15:41:02.860018 containerd[1726]: time="2025-02-13T15:41:02.859947137Z" level=error msg="Failed to destroy network for sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:02.862563 containerd[1726]: time="2025-02-13T15:41:02.862494908Z" level=error msg="encountered an error cleaning up failed sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:02.862796 containerd[1726]: time="2025-02-13T15:41:02.862754715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:02.865066 kubelet[2557]: E0213 15:41:02.864522 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:02.865066 kubelet[2557]: E0213 15:41:02.864604 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:41:02.865066 kubelet[2557]: E0213 15:41:02.864632 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:41:02.865277 kubelet[2557]: E0213 15:41:02.864707 2557 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:41:02.866024 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5-shm.mount: Deactivated successfully. Feb 13 15:41:02.902545 containerd[1726]: time="2025-02-13T15:41:02.902481224Z" level=error msg="Failed to destroy network for sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:02.905935 containerd[1726]: time="2025-02-13T15:41:02.903409150Z" level=error msg="encountered an error cleaning up failed sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:02.905935 containerd[1726]: time="2025-02-13T15:41:02.903497252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:02.906174 kubelet[2557]: E0213 15:41:02.903803 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:02.906174 kubelet[2557]: E0213 15:41:02.903880 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:41:02.906174 kubelet[2557]: E0213 15:41:02.903928 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:41:02.906334 kubelet[2557]: E0213 15:41:02.904009 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dgjq_default(7eb39159-e5bd-483e-be22-523627c9b8b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dgjq_default(7eb39159-e5bd-483e-be22-523627c9b8b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dgjq" podUID="7eb39159-e5bd-483e-be22-523627c9b8b6" Feb 13 15:41:02.907151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741-shm.mount: Deactivated successfully. Feb 13 15:41:03.048255 kubelet[2557]: E0213 15:41:03.047918 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:03.152651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount309912858.mount: Deactivated successfully. Feb 13 15:41:03.189435 kubelet[2557]: I0213 15:41:03.189399 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5" Feb 13 15:41:03.190319 containerd[1726]: time="2025-02-13T15:41:03.190241456Z" level=info msg="StopPodSandbox for \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\"" Feb 13 15:41:03.190521 containerd[1726]: time="2025-02-13T15:41:03.190494263Z" level=info msg="Ensure that sandbox 49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5 in task-service has been cleanup successfully" Feb 13 15:41:03.192888 containerd[1726]: time="2025-02-13T15:41:03.190788672Z" level=info msg="TearDown network for sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\" successfully" Feb 13 15:41:03.192888 containerd[1726]: time="2025-02-13T15:41:03.190814272Z" level=info msg="StopPodSandbox for \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\" returns successfully" Feb 13 15:41:03.192888 containerd[1726]: time="2025-02-13T15:41:03.191244584Z" level=info msg="StopPodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\"" Feb 13 15:41:03.192888 containerd[1726]: time="2025-02-13T15:41:03.191337787Z" level=info msg="TearDown network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" successfully" Feb 13 15:41:03.192888 containerd[1726]: time="2025-02-13T15:41:03.191390688Z" level=info msg="StopPodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" returns successfully" Feb 13 15:41:03.193370 systemd[1]: run-netns-cni\x2dba66ac62\x2d411b\x2de9f0\x2dd1b6\x2d160c508bfa3a.mount: Deactivated successfully. 
Feb 13 15:41:03.193651 containerd[1726]: time="2025-02-13T15:41:03.193583250Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\"" Feb 13 15:41:03.193709 containerd[1726]: time="2025-02-13T15:41:03.193662852Z" level=info msg="TearDown network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" successfully" Feb 13 15:41:03.193709 containerd[1726]: time="2025-02-13T15:41:03.193677152Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" returns successfully" Feb 13 15:41:03.194454 containerd[1726]: time="2025-02-13T15:41:03.194432273Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\"" Feb 13 15:41:03.194558 containerd[1726]: time="2025-02-13T15:41:03.194512175Z" level=info msg="TearDown network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" successfully" Feb 13 15:41:03.194558 containerd[1726]: time="2025-02-13T15:41:03.194526376Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" returns successfully" Feb 13 15:41:03.195648 containerd[1726]: time="2025-02-13T15:41:03.195459102Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" Feb 13 15:41:03.195648 containerd[1726]: time="2025-02-13T15:41:03.195540704Z" level=info msg="TearDown network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" successfully" Feb 13 15:41:03.195648 containerd[1726]: time="2025-02-13T15:41:03.195555205Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" returns successfully" Feb 13 15:41:03.196340 containerd[1726]: time="2025-02-13T15:41:03.196260124Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:41:03.196582 containerd[1726]: time="2025-02-13T15:41:03.196351727Z" level=info msg="TearDown network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" successfully" Feb 13 15:41:03.196582 containerd[1726]: time="2025-02-13T15:41:03.196366127Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" returns successfully" Feb 13 15:41:03.196778 kubelet[2557]: I0213 15:41:03.196749 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741" Feb 13 15:41:03.197599 containerd[1726]: time="2025-02-13T15:41:03.197498859Z" level=info msg="StopPodSandbox for \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\"" Feb 13 15:41:03.197599 containerd[1726]: time="2025-02-13T15:41:03.197568161Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:41:03.197914 containerd[1726]: time="2025-02-13T15:41:03.197728065Z" level=info msg="Ensure that sandbox b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741 in task-service has been cleanup successfully" Feb 13 15:41:03.197914 containerd[1726]: time="2025-02-13T15:41:03.197828268Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:41:03.197914 containerd[1726]: time="2025-02-13T15:41:03.197843868Z" level=info msg="StopPodSandbox for 
\"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:41:03.197914 containerd[1726]: time="2025-02-13T15:41:03.197894370Z" level=info msg="TearDown network for sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\" successfully" Feb 13 15:41:03.198290 containerd[1726]: time="2025-02-13T15:41:03.197923471Z" level=info msg="StopPodSandbox for \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\" returns successfully" Feb 13 15:41:03.198506 containerd[1726]: time="2025-02-13T15:41:03.198435285Z" level=info msg="StopPodSandbox for \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\"" Feb 13 15:41:03.198575 containerd[1726]: time="2025-02-13T15:41:03.198525187Z" level=info msg="TearDown network for sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\" successfully" Feb 13 15:41:03.198575 containerd[1726]: time="2025-02-13T15:41:03.198540388Z" level=info msg="StopPodSandbox for \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\" returns successfully" Feb 13 15:41:03.198653 containerd[1726]: time="2025-02-13T15:41:03.198443685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:7,}" Feb 13 15:41:03.199105 containerd[1726]: time="2025-02-13T15:41:03.199081703Z" level=info msg="StopPodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\"" Feb 13 15:41:03.199182 containerd[1726]: time="2025-02-13T15:41:03.199168705Z" level=info msg="TearDown network for sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" successfully" Feb 13 15:41:03.199237 containerd[1726]: time="2025-02-13T15:41:03.199184006Z" level=info msg="StopPodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" returns successfully" Feb 13 15:41:03.199558 containerd[1726]: time="2025-02-13T15:41:03.199440913Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\"" Feb 13 15:41:03.199558 containerd[1726]: time="2025-02-13T15:41:03.199535116Z" level=info msg="TearDown network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" successfully" Feb 13 15:41:03.199558 containerd[1726]: time="2025-02-13T15:41:03.199550116Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" returns successfully" Feb 13 15:41:03.199936 containerd[1726]: time="2025-02-13T15:41:03.199898526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:4,}" Feb 13 15:41:03.767345 systemd[1]: run-netns-cni\x2db3cd93a5\x2d2013\x2d5e62\x2d0d91\x2defe62011fd27.mount: Deactivated successfully. 
Feb 13 15:41:03.817486 containerd[1726]: time="2025-02-13T15:41:03.817402262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:03.959041 containerd[1726]: time="2025-02-13T15:41:03.958946413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 15:41:04.048666 kubelet[2557]: E0213 15:41:04.048505 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:04.054679 containerd[1726]: time="2025-02-13T15:41:04.054593283Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:04.103526 containerd[1726]: time="2025-02-13T15:41:04.103467348Z" level=error msg="Failed to destroy network for sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:04.103880 containerd[1726]: time="2025-02-13T15:41:04.103844858Z" level=error msg="encountered an error cleaning up failed sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:04.104055 containerd[1726]: time="2025-02-13T15:41:04.103945361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:04.104365 kubelet[2557]: E0213 15:41:04.104320 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:04.104476 kubelet[2557]: E0213 15:41:04.104390 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:41:04.104476 kubelet[2557]: E0213 15:41:04.104417 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:41:04.104568 kubelet[2557]: E0213 15:41:04.104492 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:41:04.204040 kubelet[2557]: I0213 15:41:04.203794 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21" Feb 13 15:41:04.205829 containerd[1726]: time="2025-02-13T15:41:04.205412193Z" level=info msg="StopPodSandbox for \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\"" Feb 13 15:41:04.205991 containerd[1726]: time="2025-02-13T15:41:04.205831305Z" level=info msg="Ensure that sandbox d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21 in task-service has been cleanup successfully" Feb 13 15:41:04.206079 containerd[1726]: time="2025-02-13T15:41:04.206052211Z" level=info msg="TearDown network for sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\" successfully" Feb 13 15:41:04.206316 containerd[1726]: time="2025-02-13T15:41:04.206121313Z" level=info msg="StopPodSandbox for \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\" returns successfully" Feb 13 15:41:04.206620 containerd[1726]: time="2025-02-13T15:41:04.206595026Z" level=info msg="StopPodSandbox for \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\"" Feb 13 15:41:04.206771 containerd[1726]: time="2025-02-13T15:41:04.206710929Z" level=info msg="TearDown network for sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\" successfully" Feb 13 15:41:04.206771 containerd[1726]: time="2025-02-13T15:41:04.206729430Z" level=info msg="StopPodSandbox for \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\" returns successfully" Feb 13 15:41:04.208921 containerd[1726]: time="2025-02-13T15:41:04.208094968Z" level=info msg="StopPodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\"" Feb 13 15:41:04.208921 containerd[1726]: time="2025-02-13T15:41:04.208188871Z" level=info msg="TearDown network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" successfully" Feb 13 15:41:04.208921 containerd[1726]: time="2025-02-13T15:41:04.208204271Z" level=info msg="StopPodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" returns successfully" Feb 13 15:41:04.208921 containerd[1726]: time="2025-02-13T15:41:04.208493979Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\"" Feb 13 15:41:04.208921 containerd[1726]: time="2025-02-13T15:41:04.208571981Z" level=info msg="TearDown network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" successfully" Feb 13 15:41:04.208921 
containerd[1726]: time="2025-02-13T15:41:04.208585682Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" returns successfully" Feb 13 15:41:04.209197 containerd[1726]: time="2025-02-13T15:41:04.209056895Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\"" Feb 13 15:41:04.209197 containerd[1726]: time="2025-02-13T15:41:04.209155198Z" level=info msg="TearDown network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" successfully" Feb 13 15:41:04.209197 containerd[1726]: time="2025-02-13T15:41:04.209169598Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" returns successfully" Feb 13 15:41:04.209778 containerd[1726]: time="2025-02-13T15:41:04.209638311Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" Feb 13 15:41:04.209880 containerd[1726]: time="2025-02-13T15:41:04.209860317Z" level=info msg="TearDown network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" successfully" Feb 13 15:41:04.209946 containerd[1726]: time="2025-02-13T15:41:04.209881818Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" returns successfully" Feb 13 15:41:04.210461 containerd[1726]: time="2025-02-13T15:41:04.210433733Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:41:04.210581 containerd[1726]: time="2025-02-13T15:41:04.210560237Z" level=info msg="TearDown network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" successfully" Feb 13 15:41:04.210639 containerd[1726]: time="2025-02-13T15:41:04.210581237Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" returns successfully" Feb 13 15:41:04.211047 containerd[1726]: time="2025-02-13T15:41:04.211019650Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:41:04.211131 containerd[1726]: time="2025-02-13T15:41:04.211112352Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:41:04.211131 containerd[1726]: time="2025-02-13T15:41:04.211127053Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:41:04.212485 containerd[1726]: time="2025-02-13T15:41:04.212454090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:8,}" Feb 13 15:41:04.219396 containerd[1726]: time="2025-02-13T15:41:04.219365183Z" level=error msg="Failed to destroy network for sandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:04.219709 containerd[1726]: time="2025-02-13T15:41:04.219681692Z" level=error msg="encountered an error cleaning up failed sandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:04.219788 containerd[1726]: time="2025-02-13T15:41:04.219750793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:04.220002 kubelet[2557]: E0213 15:41:04.219976 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:04.220097 kubelet[2557]: E0213 15:41:04.220042 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:41:04.220097 kubelet[2557]: E0213 15:41:04.220074 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dgjq" Feb 13 15:41:04.220193 kubelet[2557]: E0213 15:41:04.220143 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dgjq_default(7eb39159-e5bd-483e-be22-523627c9b8b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dgjq_default(7eb39159-e5bd-483e-be22-523627c9b8b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dgjq" podUID="7eb39159-e5bd-483e-be22-523627c9b8b6" Feb 13 15:41:04.511787 containerd[1726]: time="2025-02-13T15:41:04.511655042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:04.512797 containerd[1726]: time="2025-02-13T15:41:04.512524166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 15.38220329s" Feb 13 15:41:04.512797 containerd[1726]: time="2025-02-13T15:41:04.512601968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 15:41:04.524571 containerd[1726]: time="2025-02-13T15:41:04.524382697Z" level=info msg="CreateContainer within sandbox \"17529e362279cfa53e8ccf3310169ad96f4dd3224b7070667b87e2d1816c8098\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:41:04.770560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696-shm.mount: Deactivated successfully. Feb 13 15:41:04.770688 systemd[1]: run-netns-cni\x2d6aba919b\x2d0f77\x2d3601\x2da4f0\x2de19b56d23f5f.mount: Deactivated successfully. Feb 13 15:41:04.770767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21-shm.mount: Deactivated successfully. Feb 13 15:41:05.049251 kubelet[2557]: E0213 15:41:05.049049 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:05.206558 containerd[1726]: time="2025-02-13T15:41:05.206502137Z" level=info msg="CreateContainer within sandbox \"17529e362279cfa53e8ccf3310169ad96f4dd3224b7070667b87e2d1816c8098\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c54cae94eb82d9da0b9e766d0325fc3e4c0cf84dde82a2e8952421535d8ac856\"" Feb 13 15:41:05.207489 containerd[1726]: time="2025-02-13T15:41:05.207316060Z" level=info msg="StartContainer for \"c54cae94eb82d9da0b9e766d0325fc3e4c0cf84dde82a2e8952421535d8ac856\"" Feb 13 15:41:05.210081 kubelet[2557]: I0213 15:41:05.209941 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696" Feb 13 15:41:05.212314 containerd[1726]: time="2025-02-13T15:41:05.212224997Z" level=info msg="StopPodSandbox for \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\"" Feb 13 15:41:05.213918 containerd[1726]: time="2025-02-13T15:41:05.212448103Z" level=info msg="Ensure that sandbox 2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696 in task-service has been cleanup successfully" Feb 13 15:41:05.217231 systemd[1]: run-netns-cni\x2ddd77c785\x2d5a90\x2d508e\x2da613\x2dd66f5ad309b8.mount: Deactivated successfully. 
Feb 13 15:41:05.219032 containerd[1726]: time="2025-02-13T15:41:05.218739579Z" level=info msg="TearDown network for sandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\" successfully" Feb 13 15:41:05.219032 containerd[1726]: time="2025-02-13T15:41:05.218762579Z" level=info msg="StopPodSandbox for \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\" returns successfully" Feb 13 15:41:05.219275 containerd[1726]: time="2025-02-13T15:41:05.219134590Z" level=info msg="StopPodSandbox for \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\"" Feb 13 15:41:05.219330 containerd[1726]: time="2025-02-13T15:41:05.219275994Z" level=info msg="TearDown network for sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\" successfully" Feb 13 15:41:05.219330 containerd[1726]: time="2025-02-13T15:41:05.219292894Z" level=info msg="StopPodSandbox for \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\" returns successfully" Feb 13 15:41:05.219930 containerd[1726]: time="2025-02-13T15:41:05.219822409Z" level=info msg="StopPodSandbox for \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\"" Feb 13 15:41:05.220087 containerd[1726]: time="2025-02-13T15:41:05.219950313Z" level=info msg="TearDown network for sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\" successfully" Feb 13 15:41:05.220087 containerd[1726]: time="2025-02-13T15:41:05.219968713Z" level=info msg="StopPodSandbox for \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\" returns successfully" Feb 13 15:41:05.220481 containerd[1726]: time="2025-02-13T15:41:05.220451026Z" level=info msg="StopPodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\"" Feb 13 15:41:05.220631 containerd[1726]: time="2025-02-13T15:41:05.220548129Z" level=info msg="TearDown network for sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" successfully" Feb 13 15:41:05.220631 containerd[1726]: time="2025-02-13T15:41:05.220566330Z" level=info msg="StopPodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" returns successfully" Feb 13 15:41:05.221278 containerd[1726]: time="2025-02-13T15:41:05.221144746Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\"" Feb 13 15:41:05.221530 containerd[1726]: time="2025-02-13T15:41:05.221478955Z" level=info msg="TearDown network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" successfully" Feb 13 15:41:05.221687 containerd[1726]: time="2025-02-13T15:41:05.221620959Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" returns successfully" Feb 13 15:41:05.222485 containerd[1726]: time="2025-02-13T15:41:05.222402381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:5,}" Feb 13 15:41:05.254153 systemd[1]: Started cri-containerd-c54cae94eb82d9da0b9e766d0325fc3e4c0cf84dde82a2e8952421535d8ac856.scope - libcontainer container c54cae94eb82d9da0b9e766d0325fc3e4c0cf84dde82a2e8952421535d8ac856. 
Feb 13 15:41:05.296423 containerd[1726]: time="2025-02-13T15:41:05.295812730Z" level=error msg="Failed to destroy network for sandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:05.296423 containerd[1726]: time="2025-02-13T15:41:05.296226542Z" level=error msg="encountered an error cleaning up failed sandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:05.296423 containerd[1726]: time="2025-02-13T15:41:05.296309644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:05.296954 kubelet[2557]: E0213 15:41:05.296816 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:41:05.296954 kubelet[2557]: E0213 15:41:05.296889 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:41:05.297351 kubelet[2557]: E0213 15:41:05.297236 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wl2j2" Feb 13 15:41:05.297943 kubelet[2557]: E0213 15:41:05.297923 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wl2j2_calico-system(357ee354-ebda-4e13-a2f3-9c1549b2abf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-wl2j2" podUID="357ee354-ebda-4e13-a2f3-9c1549b2abf5" Feb 13 15:41:05.369836 containerd[1726]: time="2025-02-13T15:41:05.369584389Z" level=info msg="StartContainer for \"c54cae94eb82d9da0b9e766d0325fc3e4c0cf84dde82a2e8952421535d8ac856\" returns successfully" Feb 13 15:41:05.632176 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:41:05.632311 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:41:05.772861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998-shm.mount: Deactivated successfully. Feb 13 15:41:05.839469 systemd-networkd[1476]: calif279d7a677d: Link UP Feb 13 15:41:05.839734 systemd-networkd[1476]: calif279d7a677d: Gained carrier Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.697 [INFO][3503] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.709 [INFO][3503] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0 nginx-deployment-6d5f899847- default 7eb39159-e5bd-483e-be22-523627c9b8b6 1238 0 2025-02-13 15:40:50 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.18 nginx-deployment-6d5f899847-2dgjq eth0 default [] [] [kns.default ksa.default.default] calif279d7a677d [] []}} ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Namespace="default" Pod="nginx-deployment-6d5f899847-2dgjq" WorkloadEndpoint="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.709 [INFO][3503] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Namespace="default" Pod="nginx-deployment-6d5f899847-2dgjq" WorkloadEndpoint="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.747 [INFO][3520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" HandleID="k8s-pod-network.20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Workload="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.759 [INFO][3520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" HandleID="k8s-pod-network.20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Workload="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ef0), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.18", "pod":"nginx-deployment-6d5f899847-2dgjq", "timestamp":"2025-02-13 15:41:05.747829847 +0000 UTC"}, Hostname:"10.200.8.18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.760 [INFO][3520] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.760 [INFO][3520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.760 [INFO][3520] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.18' Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.763 [INFO][3520] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" host="10.200.8.18" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.772 [INFO][3520] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.18" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.776 [INFO][3520] ipam/ipam.go 521: Ran out of existing affine blocks for host host="10.200.8.18" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.778 [INFO][3520] ipam/ipam.go 538: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="10.200.8.18" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.780 [INFO][3520] ipam/ipam_block_reader_writer.go 154: Found free block: 192.168.60.128/26 Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.780 [INFO][3520] ipam/ipam.go 550: Found unclaimed block host="10.200.8.18" subnet=192.168.60.128/26 Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.780 [INFO][3520] ipam/ipam_block_reader_writer.go 171: Trying to create affinity in pending state host="10.200.8.18" subnet=192.168.60.128/26 Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.786 [INFO][3520] ipam/ipam_block_reader_writer.go 201: Successfully created pending affinity for block host="10.200.8.18" subnet=192.168.60.128/26 Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.786 [INFO][3520] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.788 [INFO][3520] ipam/ipam.go 160: The referenced block doesn't exist, trying to create it cidr=192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.791 [INFO][3520] ipam/ipam.go 167: Wrote affinity as pending cidr=192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.792 [INFO][3520] ipam/ipam.go 176: Attempting to claim the block cidr=192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.792 [INFO][3520] ipam/ipam_block_reader_writer.go 223: Attempting to create a new block host="10.200.8.18" subnet=192.168.60.128/26 Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.804 [INFO][3520] ipam/ipam_block_reader_writer.go 264: Successfully created block Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.804 [INFO][3520] ipam/ipam_block_reader_writer.go 275: Confirming affinity host="10.200.8.18" subnet=192.168.60.128/26 Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.809 [INFO][3520] ipam/ipam_block_reader_writer.go 290: Successfully confirmed affinity host="10.200.8.18" subnet=192.168.60.128/26 Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.809 [INFO][3520] ipam/ipam.go 585: Block '192.168.60.128/26' has 64 free ips which is more than 1 ips required. 
host="10.200.8.18" subnet=192.168.60.128/26 Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.809 [INFO][3520] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" host="10.200.8.18" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.810 [INFO][3520] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1 Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.814 [INFO][3520] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" host="10.200.8.18" Feb 13 15:41:05.850832 containerd[1726]: 2025-02-13 15:41:05.822 [INFO][3520] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.128/26] block=192.168.60.128/26 handle="k8s-pod-network.20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" host="10.200.8.18" Feb 13 15:41:05.852727 containerd[1726]: 2025-02-13 15:41:05.822 [INFO][3520] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.128/26] handle="k8s-pod-network.20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" host="10.200.8.18" Feb 13 15:41:05.852727 containerd[1726]: 2025-02-13 15:41:05.822 [INFO][3520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:41:05.852727 containerd[1726]: 2025-02-13 15:41:05.822 [INFO][3520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.128/26] IPv6=[] ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" HandleID="k8s-pod-network.20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Workload="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0" Feb 13 15:41:05.852727 containerd[1726]: 2025-02-13 15:41:05.824 [INFO][3503] cni-plugin/k8s.go 386: Populated endpoint ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Namespace="default" Pod="nginx-deployment-6d5f899847-2dgjq" WorkloadEndpoint="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"7eb39159-e5bd-483e-be22-523627c9b8b6", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 40, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.18", ContainerID:"", Pod:"nginx-deployment-6d5f899847-2dgjq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif279d7a677d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:41:05.852727 
containerd[1726]: 2025-02-13 15:41:05.824 [INFO][3503] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.128/32] ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Namespace="default" Pod="nginx-deployment-6d5f899847-2dgjq" WorkloadEndpoint="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0" Feb 13 15:41:05.852727 containerd[1726]: 2025-02-13 15:41:05.824 [INFO][3503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif279d7a677d ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Namespace="default" Pod="nginx-deployment-6d5f899847-2dgjq" WorkloadEndpoint="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0" Feb 13 15:41:05.852727 containerd[1726]: 2025-02-13 15:41:05.836 [INFO][3503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Namespace="default" Pod="nginx-deployment-6d5f899847-2dgjq" WorkloadEndpoint="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0" Feb 13 15:41:05.852727 containerd[1726]: 2025-02-13 15:41:05.837 [INFO][3503] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Namespace="default" Pod="nginx-deployment-6d5f899847-2dgjq" WorkloadEndpoint="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"7eb39159-e5bd-483e-be22-523627c9b8b6", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 40, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.18", ContainerID:"20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1", Pod:"nginx-deployment-6d5f899847-2dgjq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif279d7a677d", MAC:"42:40:3a:02:7e:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:41:05.852727 containerd[1726]: 2025-02-13 15:41:05.849 [INFO][3503] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1" Namespace="default" Pod="nginx-deployment-6d5f899847-2dgjq" WorkloadEndpoint="10.200.8.18-k8s-nginx--deployment--6d5f899847--2dgjq-eth0" Feb 13 15:41:05.935894 containerd[1726]: time="2025-02-13T15:41:05.935266179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:41:05.935894 containerd[1726]: time="2025-02-13T15:41:05.935333681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:41:05.935894 containerd[1726]: time="2025-02-13T15:41:05.935354782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:05.935894 containerd[1726]: time="2025-02-13T15:41:05.935441684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:05.965097 systemd[1]: Started cri-containerd-20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1.scope - libcontainer container 20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1. Feb 13 15:41:06.001863 containerd[1726]: time="2025-02-13T15:41:06.001792536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dgjq,Uid:7eb39159-e5bd-483e-be22-523627c9b8b6,Namespace:default,Attempt:5,} returns sandbox id \"20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1\"" Feb 13 15:41:06.003713 containerd[1726]: time="2025-02-13T15:41:06.003673589Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:41:06.049670 kubelet[2557]: E0213 15:41:06.049626 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:06.223352 kubelet[2557]: I0213 15:41:06.222481 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998" Feb 13 15:41:06.223513 containerd[1726]: time="2025-02-13T15:41:06.222975810Z" level=info msg="StopPodSandbox for \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\"" Feb 13 15:41:06.232122 containerd[1726]: time="2025-02-13T15:41:06.225133371Z" level=info msg="Ensure that sandbox 62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998 in task-service has been cleanup successfully" Feb 13 15:41:06.231859 systemd[1]: run-netns-cni\x2d48a95b9b\x2dc456\x2dd2f6\x2d75c4\x2dafb1b0d50998.mount: Deactivated successfully. 
Feb 13 15:41:06.238534 containerd[1726]: time="2025-02-13T15:41:06.238501044Z" level=info msg="TearDown network for sandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\" successfully" Feb 13 15:41:06.238534 containerd[1726]: time="2025-02-13T15:41:06.238531145Z" level=info msg="StopPodSandbox for \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\" returns successfully" Feb 13 15:41:06.239757 containerd[1726]: time="2025-02-13T15:41:06.239724878Z" level=info msg="StopPodSandbox for \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\"" Feb 13 15:41:06.239980 containerd[1726]: time="2025-02-13T15:41:06.239949984Z" level=info msg="TearDown network for sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\" successfully" Feb 13 15:41:06.240063 containerd[1726]: time="2025-02-13T15:41:06.239978685Z" level=info msg="StopPodSandbox for \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\" returns successfully" Feb 13 15:41:06.241431 kubelet[2557]: I0213 15:41:06.241406 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-hzzdf" podStartSLOduration=4.1439804 podStartE2EDuration="31.241358924s" podCreationTimestamp="2025-02-13 15:40:35 +0000 UTC" firstStartedPulling="2025-02-13 15:40:37.415863562 +0000 UTC m=+3.321835612" lastFinishedPulling="2025-02-13 15:41:04.513241986 +0000 UTC m=+30.419214136" observedRunningTime="2025-02-13 15:41:06.241101516 +0000 UTC m=+32.147073566" watchObservedRunningTime="2025-02-13 15:41:06.241358924 +0000 UTC m=+32.147331074" Feb 13 15:41:06.245581 containerd[1726]: time="2025-02-13T15:41:06.245553741Z" level=info msg="StopPodSandbox for \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\"" Feb 13 15:41:06.245677 containerd[1726]: time="2025-02-13T15:41:06.245657244Z" level=info msg="TearDown network for sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\" successfully" Feb 13 15:41:06.245734 containerd[1726]: time="2025-02-13T15:41:06.245679444Z" level=info msg="StopPodSandbox for \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\" returns successfully" Feb 13 15:41:06.246387 containerd[1726]: time="2025-02-13T15:41:06.246362763Z" level=info msg="StopPodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\"" Feb 13 15:41:06.247092 containerd[1726]: time="2025-02-13T15:41:06.246849677Z" level=info msg="TearDown network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" successfully" Feb 13 15:41:06.247179 containerd[1726]: time="2025-02-13T15:41:06.247091784Z" level=info msg="StopPodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" returns successfully" Feb 13 15:41:06.247746 containerd[1726]: time="2025-02-13T15:41:06.247720301Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\"" Feb 13 15:41:06.249365 containerd[1726]: time="2025-02-13T15:41:06.247809004Z" level=info msg="TearDown network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" successfully" Feb 13 15:41:06.249480 containerd[1726]: time="2025-02-13T15:41:06.249461250Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" returns successfully" Feb 13 15:41:06.250048 containerd[1726]: time="2025-02-13T15:41:06.250025966Z" level=info msg="StopPodSandbox for 
\"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\"" Feb 13 15:41:06.250303 containerd[1726]: time="2025-02-13T15:41:06.250283673Z" level=info msg="TearDown network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" successfully" Feb 13 15:41:06.250413 containerd[1726]: time="2025-02-13T15:41:06.250385676Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" returns successfully" Feb 13 15:41:06.250999 containerd[1726]: time="2025-02-13T15:41:06.250978692Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" Feb 13 15:41:06.251212 containerd[1726]: time="2025-02-13T15:41:06.251183798Z" level=info msg="TearDown network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" successfully" Feb 13 15:41:06.251309 containerd[1726]: time="2025-02-13T15:41:06.251294401Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" returns successfully" Feb 13 15:41:06.251999 containerd[1726]: time="2025-02-13T15:41:06.251854017Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:41:06.252361 containerd[1726]: time="2025-02-13T15:41:06.252269128Z" level=info msg="TearDown network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" successfully" Feb 13 15:41:06.252361 containerd[1726]: time="2025-02-13T15:41:06.252288429Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" returns successfully" Feb 13 15:41:06.253044 containerd[1726]: time="2025-02-13T15:41:06.252676740Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:41:06.253044 containerd[1726]: time="2025-02-13T15:41:06.252760342Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:41:06.253044 containerd[1726]: time="2025-02-13T15:41:06.252772742Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:41:06.253518 containerd[1726]: time="2025-02-13T15:41:06.253493962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:9,}" Feb 13 15:41:06.606666 systemd-networkd[1476]: calie6b744c0a5c: Link UP Feb 13 15:41:06.608878 systemd-networkd[1476]: calie6b744c0a5c: Gained carrier Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.483 [INFO][3602] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.495 [INFO][3602] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.18-k8s-csi--node--driver--wl2j2-eth0 csi-node-driver- calico-system 357ee354-ebda-4e13-a2f3-9c1549b2abf5 1156 0 2025-02-13 15:40:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.8.18 csi-node-driver-wl2j2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] 
calie6b744c0a5c [] []}} ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Namespace="calico-system" Pod="csi-node-driver-wl2j2" WorkloadEndpoint="10.200.8.18-k8s-csi--node--driver--wl2j2-" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.495 [INFO][3602] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Namespace="calico-system" Pod="csi-node-driver-wl2j2" WorkloadEndpoint="10.200.8.18-k8s-csi--node--driver--wl2j2-eth0" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.525 [INFO][3613] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" HandleID="k8s-pod-network.edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Workload="10.200.8.18-k8s-csi--node--driver--wl2j2-eth0" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.546 [INFO][3613] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" HandleID="k8s-pod-network.edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Workload="10.200.8.18-k8s-csi--node--driver--wl2j2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310820), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.8.18", "pod":"csi-node-driver-wl2j2", "timestamp":"2025-02-13 15:41:06.525444453 +0000 UTC"}, Hostname:"10.200.8.18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.546 [INFO][3613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.546 [INFO][3613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.546 [INFO][3613] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.18' Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.548 [INFO][3613] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" host="10.200.8.18" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.553 [INFO][3613] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.18" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.562 [INFO][3613] ipam/ipam.go 489: Trying affinity for 192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.570 [INFO][3613] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.573 [INFO][3613] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.573 [INFO][3613] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" host="10.200.8.18" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.574 [INFO][3613] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.581 [INFO][3613] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" host="10.200.8.18" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.602 [INFO][3613] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.129/26] block=192.168.60.128/26 handle="k8s-pod-network.edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" host="10.200.8.18" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.602 [INFO][3613] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.129/26] handle="k8s-pod-network.edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" host="10.200.8.18" Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.602 [INFO][3613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:41:06.621053 containerd[1726]: 2025-02-13 15:41:06.602 [INFO][3613] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.129/26] IPv6=[] ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" HandleID="k8s-pod-network.edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Workload="10.200.8.18-k8s-csi--node--driver--wl2j2-eth0" Feb 13 15:41:06.623519 containerd[1726]: 2025-02-13 15:41:06.604 [INFO][3602] cni-plugin/k8s.go 386: Populated endpoint ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Namespace="calico-system" Pod="csi-node-driver-wl2j2" WorkloadEndpoint="10.200.8.18-k8s-csi--node--driver--wl2j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.18-k8s-csi--node--driver--wl2j2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"357ee354-ebda-4e13-a2f3-9c1549b2abf5", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 40, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.18", ContainerID:"", Pod:"csi-node-driver-wl2j2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie6b744c0a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:41:06.623519 containerd[1726]: 2025-02-13 15:41:06.604 [INFO][3602] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.129/32] ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Namespace="calico-system" Pod="csi-node-driver-wl2j2" WorkloadEndpoint="10.200.8.18-k8s-csi--node--driver--wl2j2-eth0" Feb 13 15:41:06.623519 containerd[1726]: 2025-02-13 15:41:06.604 [INFO][3602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6b744c0a5c ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Namespace="calico-system" Pod="csi-node-driver-wl2j2" WorkloadEndpoint="10.200.8.18-k8s-csi--node--driver--wl2j2-eth0" Feb 13 15:41:06.623519 containerd[1726]: 2025-02-13 15:41:06.607 [INFO][3602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Namespace="calico-system" Pod="csi-node-driver-wl2j2" WorkloadEndpoint="10.200.8.18-k8s-csi--node--driver--wl2j2-eth0" Feb 13 15:41:06.623519 containerd[1726]: 2025-02-13 15:41:06.607 [INFO][3602] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Namespace="calico-system" Pod="csi-node-driver-wl2j2" 
WorkloadEndpoint="10.200.8.18-k8s-csi--node--driver--wl2j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.18-k8s-csi--node--driver--wl2j2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"357ee354-ebda-4e13-a2f3-9c1549b2abf5", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 40, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.18", ContainerID:"edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba", Pod:"csi-node-driver-wl2j2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie6b744c0a5c", MAC:"7a:5c:4e:3e:37:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:41:06.623519 containerd[1726]: 2025-02-13 15:41:06.618 [INFO][3602] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba" Namespace="calico-system" Pod="csi-node-driver-wl2j2" WorkloadEndpoint="10.200.8.18-k8s-csi--node--driver--wl2j2-eth0" Feb 13 15:41:06.678169 containerd[1726]: time="2025-02-13T15:41:06.677485697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:41:06.678509 containerd[1726]: time="2025-02-13T15:41:06.678308820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:41:06.678509 containerd[1726]: time="2025-02-13T15:41:06.678334121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:06.678509 containerd[1726]: time="2025-02-13T15:41:06.678405423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:06.699082 systemd[1]: Started cri-containerd-edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba.scope - libcontainer container edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba. 
Feb 13 15:41:06.721852 containerd[1726]: time="2025-02-13T15:41:06.721756333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wl2j2,Uid:357ee354-ebda-4e13-a2f3-9c1549b2abf5,Namespace:calico-system,Attempt:9,} returns sandbox id \"edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba\"" Feb 13 15:41:06.946188 systemd-networkd[1476]: calif279d7a677d: Gained IPv6LL Feb 13 15:41:07.050542 kubelet[2557]: E0213 15:41:07.050481 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:07.253929 systemd[1]: run-containerd-runc-k8s.io-c54cae94eb82d9da0b9e766d0325fc3e4c0cf84dde82a2e8952421535d8ac856-runc.bz6rLx.mount: Deactivated successfully. Feb 13 15:41:07.906060 systemd-networkd[1476]: calie6b744c0a5c: Gained IPv6LL Feb 13 15:41:08.051178 kubelet[2557]: E0213 15:41:08.051131 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:08.149011 kernel: bpftool[3820]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:41:08.527066 systemd-networkd[1476]: vxlan.calico: Link UP Feb 13 15:41:08.527077 systemd-networkd[1476]: vxlan.calico: Gained carrier Feb 13 15:41:09.051916 kubelet[2557]: E0213 15:41:09.051849 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:10.052467 kubelet[2557]: E0213 15:41:10.052396 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:10.338517 systemd-networkd[1476]: vxlan.calico: Gained IPv6LL Feb 13 15:41:11.052633 kubelet[2557]: E0213 15:41:11.052546 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:11.066092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276027494.mount: Deactivated successfully. 
Feb 13 15:41:12.053198 kubelet[2557]: E0213 15:41:12.053143 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:12.378053 containerd[1726]: time="2025-02-13T15:41:12.377995858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:12.380775 containerd[1726]: time="2025-02-13T15:41:12.380704034Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 15:41:12.388735 containerd[1726]: time="2025-02-13T15:41:12.388662658Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:12.396048 containerd[1726]: time="2025-02-13T15:41:12.395946763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:12.397406 containerd[1726]: time="2025-02-13T15:41:12.397369403Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 6.393658713s" Feb 13 15:41:12.397519 containerd[1726]: time="2025-02-13T15:41:12.397407504Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 15:41:12.398987 containerd[1726]: time="2025-02-13T15:41:12.398447133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:41:12.399791 containerd[1726]: time="2025-02-13T15:41:12.399762770Z" level=info msg="CreateContainer within sandbox \"20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 15:41:12.442509 containerd[1726]: time="2025-02-13T15:41:12.442452072Z" level=info msg="CreateContainer within sandbox \"20e965f083268ef6ca681ae20f75c6fdf5d47d2728eec1591aa2ec5e288074d1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2d200fd925f00ab090cf21533b0d1f8a8868e51db1892a2552dbb713d2cb8116\"" Feb 13 15:41:12.443197 containerd[1726]: time="2025-02-13T15:41:12.443056589Z" level=info msg="StartContainer for \"2d200fd925f00ab090cf21533b0d1f8a8868e51db1892a2552dbb713d2cb8116\"" Feb 13 15:41:12.481103 systemd[1]: Started cri-containerd-2d200fd925f00ab090cf21533b0d1f8a8868e51db1892a2552dbb713d2cb8116.scope - libcontainer container 2d200fd925f00ab090cf21533b0d1f8a8868e51db1892a2552dbb713d2cb8116. 
Feb 13 15:41:12.508406 containerd[1726]: time="2025-02-13T15:41:12.508361926Z" level=info msg="StartContainer for \"2d200fd925f00ab090cf21533b0d1f8a8868e51db1892a2552dbb713d2cb8116\" returns successfully" Feb 13 15:41:13.054079 kubelet[2557]: E0213 15:41:13.054010 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:13.256310 kubelet[2557]: I0213 15:41:13.256260 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-2dgjq" podStartSLOduration=16.861370327 podStartE2EDuration="23.256214473s" podCreationTimestamp="2025-02-13 15:40:50 +0000 UTC" firstStartedPulling="2025-02-13 15:41:06.003293178 +0000 UTC m=+31.909265328" lastFinishedPulling="2025-02-13 15:41:12.398137324 +0000 UTC m=+38.304109474" observedRunningTime="2025-02-13 15:41:13.25610417 +0000 UTC m=+39.162076220" watchObservedRunningTime="2025-02-13 15:41:13.256214473 +0000 UTC m=+39.162186623" Feb 13 15:41:14.054469 kubelet[2557]: E0213 15:41:14.054384 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:14.178349 containerd[1726]: time="2025-02-13T15:41:14.178292222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:14.181265 containerd[1726]: time="2025-02-13T15:41:14.181197604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 15:41:14.184776 containerd[1726]: time="2025-02-13T15:41:14.184718603Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:14.193249 containerd[1726]: time="2025-02-13T15:41:14.193188742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:14.194290 containerd[1726]: time="2025-02-13T15:41:14.193764558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.795276824s" Feb 13 15:41:14.194290 containerd[1726]: time="2025-02-13T15:41:14.193805159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 15:41:14.195687 containerd[1726]: time="2025-02-13T15:41:14.195652611Z" level=info msg="CreateContainer within sandbox \"edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:41:14.258306 containerd[1726]: time="2025-02-13T15:41:14.258252373Z" level=info msg="CreateContainer within sandbox \"edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a661ac3caf3164dd08d333c76ef96aec7ff12cb182672d66bf443ea38200b2df\"" Feb 13 15:41:14.258830 containerd[1726]: time="2025-02-13T15:41:14.258795088Z" level=info msg="StartContainer for 
\"a661ac3caf3164dd08d333c76ef96aec7ff12cb182672d66bf443ea38200b2df\"" Feb 13 15:41:14.290188 systemd[1]: run-containerd-runc-k8s.io-a661ac3caf3164dd08d333c76ef96aec7ff12cb182672d66bf443ea38200b2df-runc.vVGwpn.mount: Deactivated successfully. Feb 13 15:41:14.297056 systemd[1]: Started cri-containerd-a661ac3caf3164dd08d333c76ef96aec7ff12cb182672d66bf443ea38200b2df.scope - libcontainer container a661ac3caf3164dd08d333c76ef96aec7ff12cb182672d66bf443ea38200b2df. Feb 13 15:41:14.328321 containerd[1726]: time="2025-02-13T15:41:14.328175040Z" level=info msg="StartContainer for \"a661ac3caf3164dd08d333c76ef96aec7ff12cb182672d66bf443ea38200b2df\" returns successfully" Feb 13 15:41:14.329822 containerd[1726]: time="2025-02-13T15:41:14.329571180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:41:15.030199 kubelet[2557]: E0213 15:41:15.030145 2557 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:15.055505 kubelet[2557]: E0213 15:41:15.055429 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:15.900139 containerd[1726]: time="2025-02-13T15:41:15.899701469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:15.902241 containerd[1726]: time="2025-02-13T15:41:15.902169644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 15:41:15.904872 containerd[1726]: time="2025-02-13T15:41:15.904815624Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:15.915466 containerd[1726]: time="2025-02-13T15:41:15.915402843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:15.916260 containerd[1726]: time="2025-02-13T15:41:15.916091464Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.586461683s" Feb 13 15:41:15.916260 containerd[1726]: time="2025-02-13T15:41:15.916131765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 15:41:15.918163 containerd[1726]: time="2025-02-13T15:41:15.918128926Z" level=info msg="CreateContainer within sandbox \"edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:41:15.978881 containerd[1726]: time="2025-02-13T15:41:15.978825560Z" level=info msg="CreateContainer within sandbox \"edd4c3d0240e07016d14c211fe78c3bc11cb925544ef53d1682f31afd6c77bba\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"6c30d8008ef06e856935d38fe65677a6a631430d5f48bf5538f889c4f6b01912\"" Feb 13 15:41:15.979505 containerd[1726]: time="2025-02-13T15:41:15.979465579Z" level=info msg="StartContainer for \"6c30d8008ef06e856935d38fe65677a6a631430d5f48bf5538f889c4f6b01912\"" Feb 13 15:41:16.017101 systemd[1]: Started cri-containerd-6c30d8008ef06e856935d38fe65677a6a631430d5f48bf5538f889c4f6b01912.scope - libcontainer container 6c30d8008ef06e856935d38fe65677a6a631430d5f48bf5538f889c4f6b01912. Feb 13 15:41:16.049954 containerd[1726]: time="2025-02-13T15:41:16.049312789Z" level=info msg="StartContainer for \"6c30d8008ef06e856935d38fe65677a6a631430d5f48bf5538f889c4f6b01912\" returns successfully" Feb 13 15:41:16.056432 kubelet[2557]: E0213 15:41:16.056361 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:16.157664 kubelet[2557]: I0213 15:41:16.157528 2557 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:41:16.157664 kubelet[2557]: I0213 15:41:16.157568 2557 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:41:16.277439 kubelet[2557]: I0213 15:41:16.277398 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-wl2j2" podStartSLOduration=32.083881971 podStartE2EDuration="41.277358479s" podCreationTimestamp="2025-02-13 15:40:35 +0000 UTC" firstStartedPulling="2025-02-13 15:41:06.723175873 +0000 UTC m=+32.629148123" lastFinishedPulling="2025-02-13 15:41:15.916652581 +0000 UTC m=+41.822624631" observedRunningTime="2025-02-13 15:41:16.277200374 +0000 UTC m=+42.183172524" watchObservedRunningTime="2025-02-13 15:41:16.277358479 +0000 UTC m=+42.183330529" Feb 13 15:41:17.056974 kubelet[2557]: E0213 15:41:17.056880 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:18.057873 kubelet[2557]: E0213 15:41:18.057810 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:19.058852 kubelet[2557]: E0213 15:41:19.058784 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:20.059801 kubelet[2557]: E0213 15:41:20.059741 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:21.060500 kubelet[2557]: E0213 15:41:21.060431 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:21.585916 kubelet[2557]: I0213 15:41:21.585843 2557 topology_manager.go:215] "Topology Admit Handler" podUID="6a13fa54-a237-4a9b-9bb0-12791e23176f" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 15:41:21.591872 systemd[1]: Created slice kubepods-besteffort-pod6a13fa54_a237_4a9b_9bb0_12791e23176f.slice - libcontainer container kubepods-besteffort-pod6a13fa54_a237_4a9b_9bb0_12791e23176f.slice. 
Feb 13 15:41:21.718243 kubelet[2557]: I0213 15:41:21.718107 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj7sx\" (UniqueName: \"kubernetes.io/projected/6a13fa54-a237-4a9b-9bb0-12791e23176f-kube-api-access-qj7sx\") pod \"nfs-server-provisioner-0\" (UID: \"6a13fa54-a237-4a9b-9bb0-12791e23176f\") " pod="default/nfs-server-provisioner-0" Feb 13 15:41:21.718243 kubelet[2557]: I0213 15:41:21.718181 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6a13fa54-a237-4a9b-9bb0-12791e23176f-data\") pod \"nfs-server-provisioner-0\" (UID: \"6a13fa54-a237-4a9b-9bb0-12791e23176f\") " pod="default/nfs-server-provisioner-0" Feb 13 15:41:21.895850 containerd[1726]: time="2025-02-13T15:41:21.895795225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6a13fa54-a237-4a9b-9bb0-12791e23176f,Namespace:default,Attempt:0,}" Feb 13 15:41:22.058490 systemd-networkd[1476]: cali60e51b789ff: Link UP Feb 13 15:41:22.059618 systemd-networkd[1476]: cali60e51b789ff: Gained carrier Feb 13 15:41:22.062926 kubelet[2557]: E0213 15:41:22.061571 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:21.989 [INFO][4082] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.18-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 6a13fa54-a237-4a9b-9bb0-12791e23176f 1373 0 2025-02-13 15:41:21 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.8.18 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.18-k8s-nfs--server--provisioner--0-" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:21.989 [INFO][4082] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.18-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.015 [INFO][4092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" HandleID="k8s-pod-network.67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Workload="10.200.8.18-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.026 [INFO][4092] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" HandleID="k8s-pod-network.67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Workload="10.200.8.18-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002914b0), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.18", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 15:41:22.01512343 +0000 UTC"}, Hostname:"10.200.8.18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.026 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.026 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.026 [INFO][4092] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.18' Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.028 [INFO][4092] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" host="10.200.8.18" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.031 [INFO][4092] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.18" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.035 [INFO][4092] ipam/ipam.go 489: Trying affinity for 192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.036 [INFO][4092] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.038 [INFO][4092] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.038 [INFO][4092] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" host="10.200.8.18" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.039 [INFO][4092] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.044 [INFO][4092] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" host="10.200.8.18" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.053 [INFO][4092] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.131/26] block=192.168.60.128/26 handle="k8s-pod-network.67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" host="10.200.8.18" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.053 [INFO][4092] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.131/26] handle="k8s-pod-network.67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" host="10.200.8.18" Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.053 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:41:22.071053 containerd[1726]: 2025-02-13 15:41:22.053 [INFO][4092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.131/26] IPv6=[] ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" HandleID="k8s-pod-network.67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Workload="10.200.8.18-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:41:22.072057 containerd[1726]: 2025-02-13 15:41:22.054 [INFO][4082] cni-plugin/k8s.go 386: Populated endpoint ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.18-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.18-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6a13fa54-a237-4a9b-9bb0-12791e23176f", ResourceVersion:"1373", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.18", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:41:22.072057 containerd[1726]: 2025-02-13 15:41:22.054 [INFO][4082] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.131/32] ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.18-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:41:22.072057 containerd[1726]: 2025-02-13 15:41:22.055 [INFO][4082] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.18-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:41:22.072057 containerd[1726]: 2025-02-13 15:41:22.058 [INFO][4082] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.18-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:41:22.072362 containerd[1726]: 2025-02-13 15:41:22.058 [INFO][4082] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.18-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.18-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6a13fa54-a237-4a9b-9bb0-12791e23176f", ResourceVersion:"1373", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.18", ContainerID:"67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"7e:e3:65:fd:92:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:41:22.072362 containerd[1726]: 2025-02-13 15:41:22.069 [INFO][4082] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.18-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:41:22.102735 containerd[1726]: time="2025-02-13T15:41:22.102174460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:41:22.102735 containerd[1726]: time="2025-02-13T15:41:22.102226262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:41:22.102735 containerd[1726]: time="2025-02-13T15:41:22.102244962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:22.102735 containerd[1726]: time="2025-02-13T15:41:22.102329665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:22.128076 systemd[1]: Started cri-containerd-67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a.scope - libcontainer container 67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a. 
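The WorkloadEndpoint dumps above list the nfs-server-provisioner ports twice: once in decimal in the endpoint summary ({nfs TCP 2049 0} ... {statd-udp UDP 662 0}) and once as hexadecimal Port fields inside the v3.WorkloadEndpointPort structs. A quick check that the two encodings describe the same port set:

# hex values copied from the WorkloadEndpointPort structs above
hex_ports = {"nfs": 0x801, "nlockmgr": 0x8023, "mountd": 0x4e50,
             "rquotad": 0x36b, "rpcbind": 0x6f, "statd": 0x296}
decimal_ports = {"nfs": 2049, "nlockmgr": 32803, "mountd": 20048,
                 "rquotad": 875, "rpcbind": 111, "statd": 662}

assert hex_ports == decimal_ports   # both spellings name the same NFS ports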
Feb 13 15:41:22.167553 containerd[1726]: time="2025-02-13T15:41:22.167390430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6a13fa54-a237-4a9b-9bb0-12791e23176f,Namespace:default,Attempt:0,} returns sandbox id \"67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a\"" Feb 13 15:41:22.169248 containerd[1726]: time="2025-02-13T15:41:22.169088782Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 15:41:23.062728 kubelet[2557]: E0213 15:41:23.062554 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:23.202167 systemd-networkd[1476]: cali60e51b789ff: Gained IPv6LL Feb 13 15:41:24.063062 kubelet[2557]: E0213 15:41:24.063014 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:24.651602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108888841.mount: Deactivated successfully. Feb 13 15:41:25.064186 kubelet[2557]: E0213 15:41:25.064030 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:26.064284 kubelet[2557]: E0213 15:41:26.064174 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:27.064527 kubelet[2557]: E0213 15:41:27.064457 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:28.064754 kubelet[2557]: E0213 15:41:28.064707 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:28.109523 containerd[1726]: time="2025-02-13T15:41:28.109459903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:28.112755 containerd[1726]: time="2025-02-13T15:41:28.112686098Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Feb 13 15:41:28.115927 containerd[1726]: time="2025-02-13T15:41:28.115860191Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:28.121401 containerd[1726]: time="2025-02-13T15:41:28.121345653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:28.125935 containerd[1726]: time="2025-02-13T15:41:28.124164136Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.955033852s" Feb 13 15:41:28.125935 containerd[1726]: time="2025-02-13T15:41:28.124208037Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 15:41:28.130420 containerd[1726]: 
time="2025-02-13T15:41:28.130388419Z" level=info msg="CreateContainer within sandbox \"67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 15:41:28.165256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959779326.mount: Deactivated successfully. Feb 13 15:41:28.174212 containerd[1726]: time="2025-02-13T15:41:28.174172706Z" level=info msg="CreateContainer within sandbox \"67a929592db5c6ff79e4df6ccd74ff915e2f5eb8dfada8fcae2f3b735ae8394a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d914a75f63701775d32fca6fba3c75c3d127765b54f65a23265f2fbea3bef96d\"" Feb 13 15:41:28.174736 containerd[1726]: time="2025-02-13T15:41:28.174691521Z" level=info msg="StartContainer for \"d914a75f63701775d32fca6fba3c75c3d127765b54f65a23265f2fbea3bef96d\"" Feb 13 15:41:28.212069 systemd[1]: Started cri-containerd-d914a75f63701775d32fca6fba3c75c3d127765b54f65a23265f2fbea3bef96d.scope - libcontainer container d914a75f63701775d32fca6fba3c75c3d127765b54f65a23265f2fbea3bef96d. Feb 13 15:41:28.242000 containerd[1726]: time="2025-02-13T15:41:28.241936198Z" level=info msg="StartContainer for \"d914a75f63701775d32fca6fba3c75c3d127765b54f65a23265f2fbea3bef96d\" returns successfully" Feb 13 15:41:29.065927 kubelet[2557]: E0213 15:41:29.065856 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:30.066723 kubelet[2557]: E0213 15:41:30.066662 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:31.067268 kubelet[2557]: E0213 15:41:31.067189 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:32.068408 kubelet[2557]: E0213 15:41:32.068352 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:33.068966 kubelet[2557]: E0213 15:41:33.068913 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:34.069854 kubelet[2557]: E0213 15:41:34.069785 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:35.030875 kubelet[2557]: E0213 15:41:35.030813 2557 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:35.053452 containerd[1726]: time="2025-02-13T15:41:35.053228521Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:41:35.053452 containerd[1726]: time="2025-02-13T15:41:35.053369225Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:41:35.053452 containerd[1726]: time="2025-02-13T15:41:35.053389526Z" level=info msg="StopPodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:41:35.054188 containerd[1726]: time="2025-02-13T15:41:35.053887140Z" level=info msg="RemovePodSandbox for \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 15:41:35.054188 containerd[1726]: time="2025-02-13T15:41:35.053935141Z" level=info msg="Forcibly stopping sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\"" Feb 13 
15:41:35.054188 containerd[1726]: time="2025-02-13T15:41:35.054012943Z" level=info msg="TearDown network for sandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" successfully" Feb 13 15:41:35.070090 kubelet[2557]: E0213 15:41:35.070041 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:35.586554 systemd[1]: run-containerd-runc-k8s.io-c54cae94eb82d9da0b9e766d0325fc3e4c0cf84dde82a2e8952421535d8ac856-runc.HIcbd6.mount: Deactivated successfully. Feb 13 15:41:35.649298 kubelet[2557]: I0213 15:41:35.649252 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=8.691725925 podStartE2EDuration="14.64921365s" podCreationTimestamp="2025-02-13 15:41:21 +0000 UTC" firstStartedPulling="2025-02-13 15:41:22.168773572 +0000 UTC m=+48.074745722" lastFinishedPulling="2025-02-13 15:41:28.126261397 +0000 UTC m=+54.032233447" observedRunningTime="2025-02-13 15:41:28.308413752 +0000 UTC m=+54.214385802" watchObservedRunningTime="2025-02-13 15:41:35.64921365 +0000 UTC m=+61.555185700" Feb 13 15:41:36.070655 kubelet[2557]: E0213 15:41:36.070599 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:37.071584 kubelet[2557]: E0213 15:41:37.071517 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:38.072132 kubelet[2557]: E0213 15:41:38.072080 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:39.072798 kubelet[2557]: E0213 15:41:39.072730 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:40.058303 containerd[1726]: time="2025-02-13T15:41:40.057244600Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
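The kubelet error that recurs once a second throughout this log ("Unable to read config path ... /etc/kubernetes/manifests") comes from the static-pod file source noticing that its watch directory does not exist; it is benign when no static pods are expected on the node. A minimal sketch of silencing it, assuming the kubelet's staticPodPath is left at the path shown in the log and the command runs with sufficient privileges:

from pathlib import Path

# An empty directory is enough: the kubelet only watches this path for static
# pod manifests, and the "path does not exist" message stops once it is readable.
Path("/etc/kubernetes/manifests").mkdir(parents=True, exist_ok=True)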
Feb 13 15:41:40.058303 containerd[1726]: time="2025-02-13T15:41:40.057354503Z" level=info msg="RemovePodSandbox \"89f9fc3bd733659c7a8e06cda2fe6a7595d7b57540b2b8a0faf838ba9ca346f1\" returns successfully" Feb 13 15:41:40.059530 containerd[1726]: time="2025-02-13T15:41:40.059483464Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:41:40.059677 containerd[1726]: time="2025-02-13T15:41:40.059631968Z" level=info msg="TearDown network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" successfully" Feb 13 15:41:40.059677 containerd[1726]: time="2025-02-13T15:41:40.059650469Z" level=info msg="StopPodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" returns successfully" Feb 13 15:41:40.060201 containerd[1726]: time="2025-02-13T15:41:40.060081181Z" level=info msg="RemovePodSandbox for \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:41:40.060201 containerd[1726]: time="2025-02-13T15:41:40.060122682Z" level=info msg="Forcibly stopping sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\"" Feb 13 15:41:40.060353 containerd[1726]: time="2025-02-13T15:41:40.060245386Z" level=info msg="TearDown network for sandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" successfully" Feb 13 15:41:40.073434 kubelet[2557]: E0213 15:41:40.073380 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:41.074288 kubelet[2557]: E0213 15:41:41.074219 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:42.075506 kubelet[2557]: E0213 15:41:42.075448 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:43.076232 kubelet[2557]: E0213 15:41:43.076159 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:44.077239 kubelet[2557]: E0213 15:41:44.077174 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:45.078257 kubelet[2557]: E0213 15:41:45.078192 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:46.552332 kubelet[2557]: E0213 15:41:46.078720 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:46.667095 containerd[1726]: time="2025-02-13T15:41:46.667029757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:41:46.670971 containerd[1726]: time="2025-02-13T15:41:46.667119960Z" level=info msg="RemovePodSandbox \"d29571d857067c9fc3697e902e89f5d4da4431f717089b6929c4d9d8fff1dd4a\" returns successfully" Feb 13 15:41:46.670971 containerd[1726]: time="2025-02-13T15:41:46.667728577Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" Feb 13 15:41:46.670971 containerd[1726]: time="2025-02-13T15:41:46.667881482Z" level=info msg="TearDown network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" successfully" Feb 13 15:41:46.670971 containerd[1726]: time="2025-02-13T15:41:46.667924383Z" level=info msg="StopPodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" returns successfully" Feb 13 15:41:46.670971 containerd[1726]: time="2025-02-13T15:41:46.668340495Z" level=info msg="RemovePodSandbox for \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" Feb 13 15:41:46.670971 containerd[1726]: time="2025-02-13T15:41:46.668377996Z" level=info msg="Forcibly stopping sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\"" Feb 13 15:41:46.670971 containerd[1726]: time="2025-02-13T15:41:46.668474098Z" level=info msg="TearDown network for sandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" successfully" Feb 13 15:41:46.808645 containerd[1726]: time="2025-02-13T15:41:46.808120386Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:41:46.808645 containerd[1726]: time="2025-02-13T15:41:46.808204589Z" level=info msg="RemovePodSandbox \"e69e36c0752aa07c32c7ac62162ccba8d5f61549c6c215ac8e9c1521fd2332a6\" returns successfully" Feb 13 15:41:46.808860 containerd[1726]: time="2025-02-13T15:41:46.808832607Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\"" Feb 13 15:41:46.809139 containerd[1726]: time="2025-02-13T15:41:46.808981811Z" level=info msg="TearDown network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" successfully" Feb 13 15:41:46.809139 containerd[1726]: time="2025-02-13T15:41:46.809004112Z" level=info msg="StopPodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" returns successfully" Feb 13 15:41:46.809645 containerd[1726]: time="2025-02-13T15:41:46.809605429Z" level=info msg="RemovePodSandbox for \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\"" Feb 13 15:41:46.809645 containerd[1726]: time="2025-02-13T15:41:46.809641030Z" level=info msg="Forcibly stopping sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\"" Feb 13 15:41:46.809839 containerd[1726]: time="2025-02-13T15:41:46.809726432Z" level=info msg="TearDown network for sandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" successfully" Feb 13 15:41:46.920381 containerd[1726]: time="2025-02-13T15:41:46.920322791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:41:46.920611 containerd[1726]: time="2025-02-13T15:41:46.920400693Z" level=info msg="RemovePodSandbox \"4db848cbb2c36526fe56ed6232563eb5cdb34090b7cfea589e2627faed3357c0\" returns successfully" Feb 13 15:41:46.921031 containerd[1726]: time="2025-02-13T15:41:46.920976509Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\"" Feb 13 15:41:46.921153 containerd[1726]: time="2025-02-13T15:41:46.921107313Z" level=info msg="TearDown network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" successfully" Feb 13 15:41:46.921153 containerd[1726]: time="2025-02-13T15:41:46.921123413Z" level=info msg="StopPodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" returns successfully" Feb 13 15:41:46.921487 containerd[1726]: time="2025-02-13T15:41:46.921455323Z" level=info msg="RemovePodSandbox for \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\"" Feb 13 15:41:46.921580 containerd[1726]: time="2025-02-13T15:41:46.921487324Z" level=info msg="Forcibly stopping sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\"" Feb 13 15:41:46.921626 containerd[1726]: time="2025-02-13T15:41:46.921567126Z" level=info msg="TearDown network for sandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" successfully" Feb 13 15:41:47.015456 containerd[1726]: time="2025-02-13T15:41:47.015375605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:41:47.015456 containerd[1726]: time="2025-02-13T15:41:47.015464308Z" level=info msg="RemovePodSandbox \"87ee84f02a77ac494e56adb31d8cdfa252cb0f3f57cf8f8805f4ea115ac63cb2\" returns successfully" Feb 13 15:41:47.016146 containerd[1726]: time="2025-02-13T15:41:47.016079525Z" level=info msg="StopPodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\"" Feb 13 15:41:47.016387 containerd[1726]: time="2025-02-13T15:41:47.016217829Z" level=info msg="TearDown network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" successfully" Feb 13 15:41:47.016387 containerd[1726]: time="2025-02-13T15:41:47.016236330Z" level=info msg="StopPodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" returns successfully" Feb 13 15:41:47.016931 containerd[1726]: time="2025-02-13T15:41:47.016803046Z" level=info msg="RemovePodSandbox for \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\"" Feb 13 15:41:47.016931 containerd[1726]: time="2025-02-13T15:41:47.016838947Z" level=info msg="Forcibly stopping sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\"" Feb 13 15:41:47.017087 containerd[1726]: time="2025-02-13T15:41:47.016959950Z" level=info msg="TearDown network for sandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" successfully" Feb 13 15:41:47.079855 kubelet[2557]: E0213 15:41:47.079699 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:47.166401 containerd[1726]: time="2025-02-13T15:41:47.166326816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\": an error occurred when try to find 
sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:41:47.166636 containerd[1726]: time="2025-02-13T15:41:47.166430319Z" level=info msg="RemovePodSandbox \"adceba5ac803bf23495ef3d8e13dd3bbe650ef176fdce38e180e4324bbce9379\" returns successfully" Feb 13 15:41:47.167293 containerd[1726]: time="2025-02-13T15:41:47.167251042Z" level=info msg="StopPodSandbox for \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\"" Feb 13 15:41:47.167445 containerd[1726]: time="2025-02-13T15:41:47.167411547Z" level=info msg="TearDown network for sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\" successfully" Feb 13 15:41:47.167445 containerd[1726]: time="2025-02-13T15:41:47.167431747Z" level=info msg="StopPodSandbox for \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\" returns successfully" Feb 13 15:41:47.168088 containerd[1726]: time="2025-02-13T15:41:47.168002164Z" level=info msg="RemovePodSandbox for \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\"" Feb 13 15:41:47.168088 containerd[1726]: time="2025-02-13T15:41:47.168040865Z" level=info msg="Forcibly stopping sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\"" Feb 13 15:41:47.168302 containerd[1726]: time="2025-02-13T15:41:47.168149868Z" level=info msg="TearDown network for sandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\" successfully" Feb 13 15:41:47.208244 containerd[1726]: time="2025-02-13T15:41:47.208178411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:41:47.208471 containerd[1726]: time="2025-02-13T15:41:47.208260413Z" level=info msg="RemovePodSandbox \"49e33464cdd4999a50e0c49b0abddad6c279cc3dadf3161fcea9b7a42b9d2bb5\" returns successfully" Feb 13 15:41:47.208941 containerd[1726]: time="2025-02-13T15:41:47.208879931Z" level=info msg="StopPodSandbox for \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\"" Feb 13 15:41:47.209080 containerd[1726]: time="2025-02-13T15:41:47.209039135Z" level=info msg="TearDown network for sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\" successfully" Feb 13 15:41:47.209080 containerd[1726]: time="2025-02-13T15:41:47.209060036Z" level=info msg="StopPodSandbox for \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\" returns successfully" Feb 13 15:41:47.209567 containerd[1726]: time="2025-02-13T15:41:47.209539450Z" level=info msg="RemovePodSandbox for \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\"" Feb 13 15:41:47.209666 containerd[1726]: time="2025-02-13T15:41:47.209571251Z" level=info msg="Forcibly stopping sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\"" Feb 13 15:41:47.209722 containerd[1726]: time="2025-02-13T15:41:47.209665053Z" level=info msg="TearDown network for sandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\" successfully" Feb 13 15:41:47.312384 containerd[1726]: time="2025-02-13T15:41:47.312297484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:41:47.312384 containerd[1726]: time="2025-02-13T15:41:47.312380287Z" level=info msg="RemovePodSandbox \"d9028de7d231a569fb486041fb33e41c48a39ad814f2041bb47695941c7f8a21\" returns successfully" Feb 13 15:41:47.313071 containerd[1726]: time="2025-02-13T15:41:47.313027005Z" level=info msg="StopPodSandbox for \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\"" Feb 13 15:41:47.313213 containerd[1726]: time="2025-02-13T15:41:47.313185010Z" level=info msg="TearDown network for sandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\" successfully" Feb 13 15:41:47.313213 containerd[1726]: time="2025-02-13T15:41:47.313204810Z" level=info msg="StopPodSandbox for \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\" returns successfully" Feb 13 15:41:47.313685 containerd[1726]: time="2025-02-13T15:41:47.313643023Z" level=info msg="RemovePodSandbox for \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\"" Feb 13 15:41:47.313784 containerd[1726]: time="2025-02-13T15:41:47.313684324Z" level=info msg="Forcibly stopping sandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\"" Feb 13 15:41:47.313837 containerd[1726]: time="2025-02-13T15:41:47.313784727Z" level=info msg="TearDown network for sandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\" successfully" Feb 13 15:41:47.405177 containerd[1726]: time="2025-02-13T15:41:47.405060033Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:41:47.405177 containerd[1726]: time="2025-02-13T15:41:47.405156036Z" level=info msg="RemovePodSandbox \"62fd42efe84f9c77ec31e6e39eb5d48a2633b2a29bcfdc588f6be63ee4d6d998\" returns successfully" Feb 13 15:41:47.405798 containerd[1726]: time="2025-02-13T15:41:47.405702752Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\"" Feb 13 15:41:47.406026 containerd[1726]: time="2025-02-13T15:41:47.405849656Z" level=info msg="TearDown network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" successfully" Feb 13 15:41:47.406026 containerd[1726]: time="2025-02-13T15:41:47.405871256Z" level=info msg="StopPodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" returns successfully" Feb 13 15:41:47.406403 containerd[1726]: time="2025-02-13T15:41:47.406375371Z" level=info msg="RemovePodSandbox for \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\"" Feb 13 15:41:47.406525 containerd[1726]: time="2025-02-13T15:41:47.406412672Z" level=info msg="Forcibly stopping sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\"" Feb 13 15:41:47.406613 containerd[1726]: time="2025-02-13T15:41:47.406516275Z" level=info msg="TearDown network for sandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" successfully" Feb 13 15:41:47.473634 containerd[1726]: time="2025-02-13T15:41:47.473558489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:41:47.473634 containerd[1726]: time="2025-02-13T15:41:47.473639992Z" level=info msg="RemovePodSandbox \"a97fa1c0393cf22098b6ae32ac7fb7be27cc0fdae3e920041710cc0d373db358\" returns successfully" Feb 13 15:41:47.474174 containerd[1726]: time="2025-02-13T15:41:47.474146906Z" level=info msg="StopPodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\"" Feb 13 15:41:47.474288 containerd[1726]: time="2025-02-13T15:41:47.474265210Z" level=info msg="TearDown network for sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" successfully" Feb 13 15:41:47.474334 containerd[1726]: time="2025-02-13T15:41:47.474287410Z" level=info msg="StopPodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" returns successfully" Feb 13 15:41:47.474674 containerd[1726]: time="2025-02-13T15:41:47.474646921Z" level=info msg="RemovePodSandbox for \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\"" Feb 13 15:41:47.474752 containerd[1726]: time="2025-02-13T15:41:47.474676121Z" level=info msg="Forcibly stopping sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\"" Feb 13 15:41:47.474796 containerd[1726]: time="2025-02-13T15:41:47.474750923Z" level=info msg="TearDown network for sandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" successfully" Feb 13 15:41:47.567466 containerd[1726]: time="2025-02-13T15:41:47.567409470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:41:47.567687 containerd[1726]: time="2025-02-13T15:41:47.567487572Z" level=info msg="RemovePodSandbox \"1fcf3f458340331ad81c40752bd40510c48a430ee36599c160d48bc8727fa53f\" returns successfully" Feb 13 15:41:47.568113 containerd[1726]: time="2025-02-13T15:41:47.568039388Z" level=info msg="StopPodSandbox for \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\"" Feb 13 15:41:47.568215 containerd[1726]: time="2025-02-13T15:41:47.568190092Z" level=info msg="TearDown network for sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\" successfully" Feb 13 15:41:47.568215 containerd[1726]: time="2025-02-13T15:41:47.568207392Z" level=info msg="StopPodSandbox for \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\" returns successfully" Feb 13 15:41:47.568575 containerd[1726]: time="2025-02-13T15:41:47.568538802Z" level=info msg="RemovePodSandbox for \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\"" Feb 13 15:41:47.568575 containerd[1726]: time="2025-02-13T15:41:47.568570103Z" level=info msg="Forcibly stopping sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\"" Feb 13 15:41:47.568714 containerd[1726]: time="2025-02-13T15:41:47.568651105Z" level=info msg="TearDown network for sandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\" successfully" Feb 13 15:41:47.657231 containerd[1726]: time="2025-02-13T15:41:47.657053330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:41:47.657231 containerd[1726]: time="2025-02-13T15:41:47.657143632Z" level=info msg="RemovePodSandbox \"d3e6b5691a9d6ba720c29b6154741b067902ed13f231fe5b1e03ef51e7207925\" returns successfully" Feb 13 15:41:47.658062 containerd[1726]: time="2025-02-13T15:41:47.657748549Z" level=info msg="StopPodSandbox for \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\"" Feb 13 15:41:47.658062 containerd[1726]: time="2025-02-13T15:41:47.657931155Z" level=info msg="TearDown network for sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\" successfully" Feb 13 15:41:47.658062 containerd[1726]: time="2025-02-13T15:41:47.657954555Z" level=info msg="StopPodSandbox for \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\" returns successfully" Feb 13 15:41:47.658662 containerd[1726]: time="2025-02-13T15:41:47.658632875Z" level=info msg="RemovePodSandbox for \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\"" Feb 13 15:41:47.658752 containerd[1726]: time="2025-02-13T15:41:47.658719377Z" level=info msg="Forcibly stopping sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\"" Feb 13 15:41:47.659479 containerd[1726]: time="2025-02-13T15:41:47.658813880Z" level=info msg="TearDown network for sandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\" successfully" Feb 13 15:41:47.705846 containerd[1726]: time="2025-02-13T15:41:47.705768721Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:41:47.706446 containerd[1726]: time="2025-02-13T15:41:47.705866423Z" level=info msg="RemovePodSandbox \"b2366778f8448cfcc6b63df59e400825b1f77a1ed6892a277e9ae34c4df74741\" returns successfully" Feb 13 15:41:47.706568 containerd[1726]: time="2025-02-13T15:41:47.706495941Z" level=info msg="StopPodSandbox for \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\"" Feb 13 15:41:47.706689 containerd[1726]: time="2025-02-13T15:41:47.706631045Z" level=info msg="TearDown network for sandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\" successfully" Feb 13 15:41:47.706689 containerd[1726]: time="2025-02-13T15:41:47.706650846Z" level=info msg="StopPodSandbox for \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\" returns successfully" Feb 13 15:41:47.707144 containerd[1726]: time="2025-02-13T15:41:47.707114759Z" level=info msg="RemovePodSandbox for \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\"" Feb 13 15:41:47.707261 containerd[1726]: time="2025-02-13T15:41:47.707152260Z" level=info msg="Forcibly stopping sandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\"" Feb 13 15:41:47.707339 containerd[1726]: time="2025-02-13T15:41:47.707255863Z" level=info msg="TearDown network for sandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\" successfully" Feb 13 15:41:47.816219 containerd[1726]: time="2025-02-13T15:41:47.816142573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:41:47.816506 containerd[1726]: time="2025-02-13T15:41:47.816238375Z" level=info msg="RemovePodSandbox \"2b1b2079aee1ad52e44c9dda52d2e5631dbf5236ef32cd53b85c99e379664696\" returns successfully" Feb 13 15:41:48.080251 kubelet[2557]: E0213 15:41:48.080088 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:49.081065 kubelet[2557]: E0213 15:41:49.080983 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:50.081331 kubelet[2557]: E0213 15:41:50.081262 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:51.082028 kubelet[2557]: E0213 15:41:51.081935 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:52.082262 kubelet[2557]: E0213 15:41:52.082201 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:52.976577 kubelet[2557]: I0213 15:41:52.976513 2557 topology_manager.go:215] "Topology Admit Handler" podUID="6ce242f9-cdc2-44e8-b967-8e8b71130871" podNamespace="default" podName="test-pod-1" Feb 13 15:41:52.983305 systemd[1]: Created slice kubepods-besteffort-pod6ce242f9_cdc2_44e8_b967_8e8b71130871.slice - libcontainer container kubepods-besteffort-pod6ce242f9_cdc2_44e8_b967_8e8b71130871.slice. Feb 13 15:41:53.082685 kubelet[2557]: E0213 15:41:53.082621 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:53.102711 kubelet[2557]: I0213 15:41:53.102673 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-98695d11-efce-4ef4-a3c7-afa6a57f715a\" (UniqueName: \"kubernetes.io/nfs/6ce242f9-cdc2-44e8-b967-8e8b71130871-pvc-98695d11-efce-4ef4-a3c7-afa6a57f715a\") pod \"test-pod-1\" (UID: \"6ce242f9-cdc2-44e8-b967-8e8b71130871\") " pod="default/test-pod-1" Feb 13 15:41:53.102889 kubelet[2557]: I0213 15:41:53.102742 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ncll\" (UniqueName: \"kubernetes.io/projected/6ce242f9-cdc2-44e8-b967-8e8b71130871-kube-api-access-2ncll\") pod \"test-pod-1\" (UID: \"6ce242f9-cdc2-44e8-b967-8e8b71130871\") " pod="default/test-pod-1" Feb 13 15:41:53.253942 kernel: FS-Cache: Loaded Feb 13 15:41:53.325934 kernel: RPC: Registered named UNIX socket transport module. Feb 13 15:41:53.326067 kernel: RPC: Registered udp transport module. Feb 13 15:41:53.326090 kernel: RPC: Registered tcp transport module. Feb 13 15:41:53.329482 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 15:41:53.329554 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
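The run of entries between 15:41:35 and 15:41:47 above looks like the kubelet's periodic cleanup of exited pod sandboxes: each stale sandbox is stopped, torn down, forcibly stopped and removed, and the trailing "Failed to get podSandbox status ... not found" warnings only indicate that the sandbox is already gone by the time a late container event is processed. A small sketch for pulling the removed sandbox IDs out of a captured journal excerpt like this one (the file name journal.txt is hypothetical):

import re

# containerd logs the removal as: RemovePodSandbox \"<64-hex-id>\" returns successfully
text = open("journal.txt", encoding="utf-8").read()
removed = re.findall(r'RemovePodSandbox \\"([0-9a-f]{64})\\" returns successfully', text)
for sandbox_id in removed:
    print(sandbox_id[:12])   # short form, e.g. 89f9fc3bd733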
Feb 13 15:41:53.618301 kernel: NFS: Registering the id_resolver key type Feb 13 15:41:53.618496 kernel: Key type id_resolver registered Feb 13 15:41:53.618539 kernel: Key type id_legacy registered Feb 13 15:41:53.681560 nfsidmap[4317]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.1-a-a4d4c6cb32' Feb 13 15:41:53.688442 nfsidmap[4318]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.1-a-a4d4c6cb32' Feb 13 15:41:53.887435 containerd[1726]: time="2025-02-13T15:41:53.887363563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6ce242f9-cdc2-44e8-b967-8e8b71130871,Namespace:default,Attempt:0,}" Feb 13 15:41:54.035888 systemd-networkd[1476]: cali5ec59c6bf6e: Link UP Feb 13 15:41:54.036202 systemd-networkd[1476]: cali5ec59c6bf6e: Gained carrier Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:53.969 [INFO][4320] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.18-k8s-test--pod--1-eth0 default 6ce242f9-cdc2-44e8-b967-8e8b71130871 1476 0 2025-02-13 15:41:22 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.18 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.18-k8s-test--pod--1-" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:53.969 [INFO][4320] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.18-k8s-test--pod--1-eth0" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:53.994 [INFO][4331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" HandleID="k8s-pod-network.0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Workload="10.200.8.18-k8s-test--pod--1-eth0" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.004 [INFO][4331] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" HandleID="k8s-pod-network.0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Workload="10.200.8.18-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312b50), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.18", "pod":"test-pod-1", "timestamp":"2025-02-13 15:41:53.994027826 +0000 UTC"}, Hostname:"10.200.8.18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.004 [INFO][4331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.005 [INFO][4331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.005 [INFO][4331] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.18' Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.006 [INFO][4331] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" host="10.200.8.18" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.009 [INFO][4331] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.18" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.012 [INFO][4331] ipam/ipam.go 489: Trying affinity for 192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.014 [INFO][4331] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.016 [INFO][4331] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="10.200.8.18" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.016 [INFO][4331] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" host="10.200.8.18" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.017 [INFO][4331] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3 Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.022 [INFO][4331] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" host="10.200.8.18" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.030 [INFO][4331] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.132/26] block=192.168.60.128/26 handle="k8s-pod-network.0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" host="10.200.8.18" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.030 [INFO][4331] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.132/26] handle="k8s-pod-network.0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" host="10.200.8.18" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.030 [INFO][4331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
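
These ipam.go lines trace Calico's per-node block allocation for the new pod: node 10.200.8.18 already holds an affinity for the /26 block 192.168.60.128/26, the block is loaded, a single free address (192.168.60.132) is claimed under a handle derived from the sandbox ID, and the block is written back before the host-wide IPAM lock is released. A much-simplified Go sketch of "claim the first free address in an affine block"; the real allocator also handles affinity conflicts, datastore compare-and-swap retries, and falling back to other blocks:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block is a toy stand-in for Calico's allocation block: a small CIDR whose
    // addresses are tracked by ordinal (offset from the network address).
    type block struct {
        cidr      netip.Prefix
        allocated map[int]string // ordinal -> handle that claimed it
    }

    // assign claims the first free address in the block for the given handle.
    func (b *block) assign(handle string) (netip.Addr, bool) {
        size := 1 << (32 - b.cidr.Bits()) // 64 addresses for a /26
        addr := b.cidr.Addr()
        for ord := 0; ord < size; ord++ {
            if _, taken := b.allocated[ord]; !taken {
                b.allocated[ord] = handle // in Calico this is a CAS write of the block
                return addr, true
            }
            addr = addr.Next()
        }
        return netip.Addr{}, false // block exhausted; real IPAM would try another block
    }

    func main() {
        b := &block{
            cidr:      netip.MustParsePrefix("192.168.60.128/26"),
            allocated: map[int]string{0: "x", 1: "x", 2: "x", 3: "x"}, // .128-.131 already in use
        }
        ip, _ := b.assign("k8s-pod-network.0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3")
        fmt.Println(ip) // 192.168.60.132, the address claimed in the log
    }
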
Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.030 [INFO][4331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.132/26] IPv6=[] ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" HandleID="k8s-pod-network.0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Workload="10.200.8.18-k8s-test--pod--1-eth0" Feb 13 15:41:54.049990 containerd[1726]: 2025-02-13 15:41:54.032 [INFO][4320] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.18-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.18-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"6ce242f9-cdc2-44e8-b967-8e8b71130871", ResourceVersion:"1476", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.18", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:41:54.052425 containerd[1726]: 2025-02-13 15:41:54.032 [INFO][4320] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.132/32] ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.18-k8s-test--pod--1-eth0" Feb 13 15:41:54.052425 containerd[1726]: 2025-02-13 15:41:54.032 [INFO][4320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.18-k8s-test--pod--1-eth0" Feb 13 15:41:54.052425 containerd[1726]: 2025-02-13 15:41:54.036 [INFO][4320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.18-k8s-test--pod--1-eth0" Feb 13 15:41:54.052425 containerd[1726]: 2025-02-13 15:41:54.037 [INFO][4320] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.18-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.18-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"6ce242f9-cdc2-44e8-b967-8e8b71130871", ResourceVersion:"1476", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 41, 22, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.18", ContainerID:"0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"22:47:2e:a1:04:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:41:54.052425 containerd[1726]: 2025-02-13 15:41:54.048 [INFO][4320] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.18-k8s-test--pod--1-eth0" Feb 13 15:41:54.080457 containerd[1726]: time="2025-02-13T15:41:54.080214400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:41:54.080457 containerd[1726]: time="2025-02-13T15:41:54.080289002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:41:54.080457 containerd[1726]: time="2025-02-13T15:41:54.080330203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:54.081789 containerd[1726]: time="2025-02-13T15:41:54.080423706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:54.083084 kubelet[2557]: E0213 15:41:54.083053 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:54.102101 systemd[1]: Started cri-containerd-0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3.scope - libcontainer container 0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3. 
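
The two long k8s.go struct dumps spanning the previous entries are the projectcalico.org/v3 WorkloadEndpoint first being populated and then finalized with the MAC address, host-side veth name and container ID before it is written to the datastore; containerd then loads the runc v2 shim plugins and systemd places the new sandbox under a cri-containerd-<id>.scope unit. Stripped of the noise, the endpoint fields that matter for this pod are roughly the following (local stand-in types, not the real projectcalico API package):

    package main

    import "fmt"

    // Local stand-ins for the handful of v3.WorkloadEndpointSpec fields visible
    // in the dump above; the real definitions live in the projectcalico API module.
    type workloadEndpoint struct {
        Node          string
        Pod           string
        Endpoint      string   // interface inside the pod network namespace
        InterfaceName string   // host-side veth created by the CNI plugin
        MAC           string
        ContainerID   string
        IPNetworks    []string // always a /32 (or /128) for a workload
        Profiles      []string
    }

    func main() {
        ep := workloadEndpoint{
            Node:          "10.200.8.18",
            Pod:           "test-pod-1",
            Endpoint:      "eth0",
            InterfaceName: "cali5ec59c6bf6e",
            MAC:           "22:47:2e:a1:04:72",
            ContainerID:   "0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3",
            IPNetworks:    []string{"192.168.60.132/32"},
            Profiles:      []string{"kns.default", "ksa.default.default"},
        }
        fmt.Printf("%+v\n", ep)
    }
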
Feb 13 15:41:54.141531 containerd[1726]: time="2025-02-13T15:41:54.141495759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6ce242f9-cdc2-44e8-b967-8e8b71130871,Namespace:default,Attempt:0,} returns sandbox id \"0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3\"" Feb 13 15:41:54.143623 containerd[1726]: time="2025-02-13T15:41:54.143591920Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:41:54.766529 containerd[1726]: time="2025-02-13T15:41:54.766472802Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:54.768668 containerd[1726]: time="2025-02-13T15:41:54.768601163Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 15:41:54.771126 containerd[1726]: time="2025-02-13T15:41:54.771093235Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 627.467415ms" Feb 13 15:41:54.771126 containerd[1726]: time="2025-02-13T15:41:54.771125636Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 15:41:54.772884 containerd[1726]: time="2025-02-13T15:41:54.772859386Z" level=info msg="CreateContainer within sandbox \"0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 15:41:54.816576 containerd[1726]: time="2025-02-13T15:41:54.816534140Z" level=info msg="CreateContainer within sandbox \"0f14f06933edaaf5d4c48273343c8facd902b72d93ab455c0ea82e615bc17cc3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"18e39aad22de7cabffa0537b3ae34d089087e4099af11deaec9ec1b5c96e822e\"" Feb 13 15:41:54.817326 containerd[1726]: time="2025-02-13T15:41:54.817291761Z" level=info msg="StartContainer for \"18e39aad22de7cabffa0537b3ae34d089087e4099af11deaec9ec1b5c96e822e\"" Feb 13 15:41:54.852032 systemd[1]: Started cri-containerd-18e39aad22de7cabffa0537b3ae34d089087e4099af11deaec9ec1b5c96e822e.scope - libcontainer container 18e39aad22de7cabffa0537b3ae34d089087e4099af11deaec9ec1b5c96e822e. 
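
The sandbox comes up and containerd reports pulling ghcr.io/flatcar/nginx:latest in about 627 ms for a 73 MB image, with only 61 bytes read and an ImageUpdate (rather than ImageCreate) event, which suggests the image content was already in the local content store and only the manifest was re-resolved. For comparison, a hedged sketch of the same pull made directly against containerd with the 1.x Go client in the CRI "k8s.io" namespace (socket path and namespace are the conventional defaults, not values taken from this log):

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Conventional containerd socket; the CRI plugin keeps its images in the
        // "k8s.io" namespace, so this client sees the same store the kubelet uses.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // If the content is already local (as the 61-byte read above suggests),
        // this resolves the manifest and returns quickly without fetching layers.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/nginx:latest", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
    }
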
Feb 13 15:41:54.877630 containerd[1726]: time="2025-02-13T15:41:54.877527591Z" level=info msg="StartContainer for \"18e39aad22de7cabffa0537b3ae34d089087e4099af11deaec9ec1b5c96e822e\" returns successfully" Feb 13 15:41:55.030029 kubelet[2557]: E0213 15:41:55.029835 2557 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:55.083407 kubelet[2557]: E0213 15:41:55.083283 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:55.202192 systemd-networkd[1476]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 15:41:55.386303 kubelet[2557]: I0213 15:41:55.386256 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.757797853 podStartE2EDuration="33.386208095s" podCreationTimestamp="2025-02-13 15:41:22 +0000 UTC" firstStartedPulling="2025-02-13 15:41:54.142979602 +0000 UTC m=+80.048951652" lastFinishedPulling="2025-02-13 15:41:54.771389844 +0000 UTC m=+80.677361894" observedRunningTime="2025-02-13 15:41:55.385883486 +0000 UTC m=+81.291855636" watchObservedRunningTime="2025-02-13 15:41:55.386208095 +0000 UTC m=+81.292180245" Feb 13 15:41:56.084166 kubelet[2557]: E0213 15:41:56.084102 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:57.085006 kubelet[2557]: E0213 15:41:57.084966 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:58.085589 kubelet[2557]: E0213 15:41:58.085518 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:41:59.086404 kubelet[2557]: E0213 15:41:59.086362 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:42:00.087368 kubelet[2557]: E0213 15:42:00.087304 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:42:01.087732 kubelet[2557]: E0213 15:42:01.087689 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:42:02.088074 kubelet[2557]: E0213 15:42:02.088009 2557 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
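
The pod_startup_latency_tracker entry decodes cleanly from its own timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 0.628 s). The small Go check below reproduces both figures exactly; the constants are copied from the entry above, and the formula is inferred from those numbers rather than from kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }

        created := parse("2025-02-13 15:41:22 +0000 UTC")            // podCreationTimestamp
        firstPull := parse("2025-02-13 15:41:54.142979602 +0000 UTC") // firstStartedPulling
        lastPull := parse("2025-02-13 15:41:54.771389844 +0000 UTC")  // lastFinishedPulling
        running := parse("2025-02-13 15:41:55.386208095 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)
        slo := e2e - lastPull.Sub(firstPull) // end-to-end time minus image-pull time

        fmt.Println(e2e) // 33.386208095s, the podStartE2EDuration in the log
        fmt.Println(slo) // 32.757797853s, the podStartSLOduration in the log
    }
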