Jan 14 13:21:20.098762 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025 Jan 14 13:21:20.098799 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:21:20.098813 kernel: BIOS-provided physical RAM map: Jan 14 13:21:20.098824 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 14 13:21:20.098834 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 14 13:21:20.098857 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 14 13:21:20.098871 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Jan 14 13:21:20.098886 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 14 13:21:20.098897 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 14 13:21:20.098909 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 14 13:21:20.098921 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 14 13:21:20.098932 kernel: printk: bootconsole [earlyser0] enabled Jan 14 13:21:20.098943 kernel: NX (Execute Disable) protection: active Jan 14 13:21:20.098955 kernel: APIC: Static calls initialized Jan 14 13:21:20.098973 kernel: efi: EFI v2.7 by Microsoft Jan 14 13:21:20.098997 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 RNG=0x3ffd1018 Jan 14 13:21:20.099009 kernel: random: crng init done Jan 14 13:21:20.099022 kernel: secureboot: Secure boot disabled Jan 14 13:21:20.099034 kernel: SMBIOS 3.1.0 present. 
Jan 14 13:21:20.099047 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 14 13:21:20.099059 kernel: Hypervisor detected: Microsoft Hyper-V Jan 14 13:21:20.099071 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 14 13:21:20.099084 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jan 14 13:21:20.099096 kernel: Hyper-V: Nested features: 0x1e0101 Jan 14 13:21:20.099110 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 14 13:21:20.099123 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 14 13:21:20.099152 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:21:20.099165 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:21:20.099179 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 14 13:21:20.099192 kernel: tsc: Detected 2593.906 MHz processor Jan 14 13:21:20.099205 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 13:21:20.099219 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 13:21:20.099232 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 14 13:21:20.099247 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 13:21:20.099260 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 13:21:20.099277 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 14 13:21:20.099289 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 14 13:21:20.099302 kernel: Using GB pages for direct mapping Jan 14 13:21:20.099315 kernel: ACPI: Early table checksum verification disabled Jan 14 13:21:20.099329 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 14 13:21:20.099347 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.099364 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.099378 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 14 13:21:20.099391 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 13:21:20.099405 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.099420 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.099433 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.099449 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.099464 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.099477 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.099492 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.099506 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 13:21:20.099520 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 14 13:21:20.099534 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 13:21:20.099548 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14 13:21:20.099562 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Jan 14 13:21:20.099578 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 13:21:20.099592 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 13:21:20.099606 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 14 13:21:20.099620 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 13:21:20.099634 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 14 13:21:20.099648 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 14 13:21:20.099662 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 14 13:21:20.099676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 13:21:20.099692 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 14 13:21:20.099706 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 14 13:21:20.099720 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 13:21:20.099734 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 13:21:20.099748 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 13:21:20.099762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 13:21:20.099775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 13:21:20.099789 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 13:21:20.099804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 13:21:20.099820 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 13:21:20.099834 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 13:21:20.101417 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 14 13:21:20.101435 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 14 13:21:20.101445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 14 13:21:20.101455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 14 13:21:20.101464 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 14 13:21:20.101473 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 14 13:21:20.101482 kernel: Zone ranges: Jan 14 13:21:20.101497 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 13:21:20.101505 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 13:21:20.101514 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:21:20.101523 kernel: Movable zone start for each node Jan 14 13:21:20.101530 kernel: Early memory node ranges Jan 14 13:21:20.101541 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 13:21:20.101548 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 14 13:21:20.101558 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 13:21:20.101566 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:21:20.101578 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 13:21:20.101586 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 13:21:20.101595 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 13:21:20.101604 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 14 13:21:20.101612 kernel: ACPI: 
PM-Timer IO Port: 0x408 Jan 14 13:21:20.101622 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 13:21:20.101630 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 14 13:21:20.101640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 13:21:20.101648 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 13:21:20.101660 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 13:21:20.101668 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 14 13:21:20.101677 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 13:21:20.101686 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 13:21:20.101694 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 13:21:20.101705 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 13:21:20.101713 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 14 13:21:20.101723 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 14 13:21:20.101731 kernel: pcpu-alloc: [0] 0 1 Jan 14 13:21:20.101743 kernel: Hyper-V: PV spinlocks enabled Jan 14 13:21:20.101750 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 13:21:20.101761 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:21:20.101770 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 14 13:21:20.101780 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 13:21:20.101790 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 13:21:20.101799 kernel: Fallback order for Node 0: 0 Jan 14 13:21:20.101808 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 14 13:21:20.101820 kernel: Policy zone: Normal Jan 14 13:21:20.101836 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 13:21:20.101858 kernel: software IO TLB: area num 2. Jan 14 13:21:20.101873 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved) Jan 14 13:21:20.101881 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 13:21:20.101892 kernel: ftrace: allocating 37920 entries in 149 pages Jan 14 13:21:20.101900 kernel: ftrace: allocated 149 pages with 4 groups Jan 14 13:21:20.101911 kernel: Dynamic Preempt: voluntary Jan 14 13:21:20.101919 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 13:21:20.101931 kernel: rcu: RCU event tracing is enabled. Jan 14 13:21:20.101940 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 13:21:20.101953 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 13:21:20.101962 kernel: Rude variant of Tasks RCU enabled. Jan 14 13:21:20.101973 kernel: Tracing variant of Tasks RCU enabled. Jan 14 13:21:20.101981 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 14 13:21:20.101992 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 13:21:20.102002 kernel: Using NULL legacy PIC Jan 14 13:21:20.102013 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 13:21:20.102021 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 13:21:20.102032 kernel: Console: colour dummy device 80x25 Jan 14 13:21:20.102041 kernel: printk: console [tty1] enabled Jan 14 13:21:20.102050 kernel: printk: console [ttyS0] enabled Jan 14 13:21:20.102060 kernel: printk: bootconsole [earlyser0] disabled Jan 14 13:21:20.102069 kernel: ACPI: Core revision 20230628 Jan 14 13:21:20.102079 kernel: Failed to register legacy timer interrupt Jan 14 13:21:20.102087 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 13:21:20.102100 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 13:21:20.102108 kernel: Hyper-V: Using IPI hypercalls Jan 14 13:21:20.102119 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 13:21:20.102127 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 13:21:20.102138 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 13:21:20.102147 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 13:21:20.102157 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 13:21:20.102168 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 13:21:20.102176 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jan 14 13:21:20.102186 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 14 13:21:20.102194 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 14 13:21:20.102202 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 13:21:20.102210 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 13:21:20.102217 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 14 13:21:20.102225 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 14 13:21:20.102233 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 14 13:21:20.102241 kernel: RETBleed: Vulnerable Jan 14 13:21:20.102249 kernel: Speculative Store Bypass: Vulnerable Jan 14 13:21:20.102256 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:21:20.102266 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:21:20.102276 kernel: GDS: Unknown: Dependent on hypervisor status Jan 14 13:21:20.102285 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 13:21:20.102295 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 13:21:20.102306 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 13:21:20.102314 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 13:21:20.102325 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 13:21:20.102333 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 13:21:20.102343 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 13:21:20.102351 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 13:21:20.102360 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 13:21:20.102371 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 13:21:20.102380 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 14 13:21:20.102390 kernel: Freeing SMP alternatives memory: 32K Jan 14 13:21:20.102398 kernel: pid_max: default: 32768 minimum: 301 Jan 14 13:21:20.102409 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 14 13:21:20.102417 kernel: landlock: Up and running. Jan 14 13:21:20.102427 kernel: SELinux: Initializing. Jan 14 13:21:20.102435 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:21:20.102446 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:21:20.102454 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 14 13:21:20.102465 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:21:20.102475 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:21:20.102486 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:21:20.102494 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 14 13:21:20.102505 kernel: signal: max sigframe size: 3632 Jan 14 13:21:20.102513 kernel: rcu: Hierarchical SRCU implementation. Jan 14 13:21:20.102524 kernel: rcu: Max phase no-delay instances is 400. Jan 14 13:21:20.102532 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 13:21:20.102542 kernel: smp: Bringing up secondary CPUs ... Jan 14 13:21:20.102551 kernel: smpboot: x86: Booting SMP configuration: Jan 14 13:21:20.102562 kernel: .... node #0, CPUs: #1 Jan 14 13:21:20.102572 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 14 13:21:20.102581 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 14 13:21:20.102592 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 13:21:20.102600 kernel: smpboot: Max logical packages: 1 Jan 14 13:21:20.102611 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 14 13:21:20.102620 kernel: devtmpfs: initialized Jan 14 13:21:20.102630 kernel: x86/mm: Memory block size: 128MB Jan 14 13:21:20.102643 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 13:21:20.102654 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 13:21:20.102664 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 13:21:20.102675 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 13:21:20.102683 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 13:21:20.102693 kernel: audit: initializing netlink subsys (disabled) Jan 14 13:21:20.102702 kernel: audit: type=2000 audit(1736860878.028:1): state=initialized audit_enabled=0 res=1 Jan 14 13:21:20.102711 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 13:21:20.102721 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 13:21:20.102732 kernel: cpuidle: using governor menu Jan 14 13:21:20.102741 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 13:21:20.102751 kernel: dca service started, version 1.12.1 Jan 14 13:21:20.102760 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 14 13:21:20.102769 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 14 13:21:20.102779 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 13:21:20.102788 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 13:21:20.102798 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 13:21:20.102806 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 13:21:20.102819 kernel: ACPI: Added _OSI(Module Device) Jan 14 13:21:20.102827 kernel: ACPI: Added _OSI(Processor Device) Jan 14 13:21:20.102838 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 14 13:21:20.102852 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 13:21:20.102862 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 13:21:20.102870 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 14 13:21:20.102881 kernel: ACPI: Interpreter enabled Jan 14 13:21:20.102889 kernel: ACPI: PM: (supports S0 S5) Jan 14 13:21:20.102900 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 13:21:20.102913 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 13:21:20.102923 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 13:21:20.102935 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 14 13:21:20.102946 kernel: iommu: Default domain type: Translated Jan 14 13:21:20.102958 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 13:21:20.102971 kernel: efivars: Registered efivars operations Jan 14 13:21:20.102983 kernel: PCI: Using ACPI for IRQ routing Jan 14 13:21:20.102998 kernel: PCI: System does not support PCI Jan 14 13:21:20.103011 kernel: vgaarb: loaded Jan 14 13:21:20.103030 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 14 13:21:20.103044 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 13:21:20.103059 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 13:21:20.103073 kernel: 
pnp: PnP ACPI init Jan 14 13:21:20.103088 kernel: pnp: PnP ACPI: found 3 devices Jan 14 13:21:20.103102 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 13:21:20.103116 kernel: NET: Registered PF_INET protocol family Jan 14 13:21:20.103131 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 13:21:20.103145 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 13:21:20.103162 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 13:21:20.103189 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 13:21:20.103204 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 13:21:20.103217 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 13:21:20.103230 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:21:20.103243 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:21:20.103258 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 13:21:20.103273 kernel: NET: Registered PF_XDP protocol family Jan 14 13:21:20.103288 kernel: PCI: CLS 0 bytes, default 64 Jan 14 13:21:20.103306 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 13:21:20.103321 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Jan 14 13:21:20.103335 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 13:21:20.103349 kernel: Initialise system trusted keyrings Jan 14 13:21:20.103364 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 13:21:20.103379 kernel: Key type asymmetric registered Jan 14 13:21:20.103391 kernel: Asymmetric key parser 'x509' registered Jan 14 13:21:20.103405 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 14 13:21:20.103418 kernel: io scheduler mq-deadline registered Jan 14 13:21:20.103435 kernel: io scheduler kyber registered Jan 14 13:21:20.103451 kernel: io scheduler bfq registered Jan 14 13:21:20.103464 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 13:21:20.103478 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 13:21:20.103492 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 13:21:20.103506 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 13:21:20.103521 kernel: i8042: PNP: No PS/2 controller found. 
Jan 14 13:21:20.103698 kernel: rtc_cmos 00:02: registered as rtc0 Jan 14 13:21:20.103833 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:21:19 UTC (1736860879) Jan 14 13:21:20.106747 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 14 13:21:20.106770 kernel: intel_pstate: CPU model not supported Jan 14 13:21:20.106787 kernel: efifb: probing for efifb Jan 14 13:21:20.106802 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 13:21:20.106817 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 13:21:20.106832 kernel: efifb: scrolling: redraw Jan 14 13:21:20.106856 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 13:21:20.106870 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:21:20.106888 kernel: fb0: EFI VGA frame buffer device Jan 14 13:21:20.106902 kernel: pstore: Using crash dump compression: deflate Jan 14 13:21:20.106917 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 13:21:20.106930 kernel: NET: Registered PF_INET6 protocol family Jan 14 13:21:20.106945 kernel: Segment Routing with IPv6 Jan 14 13:21:20.106961 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 13:21:20.106977 kernel: NET: Registered PF_PACKET protocol family Jan 14 13:21:20.106992 kernel: Key type dns_resolver registered Jan 14 13:21:20.107007 kernel: IPI shorthand broadcast: enabled Jan 14 13:21:20.107025 kernel: sched_clock: Marking stable (910003200, 50565200)->(1198601700, -238033300) Jan 14 13:21:20.107041 kernel: registered taskstats version 1 Jan 14 13:21:20.107056 kernel: Loading compiled-in X.509 certificates Jan 14 13:21:20.107071 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 14 13:21:20.107086 kernel: Key type .fscrypt registered Jan 14 13:21:20.107101 kernel: Key type fscrypt-provisioning registered Jan 14 13:21:20.107116 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 13:21:20.107131 kernel: ima: Allocated hash algorithm: sha1 Jan 14 13:21:20.107149 kernel: ima: No architecture policies found Jan 14 13:21:20.107164 kernel: clk: Disabling unused clocks Jan 14 13:21:20.107179 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 14 13:21:20.107195 kernel: Write protecting the kernel read-only data: 36864k Jan 14 13:21:20.107210 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 14 13:21:20.107226 kernel: Run /init as init process Jan 14 13:21:20.107241 kernel: with arguments: Jan 14 13:21:20.107256 kernel: /init Jan 14 13:21:20.107275 kernel: with environment: Jan 14 13:21:20.107294 kernel: HOME=/ Jan 14 13:21:20.107317 kernel: TERM=linux Jan 14 13:21:20.107336 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 14 13:21:20.107360 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:21:20.107383 systemd[1]: Detected virtualization microsoft. Jan 14 13:21:20.107404 systemd[1]: Detected architecture x86-64. Jan 14 13:21:20.107424 systemd[1]: Running in initrd. Jan 14 13:21:20.107444 systemd[1]: No hostname configured, using default hostname. Jan 14 13:21:20.107467 systemd[1]: Hostname set to . Jan 14 13:21:20.107488 systemd[1]: Initializing machine ID from random generator. 
Jan 14 13:21:20.107509 systemd[1]: Queued start job for default target initrd.target. Jan 14 13:21:20.107530 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:21:20.107551 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:21:20.107572 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 13:21:20.107593 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:21:20.107614 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 13:21:20.107638 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 13:21:20.107662 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 14 13:21:20.107684 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 14 13:21:20.107704 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:21:20.107725 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:21:20.107746 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:21:20.107767 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:21:20.107790 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:21:20.107811 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:21:20.107832 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:21:20.107870 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:21:20.107886 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 13:21:20.107901 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 14 13:21:20.107917 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:21:20.107932 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:21:20.107948 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:21:20.107968 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:21:20.107983 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 13:21:20.107999 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:21:20.108015 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 13:21:20.108030 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 13:21:20.108046 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:21:20.108062 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:21:20.108078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:21:20.108117 systemd-journald[177]: Collecting audit messages is disabled. Jan 14 13:21:20.108152 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 13:21:20.108168 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:21:20.108184 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 14 13:21:20.108207 systemd-journald[177]: Journal started Jan 14 13:21:20.108252 systemd-journald[177]: Runtime Journal (/run/log/journal/b1ef8bb098f64736b1b38ea1799a5044) is 8.0M, max 158.8M, 150.8M free. Jan 14 13:21:20.090923 systemd-modules-load[178]: Inserted module 'overlay' Jan 14 13:21:20.112893 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:21:20.120121 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:21:20.138029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:21:20.144068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:20.152892 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:21:20.166834 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 13:21:20.169957 kernel: Bridge firewalling registered Jan 14 13:21:20.170193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:21:20.173065 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 14 13:21:20.182103 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:21:20.185100 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:21:20.190359 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:21:20.198716 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:21:20.204694 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:21:20.215002 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 13:21:20.219815 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:21:20.231978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:21:20.234950 dracut-cmdline[208]: dracut-dracut-053 Jan 14 13:21:20.241405 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:21:20.257066 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:21:20.304723 systemd-resolved[222]: Positive Trust Anchors: Jan 14 13:21:20.307429 systemd-resolved[222]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:21:20.311283 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:21:20.330182 systemd-resolved[222]: Defaulting to hostname 'linux'. Jan 14 13:21:20.337095 kernel: SCSI subsystem initialized Jan 14 13:21:20.331273 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:21:20.343321 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:21:20.349945 kernel: Loading iSCSI transport class v2.0-870. Jan 14 13:21:20.360870 kernel: iscsi: registered transport (tcp) Jan 14 13:21:20.381957 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:21:20.382017 kernel: QLogic iSCSI HBA Driver Jan 14 13:21:20.417306 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:21:20.425009 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:21:20.452719 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:21:20.452802 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:21:20.455773 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 13:21:20.494878 kernel: raid6: avx512x4 gen() 18177 MB/s Jan 14 13:21:20.513862 kernel: raid6: avx512x2 gen() 18471 MB/s Jan 14 13:21:20.532862 kernel: raid6: avx512x1 gen() 18489 MB/s Jan 14 13:21:20.551861 kernel: raid6: avx2x4 gen() 18383 MB/s Jan 14 13:21:20.570862 kernel: raid6: avx2x2 gen() 18376 MB/s Jan 14 13:21:20.590539 kernel: raid6: avx2x1 gen() 13877 MB/s Jan 14 13:21:20.590569 kernel: raid6: using algorithm avx512x1 gen() 18489 MB/s Jan 14 13:21:20.611764 kernel: raid6: .... xor() 26900 MB/s, rmw enabled Jan 14 13:21:20.611824 kernel: raid6: using avx512x2 recovery algorithm Jan 14 13:21:20.634878 kernel: xor: automatically using best checksumming function avx Jan 14 13:21:20.779874 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:21:20.789836 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:21:20.796107 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:21:20.814423 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 14 13:21:20.821480 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:21:20.833005 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 13:21:20.858428 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jan 14 13:21:20.886461 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:21:20.897982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:21:20.942005 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:21:20.956081 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 14 13:21:20.976794 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:21:20.982838 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:21:20.989333 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:21:20.994863 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:21:21.005678 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:21:21.017065 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:21:21.048160 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:21:21.060891 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 13:21:21.067888 kernel: AES CTR mode by8 optimization enabled Jan 14 13:21:21.068831 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:21:21.075980 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 13:21:21.072625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:21:21.077803 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:21:21.088939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:21:21.106083 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:21:21.106115 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 13:21:21.089195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:21.094369 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:21:21.116212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:21:21.125040 kernel: PTP clock support registered Jan 14 13:21:21.127043 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:21:21.798935 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:21:21.798971 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:21:21.798990 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:21:21.799007 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:21:21.799024 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:21:21.127133 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:21.792335 systemd-resolved[222]: Clock change detected. Flushing caches. Jan 14 13:21:21.805079 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:21:21.817757 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:21:21.825747 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 14 13:21:21.825777 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:21:21.830749 kernel: scsi host0: storvsc_host_t Jan 14 13:21:21.830958 kernel: scsi host1: storvsc_host_t Jan 14 13:21:21.833005 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:21:21.844396 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:21:21.844457 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:21:21.856207 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:21:21.856972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:21.864962 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 14 13:21:21.878756 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:21:21.884756 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 14 13:21:21.891539 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:21:21.892692 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:21:21.907574 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 13:21:21.910413 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:21:21.910436 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:21:21.921994 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:21:21.940530 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:21:21.940675 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 13:21:21.940871 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:21:21.941030 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:21:21.941189 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:21:21.941211 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 13:21:22.039769 kernel: hv_netvsc 000d3ad5-b5dd-000d-3ad5-b5dd000d3ad5 eth0: VF slot 1 added Jan 14 13:21:22.046761 kernel: hv_vmbus: registering driver hv_pci Jan 14 13:21:22.050816 kernel: hv_pci ad28d86f-100a-491c-80fa-373bada020f4: PCI VMBus probing: Using version 0x10004 Jan 14 13:21:22.096353 kernel: hv_pci ad28d86f-100a-491c-80fa-373bada020f4: PCI host bridge to bus 100a:00 Jan 14 13:21:22.096884 kernel: pci_bus 100a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 13:21:22.097086 kernel: pci_bus 100a:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:21:22.097245 kernel: pci 100a:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 13:21:22.097433 kernel: pci 100a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:21:22.097616 kernel: pci 100a:00:02.0: enabling Extended Tags Jan 14 13:21:22.097817 kernel: pci 100a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 100a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 13:21:22.097987 kernel: pci_bus 100a:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:21:22.098133 kernel: pci 100a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:21:22.259211 kernel: mlx5_core 100a:00:02.0: enabling device (0000 -> 0002) Jan 14 13:21:22.492593 kernel: mlx5_core 100a:00:02.0: firmware version: 14.30.5000 Jan 14 13:21:22.492819 kernel: hv_netvsc 000d3ad5-b5dd-000d-3ad5-b5dd000d3ad5 eth0: VF registering: eth1 Jan 14 13:21:22.493213 kernel: mlx5_core 100a:00:02.0 eth1: joined to eth0 Jan 14 13:21:22.493417 kernel: mlx5_core 100a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 13:21:22.502760 kernel: mlx5_core 100a:00:02.0 enP4106s1: renamed from eth1 Jan 14 13:21:22.529233 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 13:21:22.605850 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (457) Jan 14 13:21:22.621371 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:21:22.641283 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Jan 14 13:21:22.655750 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (441) Jan 14 13:21:22.669363 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:21:22.673201 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 13:21:22.699173 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:21:22.724409 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:21:22.733754 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:21:23.742760 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:21:23.743234 disk-uuid[600]: The operation has completed successfully. Jan 14 13:21:23.830530 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:21:23.830660 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:21:23.847882 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 14 13:21:23.854473 sh[686]: Success Jan 14 13:21:23.883829 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 13:21:24.161512 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:21:24.172862 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 14 13:21:24.178873 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 14 13:21:24.197642 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 14 13:21:24.197718 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:21:24.200909 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:21:24.203460 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:21:24.206413 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:21:24.558325 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:21:24.561709 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:21:24.572985 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:21:24.582907 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:21:24.603910 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:24.603962 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:21:24.603985 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:21:24.625002 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:21:24.634905 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 14 13:21:24.639611 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:24.647258 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:21:24.656945 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:21:24.677747 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:21:24.689891 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 14 13:21:24.710358 systemd-networkd[870]: lo: Link UP Jan 14 13:21:24.710369 systemd-networkd[870]: lo: Gained carrier Jan 14 13:21:24.712436 systemd-networkd[870]: Enumeration completed Jan 14 13:21:24.712689 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:21:24.715086 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:21:24.715090 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:21:24.720889 systemd[1]: Reached target network.target - Network. Jan 14 13:21:24.777762 kernel: mlx5_core 100a:00:02.0 enP4106s1: Link up Jan 14 13:21:24.808081 kernel: hv_netvsc 000d3ad5-b5dd-000d-3ad5-b5dd000d3ad5 eth0: Data path switched to VF: enP4106s1 Jan 14 13:21:24.808274 systemd-networkd[870]: enP4106s1: Link UP Jan 14 13:21:24.809666 systemd-networkd[870]: eth0: Link UP Jan 14 13:21:24.809882 systemd-networkd[870]: eth0: Gained carrier Jan 14 13:21:24.809895 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:21:24.818922 systemd-networkd[870]: enP4106s1: Gained carrier Jan 14 13:21:24.843789 systemd-networkd[870]: eth0: DHCPv4 address 10.200.4.36/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:21:25.790169 ignition[842]: Ignition 2.20.0 Jan 14 13:21:25.790182 ignition[842]: Stage: fetch-offline Jan 14 13:21:25.790225 ignition[842]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:25.790234 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:25.796990 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:21:25.790334 ignition[842]: parsed url from cmdline: "" Jan 14 13:21:25.790338 ignition[842]: no config URL provided Jan 14 13:21:25.790345 ignition[842]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:21:25.790355 ignition[842]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:21:25.790361 ignition[842]: failed to fetch config: resource requires networking Jan 14 13:21:25.790568 ignition[842]: Ignition finished successfully Jan 14 13:21:25.818894 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:21:25.832321 ignition[878]: Ignition 2.20.0 Jan 14 13:21:25.832331 ignition[878]: Stage: fetch Jan 14 13:21:25.832527 ignition[878]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:25.832538 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:25.832636 ignition[878]: parsed url from cmdline: "" Jan 14 13:21:25.832638 ignition[878]: no config URL provided Jan 14 13:21:25.832643 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:21:25.832649 ignition[878]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:21:25.832674 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:21:25.918440 ignition[878]: GET result: OK Jan 14 13:21:25.918534 ignition[878]: config has been read from IMDS userdata Jan 14 13:21:25.918555 ignition[878]: parsing config with SHA512: 3b40ea0a5b7b5b06f0d986b7ca480582c206085b64a6ae07ae388390398da7850ec05e6876a0b2e2214be8e67ea2da2fda88ac5ecd213cffca9a0dae9d9572df Jan 14 13:21:25.925226 unknown[878]: fetched base config from "system" Jan 14 13:21:25.925239 unknown[878]: fetched base config from "system" Jan 14 13:21:25.925546 ignition[878]: fetch: fetch complete Jan 14 13:21:25.925248 unknown[878]: fetched user config from "azure" Jan 14 13:21:25.925552 ignition[878]: fetch: fetch passed Jan 14 13:21:25.925597 ignition[878]: Ignition finished successfully Jan 14 13:21:25.938602 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:21:25.949905 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 13:21:25.966047 ignition[884]: Ignition 2.20.0 Jan 14 13:21:25.966058 ignition[884]: Stage: kargs Jan 14 13:21:25.968495 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 13:21:25.966271 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:25.966284 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:25.975966 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:21:25.966964 ignition[884]: kargs: kargs passed Jan 14 13:21:25.967008 ignition[884]: Ignition finished successfully Jan 14 13:21:25.998134 ignition[890]: Ignition 2.20.0 Jan 14 13:21:25.998145 ignition[890]: Stage: disks Jan 14 13:21:25.998357 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:26.001542 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:21:25.998370 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:26.004480 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:21:25.999029 ignition[890]: disks: disks passed Jan 14 13:21:26.014059 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:21:25.999073 ignition[890]: Ignition finished successfully Jan 14 13:21:26.019701 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:21:26.028810 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:21:26.030997 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:21:26.040899 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:21:26.100342 systemd-fsck[898]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:21:26.105849 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 14 13:21:26.119847 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 13:21:26.220069 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 14 13:21:26.220655 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 13:21:26.223704 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:21:26.264901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:21:26.269440 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:21:26.280771 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (909) Jan 14 13:21:26.284119 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 13:21:26.292493 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:26.292515 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:21:26.292530 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:21:26.294472 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:21:26.294514 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:21:26.305552 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 13:21:26.315933 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:21:26.319909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 13:21:26.326325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:21:26.352886 systemd-networkd[870]: enP4106s1: Gained IPv6LL Jan 14 13:21:26.800912 systemd-networkd[870]: eth0: Gained IPv6LL Jan 14 13:21:26.997882 coreos-metadata[911]: Jan 14 13:21:26.997 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:21:27.004060 coreos-metadata[911]: Jan 14 13:21:27.004 INFO Fetch successful Jan 14 13:21:27.006760 coreos-metadata[911]: Jan 14 13:21:27.005 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:21:27.013896 coreos-metadata[911]: Jan 14 13:21:27.013 INFO Fetch successful Jan 14 13:21:27.021777 coreos-metadata[911]: Jan 14 13:21:27.021 INFO wrote hostname ci-4152.2.0-a-d0a677fe50 to /sysroot/etc/hostname Jan 14 13:21:27.026011 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 13:21:27.026153 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:21:27.060042 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory Jan 14 13:21:27.065782 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 13:21:27.071203 initrd-setup-root[960]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 13:21:28.082847 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:21:28.092824 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:21:28.099916 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:21:28.106880 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:28.108045 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 13:21:28.134825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 14 13:21:28.142241 ignition[1033]: INFO : Ignition 2.20.0 Jan 14 13:21:28.142241 ignition[1033]: INFO : Stage: mount Jan 14 13:21:28.145967 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:28.145967 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:28.145967 ignition[1033]: INFO : mount: mount passed Jan 14 13:21:28.145967 ignition[1033]: INFO : Ignition finished successfully Jan 14 13:21:28.145998 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:21:28.165881 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:21:28.173479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:21:28.189757 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1046) Jan 14 13:21:28.195827 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:28.195878 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:21:28.198242 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:21:28.204760 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:21:28.205292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:21:28.226464 ignition[1063]: INFO : Ignition 2.20.0 Jan 14 13:21:28.226464 ignition[1063]: INFO : Stage: files Jan 14 13:21:28.231031 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:28.231031 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:28.231031 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:21:28.263306 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:21:28.271344 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:21:28.355418 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:21:28.362030 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:21:28.362030 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:21:28.355969 unknown[1063]: wrote ssh authorized keys file for user: core Jan 14 13:21:28.375465 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:21:28.379711 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 13:21:28.397470 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:21:28.402139 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:21:28.402139 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:21:28.402139 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:21:28.402139 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file 
"/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:21:28.402139 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 14 13:21:28.927192 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 14 13:21:30.014190 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:21:30.014190 ignition[1063]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:21:30.014190 ignition[1063]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:21:30.014190 ignition[1063]: INFO : files: files passed Jan 14 13:21:30.014190 ignition[1063]: INFO : Ignition finished successfully Jan 14 13:21:30.020391 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:21:30.036987 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:21:30.040872 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 13:21:30.054361 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 13:21:30.055681 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 13:21:30.064525 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:21:30.068285 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:21:30.072023 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:21:30.077186 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:21:30.083496 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:21:30.092947 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:21:30.119004 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:21:30.119121 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:21:30.124315 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:21:30.129223 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 13:21:30.131553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:21:30.144954 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:21:30.158294 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:21:30.168897 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:21:30.180349 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:21:30.183056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:21:30.188103 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:21:30.194233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 14 13:21:30.196263 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:21:30.201637 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:21:30.206033 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:21:30.213388 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:21:30.217915 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:21:30.220463 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:21:30.225267 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:21:30.229958 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:21:30.237445 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:21:30.242053 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:21:30.246306 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:21:30.247143 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 13:21:30.247269 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:21:30.248246 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:21:30.248633 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:21:30.248949 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 13:21:30.255373 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:21:30.259650 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:21:30.259822 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:21:30.264451 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:21:30.264600 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:21:30.268936 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:21:30.269055 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:21:30.273726 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 13:21:30.273877 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:21:30.290820 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:21:30.302004 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:21:30.319665 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:21:30.327218 ignition[1116]: INFO : Ignition 2.20.0 Jan 14 13:21:30.327218 ignition[1116]: INFO : Stage: umount Jan 14 13:21:30.327218 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:30.327218 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:30.327218 ignition[1116]: INFO : umount: umount passed Jan 14 13:21:30.327218 ignition[1116]: INFO : Ignition finished successfully Jan 14 13:21:30.319894 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:21:30.323986 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 13:21:30.324123 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:21:30.331022 systemd[1]: ignition-mount.service: Deactivated successfully. 
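The files stage logged above added SSH keys for the "core" user, wrote /home/core/install.sh and /etc/flatcar/update.conf, and set up a kubernetes sysext (a link under /etc/extensions pointing at an image downloaded from the sysext-bakery release URL). A hypothetical Ignition config that would produce those operations might look like the sketch below; the paths, link target and download URL come from the log, while the schema version, file modes, key material and file contents are invented placeholders.

    import json

    # Illustrative reconstruction only; values marked "placeholder" are not
    # taken from the log.
    config = {
        "ignition": {"version": "3.3.0"},  # placeholder schema version
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"],  # placeholder
        }]},
        "storage": {
            "files": [
                {"path": "/home/core/install.sh", "mode": 493,  # 0755, placeholder
                 "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"}},  # placeholder
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,GROUP%3Dstable%0A"}},  # placeholder
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/"
                                        "releases/download/latest/"
                                        "kubernetes-v1.30.1-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
            ],
        },
    }

    print(json.dumps(config, indent=2))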
Jan 14 13:21:30.331113 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:21:30.338553 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:21:30.338816 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:21:30.346001 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:21:30.346048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:21:30.352761 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:21:30.352816 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:21:30.372063 systemd[1]: Stopped target network.target - Network. Jan 14 13:21:30.373867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:21:30.373919 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:21:30.378139 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:21:30.380455 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 13:21:30.382771 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:21:30.388187 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:21:30.390215 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:21:30.394140 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:21:30.394192 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:21:30.398826 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:21:30.398874 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:21:30.404298 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:21:30.404357 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:21:30.408445 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:21:30.408501 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:21:30.413239 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:21:30.417171 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:21:30.420691 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:21:30.421372 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:21:30.421756 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:21:30.423836 systemd-networkd[870]: eth0: DHCPv6 lease lost Jan 14 13:21:30.431130 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:21:30.431245 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:21:30.436292 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:21:30.436385 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:21:30.441521 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:21:30.441590 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:21:30.470947 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:21:30.479506 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:21:30.479590 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:21:30.482305 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 14 13:21:30.482355 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:21:30.487065 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:21:30.487122 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:21:30.492483 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:21:30.492540 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:21:30.506343 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:21:30.531336 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:21:30.532201 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:21:30.544640 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:21:30.544688 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:21:30.549495 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:21:30.549535 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:21:30.553944 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:21:30.554003 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:21:30.559018 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:21:30.578369 kernel: hv_netvsc 000d3ad5-b5dd-000d-3ad5-b5dd000d3ad5 eth0: Data path switched from VF: enP4106s1 Jan 14 13:21:30.559063 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:21:30.562908 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:21:30.562963 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:21:30.586077 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:21:30.588450 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:21:30.588504 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:21:30.596027 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 13:21:30.596082 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:21:30.601640 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:21:30.601695 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:21:30.606606 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:21:30.606658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:30.614830 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:21:30.614934 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:21:30.619850 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:21:30.619932 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:21:30.717123 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:21:30.717273 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:21:30.722522 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:21:30.726323 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jan 14 13:21:30.726400 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:21:30.741914 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:21:30.749777 systemd[1]: Switching root. Jan 14 13:21:30.857344 systemd-journald[177]: Journal stopped
Jan 14 13:21:24.710358 systemd-networkd[870]: lo: Link UP Jan 14 13:21:24.710369 systemd-networkd[870]: lo: Gained carrier Jan 14 13:21:24.712436 systemd-networkd[870]: Enumeration completed Jan 14 13:21:24.712689 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:21:24.715086 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:21:24.715090 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:21:24.720889 systemd[1]: Reached target network.target - Network. Jan 14 13:21:24.777762 kernel: mlx5_core 100a:00:02.0 enP4106s1: Link up Jan 14 13:21:24.808081 kernel: hv_netvsc 000d3ad5-b5dd-000d-3ad5-b5dd000d3ad5 eth0: Data path switched to VF: enP4106s1 Jan 14 13:21:24.808274 systemd-networkd[870]: enP4106s1: Link UP Jan 14 13:21:24.809666 systemd-networkd[870]: eth0: Link UP Jan 14 13:21:24.809882 systemd-networkd[870]: eth0: Gained carrier Jan 14 13:21:24.809895 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:21:24.818922 systemd-networkd[870]: enP4106s1: Gained carrier Jan 14 13:21:24.843789 systemd-networkd[870]: eth0: DHCPv4 address 10.200.4.36/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:21:25.790169 ignition[842]: Ignition 2.20.0 Jan 14 13:21:25.790182 ignition[842]: Stage: fetch-offline Jan 14 13:21:25.790225 ignition[842]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:25.790234 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:25.796990 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:21:25.790334 ignition[842]: parsed url from cmdline: "" Jan 14 13:21:25.790338 ignition[842]: no config URL provided Jan 14 13:21:25.790345 ignition[842]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:21:25.790355 ignition[842]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:21:25.790361 ignition[842]: failed to fetch config: resource requires networking Jan 14 13:21:25.790568 ignition[842]: Ignition finished successfully Jan 14 13:21:25.818894 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:21:25.832321 ignition[878]: Ignition 2.20.0 Jan 14 13:21:25.832331 ignition[878]: Stage: fetch Jan 14 13:21:25.832527 ignition[878]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:25.832538 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:25.832636 ignition[878]: parsed url from cmdline: "" Jan 14 13:21:25.832638 ignition[878]: no config URL provided Jan 14 13:21:25.832643 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:21:25.832649 ignition[878]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:21:25.832674 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:21:25.918440 ignition[878]: GET result: OK Jan 14 13:21:25.918534 ignition[878]: config has been read from IMDS userdata Jan 14 13:21:25.918555 ignition[878]: parsing config with SHA512: 3b40ea0a5b7b5b06f0d986b7ca480582c206085b64a6ae07ae388390398da7850ec05e6876a0b2e2214be8e67ea2da2fda88ac5ecd213cffca9a0dae9d9572df Jan 14 13:21:25.925226 unknown[878]: fetched base config from "system" Jan 14 13:21:25.925239 unknown[878]: fetched base config from "system" Jan 14 13:21:25.925546 ignition[878]: fetch: fetch complete Jan 14 13:21:25.925248 unknown[878]: fetched user config from "azure" Jan 14 13:21:25.925552 ignition[878]: fetch: fetch passed Jan 14 13:21:25.925597 ignition[878]: Ignition finished successfully Jan 14 13:21:25.938602 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:21:25.949905 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 13:21:25.966047 ignition[884]: Ignition 2.20.0 Jan 14 13:21:25.966058 ignition[884]: Stage: kargs Jan 14 13:21:25.968495 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 13:21:25.966271 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:25.966284 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:25.975966 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:21:25.966964 ignition[884]: kargs: kargs passed Jan 14 13:21:25.967008 ignition[884]: Ignition finished successfully Jan 14 13:21:25.998134 ignition[890]: Ignition 2.20.0 Jan 14 13:21:25.998145 ignition[890]: Stage: disks Jan 14 13:21:25.998357 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:26.001542 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:21:25.998370 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:26.004480 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:21:25.999029 ignition[890]: disks: disks passed Jan 14 13:21:26.014059 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:21:25.999073 ignition[890]: Ignition finished successfully Jan 14 13:21:26.019701 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:21:26.028810 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:21:26.030997 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:21:26.040899 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:21:26.100342 systemd-fsck[898]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:21:26.105849 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 14 13:21:26.119847 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 13:21:26.220069 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 14 13:21:26.220655 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 13:21:26.223704 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:21:26.264901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:21:26.269440 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:21:26.280771 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (909) Jan 14 13:21:26.284119 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 13:21:26.292493 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:26.292515 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:21:26.292530 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:21:26.294472 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:21:26.294514 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:21:26.305552 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 13:21:26.315933 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:21:26.319909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 13:21:26.326325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:21:26.352886 systemd-networkd[870]: enP4106s1: Gained IPv6LL Jan 14 13:21:26.800912 systemd-networkd[870]: eth0: Gained IPv6LL Jan 14 13:21:26.997882 coreos-metadata[911]: Jan 14 13:21:26.997 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:21:27.004060 coreos-metadata[911]: Jan 14 13:21:27.004 INFO Fetch successful Jan 14 13:21:27.006760 coreos-metadata[911]: Jan 14 13:21:27.005 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:21:27.013896 coreos-metadata[911]: Jan 14 13:21:27.013 INFO Fetch successful Jan 14 13:21:27.021777 coreos-metadata[911]: Jan 14 13:21:27.021 INFO wrote hostname ci-4152.2.0-a-d0a677fe50 to /sysroot/etc/hostname Jan 14 13:21:27.026011 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 13:21:27.026153 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:21:27.060042 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory Jan 14 13:21:27.065782 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 13:21:27.071203 initrd-setup-root[960]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 13:21:28.082847 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:21:28.092824 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:21:28.099916 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:21:28.106880 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:28.108045 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 13:21:28.134825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 14 13:21:28.142241 ignition[1033]: INFO : Ignition 2.20.0 Jan 14 13:21:28.142241 ignition[1033]: INFO : Stage: mount Jan 14 13:21:28.145967 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:28.145967 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:28.145967 ignition[1033]: INFO : mount: mount passed Jan 14 13:21:28.145967 ignition[1033]: INFO : Ignition finished successfully Jan 14 13:21:28.145998 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:21:28.165881 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:21:28.173479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:21:28.189757 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1046) Jan 14 13:21:28.195827 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:28.195878 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:21:28.198242 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:21:28.204760 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:21:28.205292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:21:28.226464 ignition[1063]: INFO : Ignition 2.20.0 Jan 14 13:21:28.226464 ignition[1063]: INFO : Stage: files Jan 14 13:21:28.231031 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:28.231031 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:28.231031 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:21:28.263306 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:21:28.271344 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:21:28.355418 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:21:28.362030 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:21:28.362030 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:21:28.355969 unknown[1063]: wrote ssh authorized keys file for user: core Jan 14 13:21:28.375465 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:21:28.379711 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 13:21:28.397470 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:21:28.402139 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:21:28.402139 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:21:28.402139 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:21:28.402139 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file 
"/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:21:28.402139 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 14 13:21:28.927192 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 14 13:21:30.014190 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:21:30.014190 ignition[1063]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:21:30.014190 ignition[1063]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:21:30.014190 ignition[1063]: INFO : files: files passed Jan 14 13:21:30.014190 ignition[1063]: INFO : Ignition finished successfully Jan 14 13:21:30.020391 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:21:30.036987 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:21:30.040872 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 13:21:30.054361 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 13:21:30.055681 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 13:21:30.064525 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:21:30.068285 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:21:30.072023 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:21:30.077186 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:21:30.083496 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:21:30.092947 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:21:30.119004 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:21:30.119121 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:21:30.124315 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:21:30.129223 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 13:21:30.131553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:21:30.144954 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:21:30.158294 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:21:30.168897 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:21:30.180349 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:21:30.183056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:21:30.188103 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:21:30.194233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 14 13:21:30.196263 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:21:30.201637 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:21:30.206033 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:21:30.213388 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:21:30.217915 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:21:30.220463 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:21:30.225267 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:21:30.229958 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:21:30.237445 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:21:30.242053 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:21:30.246306 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:21:30.247143 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 13:21:30.247269 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:21:30.248246 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:21:30.248633 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:21:30.248949 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 13:21:30.255373 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:21:30.259650 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:21:30.259822 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:21:30.264451 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:21:30.264600 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:21:30.268936 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:21:30.269055 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:21:30.273726 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 13:21:30.273877 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:21:30.290820 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:21:30.302004 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:21:30.319665 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:21:30.327218 ignition[1116]: INFO : Ignition 2.20.0 Jan 14 13:21:30.327218 ignition[1116]: INFO : Stage: umount Jan 14 13:21:30.327218 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:30.327218 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:30.327218 ignition[1116]: INFO : umount: umount passed Jan 14 13:21:30.327218 ignition[1116]: INFO : Ignition finished successfully Jan 14 13:21:30.319894 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:21:30.323986 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 13:21:30.324123 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:21:30.331022 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 14 13:21:30.331113 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:21:30.338553 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:21:30.338816 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:21:30.346001 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:21:30.346048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:21:30.352761 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:21:30.352816 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:21:30.372063 systemd[1]: Stopped target network.target - Network. Jan 14 13:21:30.373867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:21:30.373919 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:21:30.378139 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:21:30.380455 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 13:21:30.382771 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:21:30.388187 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:21:30.390215 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:21:30.394140 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:21:30.394192 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:21:30.398826 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:21:30.398874 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:21:30.404298 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:21:30.404357 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:21:30.408445 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:21:30.408501 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:21:30.413239 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:21:30.417171 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:21:30.420691 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:21:30.421372 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:21:30.421756 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:21:30.423836 systemd-networkd[870]: eth0: DHCPv6 lease lost Jan 14 13:21:30.431130 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:21:30.431245 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:21:30.436292 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:21:30.436385 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:21:30.441521 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:21:30.441590 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:21:30.470947 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:21:30.479506 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:21:30.479590 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:21:30.482305 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 14 13:21:30.482355 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:21:30.487065 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:21:30.487122 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:21:30.492483 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:21:30.492540 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:21:30.506343 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:21:30.531336 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:21:30.532201 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:21:30.544640 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:21:30.544688 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:21:30.549495 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:21:30.549535 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:21:30.553944 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:21:30.554003 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:21:30.559018 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:21:30.578369 kernel: hv_netvsc 000d3ad5-b5dd-000d-3ad5-b5dd000d3ad5 eth0: Data path switched from VF: enP4106s1 Jan 14 13:21:30.559063 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:21:30.562908 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:21:30.562963 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:21:30.586077 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:21:30.588450 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:21:30.588504 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:21:30.596027 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 13:21:30.596082 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:21:30.601640 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:21:30.601695 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:21:30.606606 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:21:30.606658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:30.614830 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:21:30.614934 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:21:30.619850 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:21:30.619932 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:21:30.717123 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:21:30.717273 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:21:30.722522 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:21:30.726323 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jan 14 13:21:30.726400 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:21:30.741914 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:21:30.749777 systemd[1]: Switching root. Jan 14 13:21:30.857344 systemd-journald[177]: Journal stopped Jan 14 13:21:36.154307 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Jan 14 13:21:36.154340 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 13:21:36.154353 kernel: SELinux: policy capability open_perms=1 Jan 14 13:21:36.154362 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 13:21:36.154371 kernel: SELinux: policy capability always_check_network=0 Jan 14 13:21:36.154381 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 13:21:36.154391 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 13:21:36.154405 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 13:21:36.154416 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 13:21:36.154425 kernel: audit: type=1403 audit(1736860892.504:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 14 13:21:36.154437 systemd[1]: Successfully loaded SELinux policy in 239.482ms. Jan 14 13:21:36.154449 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.872ms. Jan 14 13:21:36.154460 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:21:36.154473 systemd[1]: Detected virtualization microsoft. Jan 14 13:21:36.154486 systemd[1]: Detected architecture x86-64. Jan 14 13:21:36.154498 systemd[1]: Detected first boot. Jan 14 13:21:36.154510 systemd[1]: Hostname set to <ci-4152.2.0-a-d0a677fe50>. Jan 14 13:21:36.154521 systemd[1]: Initializing machine ID from random generator. Jan 14 13:21:36.154533 zram_generator::config[1160]: No configuration found. Jan 14 13:21:36.154547 systemd[1]: Populated /etc with preset unit settings. Jan 14 13:21:36.154559 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 13:21:36.154570 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 13:21:36.154581 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 13:21:36.154594 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 13:21:36.154604 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 13:21:36.154617 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 13:21:36.154633 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 13:21:36.154644 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 13:21:36.154657 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 13:21:36.154667 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 13:21:36.154679 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 13:21:36.154691 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:21:36.154702 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 14 13:21:36.154715 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 13:21:36.154728 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 13:21:36.154754 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 13:21:36.154765 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:21:36.154779 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 13:21:36.154791 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:21:36.154802 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 13:21:36.154820 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 13:21:36.154833 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 13:21:36.154848 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 13:21:36.154859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:21:36.154872 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:21:36.154883 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:21:36.154895 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:21:36.154908 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 13:21:36.154918 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 13:21:36.154933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:21:36.154946 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:21:36.154957 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:21:36.154970 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 13:21:36.154981 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 13:21:36.154996 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 13:21:36.155009 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 13:21:36.155020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:21:36.155034 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 13:21:36.155045 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 13:21:36.155057 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 13:21:36.155071 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 13:21:36.155082 systemd[1]: Reached target machines.target - Containers. Jan 14 13:21:36.155097 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 13:21:36.155111 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:21:36.155124 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:21:36.155135 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jan 14 13:21:36.155150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:21:36.155160 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:21:36.155173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:21:36.155184 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 13:21:36.155197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:21:36.155213 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 13:21:36.155223 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 13:21:36.155236 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 13:21:36.155248 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 13:21:36.155260 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 13:21:36.155273 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:21:36.155285 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:21:36.155298 kernel: fuse: init (API version 7.39) Jan 14 13:21:36.155311 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 13:21:36.155323 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 13:21:36.155336 kernel: loop: module loaded Jan 14 13:21:36.155347 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:21:36.155360 systemd[1]: verity-setup.service: Deactivated successfully. Jan 14 13:21:36.155376 systemd[1]: Stopped verity-setup.service. Jan 14 13:21:36.155393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:21:36.155418 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 13:21:36.155444 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 13:21:36.155473 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 13:21:36.155494 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 13:21:36.155544 systemd-journald[1249]: Collecting audit messages is disabled. Jan 14 13:21:36.155584 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 13:21:36.155609 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 13:21:36.155627 systemd-journald[1249]: Journal started Jan 14 13:21:36.155665 systemd-journald[1249]: Runtime Journal (/run/log/journal/cce7b0fc73b441fa999cf8f191b41bb6) is 8.0M, max 158.8M, 150.8M free. Jan 14 13:21:35.489200 systemd[1]: Queued start job for default target multi-user.target. Jan 14 13:21:35.597092 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 13:21:35.597470 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 13:21:36.166966 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:21:36.168683 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 13:21:36.171651 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:21:36.174803 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 14 13:21:36.174978 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 13:21:36.178419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:21:36.179963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:21:36.187202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:21:36.187407 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:21:36.190659 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 13:21:36.191083 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 13:21:36.195487 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:21:36.195678 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:21:36.199795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:21:36.202861 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 13:21:36.206811 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 13:21:36.239928 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 13:21:36.258950 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 13:21:36.267793 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 13:21:36.275374 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 13:21:36.276589 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:21:36.281193 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 14 13:21:36.287859 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 13:21:36.295540 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 13:21:36.296154 kernel: ACPI: bus type drm_connector registered Jan 14 13:21:36.297810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:21:36.301209 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 13:21:36.304766 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 13:21:36.307352 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:21:36.310962 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 13:21:36.316052 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:21:36.318870 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:21:36.324700 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 13:21:36.333602 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:21:36.346223 systemd-journald[1249]: Time spent on flushing to /var/log/journal/cce7b0fc73b441fa999cf8f191b41bb6 is 27.459ms for 939 entries. 
Jan 14 13:21:36.346223 systemd-journald[1249]: System Journal (/var/log/journal/cce7b0fc73b441fa999cf8f191b41bb6) is 8.0M, max 2.6G, 2.6G free. Jan 14 13:21:36.393621 systemd-journald[1249]: Received client request to flush runtime journal. Jan 14 13:21:36.340350 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:21:36.340555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:21:36.343462 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:21:36.349668 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 13:21:36.356978 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 13:21:36.360150 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 13:21:36.363345 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 13:21:36.371703 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 13:21:36.391989 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 14 13:21:36.404194 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 14 13:21:36.408037 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 13:21:36.418837 kernel: loop0: detected capacity change from 0 to 28272 Jan 14 13:21:36.421583 udevadm[1307]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 14 13:21:36.454379 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 14 13:21:36.454805 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 14 13:21:36.462314 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:21:36.471899 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 13:21:36.475074 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 13:21:36.476085 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 14 13:21:36.540197 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:21:36.627146 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 13:21:36.640931 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:21:36.660156 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Jan 14 13:21:36.660184 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Jan 14 13:21:36.665096 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:21:36.759761 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 13:21:36.806761 kernel: loop1: detected capacity change from 0 to 210664 Jan 14 13:21:36.864766 kernel: loop2: detected capacity change from 0 to 140992 Jan 14 13:21:37.294776 kernel: loop3: detected capacity change from 0 to 138184 Jan 14 13:21:37.767494 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 13:21:37.774960 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:21:37.812286 systemd-udevd[1324]: Using default interface naming scheme 'v255'. 
Jan 14 13:21:37.905762 kernel: loop4: detected capacity change from 0 to 28272 Jan 14 13:21:37.917779 kernel: loop5: detected capacity change from 0 to 210664 Jan 14 13:21:37.932842 kernel: loop6: detected capacity change from 0 to 140992 Jan 14 13:21:37.950807 kernel: loop7: detected capacity change from 0 to 138184 Jan 14 13:21:37.969115 (sd-merge)[1326]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 14 13:21:37.969690 (sd-merge)[1326]: Merged extensions into '/usr'. Jan 14 13:21:37.973568 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 13:21:37.973583 systemd[1]: Reloading... Jan 14 13:21:38.051756 zram_generator::config[1351]: No configuration found. Jan 14 13:21:38.196297 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:21:38.254927 systemd[1]: Reloading finished in 280 ms. Jan 14 13:21:38.288211 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 13:21:38.296903 systemd[1]: Starting ensure-sysext.service... Jan 14 13:21:38.305940 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:21:38.308753 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:21:38.316920 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:21:38.379246 systemd[1]: Reloading requested from client PID 1410 ('systemctl') (unit ensure-sysext.service)... Jan 14 13:21:38.379265 systemd[1]: Reloading... Jan 14 13:21:38.395807 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 13:21:38.396331 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 14 13:21:38.404537 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 14 13:21:38.404984 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. Jan 14 13:21:38.405070 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. Jan 14 13:21:38.437526 systemd-tmpfiles[1411]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:21:38.437542 systemd-tmpfiles[1411]: Skipping /boot Jan 14 13:21:38.477809 systemd-tmpfiles[1411]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:21:38.477826 systemd-tmpfiles[1411]: Skipping /boot Jan 14 13:21:38.502799 zram_generator::config[1461]: No configuration found. 
Jan 14 13:21:38.587088 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 13:21:38.642515 kernel: hv_vmbus: registering driver hv_balloon Jan 14 13:21:38.642622 kernel: hv_vmbus: registering driver hyperv_fb Jan 14 13:21:38.646760 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 14 13:21:38.657546 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 14 13:21:38.657643 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 14 13:21:38.676252 kernel: Console: switching to colour dummy device 80x25 Jan 14 13:21:38.680682 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:21:38.855319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:21:38.912799 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1415) Jan 14 13:21:39.067069 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 13:21:39.069567 systemd[1]: Reloading finished in 689 ms. Jan 14 13:21:39.099199 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:21:39.175114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:21:39.210905 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:21:39.216191 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 14 13:21:39.218374 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:21:39.242116 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 13:21:39.245333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:21:39.247512 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:21:39.257917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:21:39.261765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:21:39.264426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:21:39.273219 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 13:21:39.280984 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 13:21:39.292131 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:21:39.309140 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 13:21:39.313495 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 13:21:39.322995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:21:39.328924 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:21:39.338453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:21:39.338651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:21:39.342569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 14 13:21:39.343796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:21:39.348206 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:21:39.348977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:21:39.369306 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 13:21:39.387656 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 14 13:21:39.393300 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 13:21:39.398953 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 13:21:39.405557 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 13:21:39.414078 systemd[1]: Finished ensure-sysext.service. Jan 14 13:21:39.419992 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:21:39.420396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:21:39.425948 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 14 13:21:39.431611 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:21:39.443936 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:21:39.451881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:21:39.461902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:21:39.466577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:21:39.466669 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 13:21:39.469353 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:21:39.470029 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:21:39.470235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:21:39.475572 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:21:39.476308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:21:39.487119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:21:39.487331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:21:39.490206 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:21:39.503143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:21:39.503333 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:21:39.506787 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:21:39.535782 augenrules[1654]: No rules Jan 14 13:21:39.537472 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:21:39.537710 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:21:39.562758 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 14 13:21:39.588120 systemd-resolved[1606]: Positive Trust Anchors: Jan 14 13:21:39.588138 systemd-resolved[1606]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:21:39.588181 systemd-resolved[1606]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:21:39.597008 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 14 13:21:39.598527 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:21:39.606111 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 14 13:21:39.612348 systemd-networkd[1416]: lo: Link UP Jan 14 13:21:39.612357 systemd-networkd[1416]: lo: Gained carrier Jan 14 13:21:39.614848 lvm[1662]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 14 13:21:39.616566 systemd-networkd[1416]: Enumeration completed Jan 14 13:21:39.616680 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:21:39.619242 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:21:39.619257 systemd-networkd[1416]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:21:39.624921 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 13:21:39.632990 systemd-resolved[1606]: Using system hostname 'ci-4152.2.0-a-d0a677fe50'. Jan 14 13:21:39.648646 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 14 13:21:39.676746 kernel: mlx5_core 100a:00:02.0 enP4106s1: Link up Jan 14 13:21:39.696753 kernel: hv_netvsc 000d3ad5-b5dd-000d-3ad5-b5dd000d3ad5 eth0: Data path switched to VF: enP4106s1 Jan 14 13:21:39.698931 systemd-networkd[1416]: enP4106s1: Link UP Jan 14 13:21:39.699097 systemd-networkd[1416]: eth0: Link UP Jan 14 13:21:39.699102 systemd-networkd[1416]: eth0: Gained carrier Jan 14 13:21:39.699129 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:21:39.699617 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:21:39.700569 systemd[1]: Reached target network.target - Network. Jan 14 13:21:39.700825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:21:39.707117 systemd-networkd[1416]: enP4106s1: Gained carrier Jan 14 13:21:39.728966 systemd-networkd[1416]: eth0: DHCPv4 address 10.200.4.36/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:21:39.974644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:40.327655 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
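The DHCPv4 lease above (10.200.4.36/24, gateway 10.200.4.1, served by the Azure wireserver 168.63.129.16) can be sanity-checked with the standard-library ipaddress module; this is only an illustration of the lease, not anything the boot path actually runs:

    import ipaddress

    iface = ipaddress.ip_interface("10.200.4.36/24")            # address from the lease above
    print(iface.network)                                        # 10.200.4.0/24
    print(iface.network.broadcast_address)                      # 10.200.4.255, as waagent later shows for eth0
    print(ipaddress.ip_address("10.200.4.1") in iface.network)  # True: the advertised gateway is on-link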
Jan 14 13:21:40.331219 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 13:21:40.944997 systemd-networkd[1416]: enP4106s1: Gained IPv6LL Jan 14 13:21:41.008911 systemd-networkd[1416]: eth0: Gained IPv6LL Jan 14 13:21:41.012100 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 13:21:41.015999 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 13:21:42.752039 ldconfig[1290]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 13:21:42.769568 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 13:21:42.777983 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 13:21:42.804640 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 13:21:42.807975 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:21:42.811159 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 13:21:42.817809 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 13:21:42.820920 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 13:21:42.823342 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 13:21:42.826143 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 13:21:42.828930 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 13:21:42.828966 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:21:42.831068 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:21:42.833696 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 13:21:42.837451 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 13:21:42.848108 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 13:21:42.851034 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 13:21:42.853655 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:21:42.855836 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:21:42.858023 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:21:42.858056 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:21:42.881848 systemd[1]: Starting chronyd.service - NTP client/server... Jan 14 13:21:42.885906 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 13:21:42.894906 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 13:21:42.903928 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 13:21:42.908098 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 13:21:42.912918 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
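The "Listening on docker.socket/sshd.socket" lines above are systemd socket activation: the service itself starts only when the first client connects. A minimal sketch, assuming the documented sd_listen_fds(3) convention (inherited descriptors start at fd 3 and are described by $LISTEN_FDS/$LISTEN_PID); the helper name is ours:

    import os
    import socket

    SD_LISTEN_FDS_START = 3  # first file descriptor passed by systemd socket activation

    def activated_sockets() -> list[socket.socket]:
        # Return sockets inherited from systemd, or [] when not socket-activated.
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

    for sock in activated_sockets():
        print("inherited", sock.getsockname())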
Jan 14 13:21:42.915263 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 13:21:42.915316 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 14 13:21:42.916597 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 14 13:21:42.919173 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 14 13:21:42.926812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:21:42.938937 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 13:21:42.943019 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 13:21:42.952593 KVP[1682]: KVP starting; pid is:1682 Jan 14 13:21:42.954962 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 13:21:42.958537 jq[1679]: false Jan 14 13:21:42.966388 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 13:21:42.975500 KVP[1682]: KVP LIC Version: 3.1 Jan 14 13:21:42.975757 kernel: hv_utils: KVP IC version 4.0 Jan 14 13:21:42.975801 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 13:21:42.979721 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 13:21:42.980324 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 13:21:42.982687 (chronyd)[1675]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 14 13:21:42.982759 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 13:21:42.988238 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 13:21:43.001248 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 13:21:43.001493 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 13:21:43.002506 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 13:21:43.002713 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 14 13:21:43.017325 chronyd[1699]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 14 13:21:43.034990 extend-filesystems[1680]: Found loop4 Jan 14 13:21:43.037897 extend-filesystems[1680]: Found loop5 Jan 14 13:21:43.037897 extend-filesystems[1680]: Found loop6 Jan 14 13:21:43.037897 extend-filesystems[1680]: Found loop7 Jan 14 13:21:43.037897 extend-filesystems[1680]: Found sda Jan 14 13:21:43.037897 extend-filesystems[1680]: Found sda1 Jan 14 13:21:43.037897 extend-filesystems[1680]: Found sda2 Jan 14 13:21:43.037897 extend-filesystems[1680]: Found sda3 Jan 14 13:21:43.037897 extend-filesystems[1680]: Found usr Jan 14 13:21:43.037897 extend-filesystems[1680]: Found sda4 Jan 14 13:21:43.037897 extend-filesystems[1680]: Found sda6 Jan 14 13:21:43.037897 extend-filesystems[1680]: Found sda7 Jan 14 13:21:43.037897 extend-filesystems[1680]: Found sda9 Jan 14 13:21:43.037897 extend-filesystems[1680]: Checking size of /dev/sda9 Jan 14 13:21:43.079314 jq[1693]: true Jan 14 13:21:43.076051 (ntainerd)[1707]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 14 13:21:43.090143 chronyd[1699]: Timezone right/UTC failed leap second check, ignoring Jan 14 13:21:43.092108 systemd[1]: Started chronyd.service - NTP client/server. Jan 14 13:21:43.090413 chronyd[1699]: Loaded seccomp filter (level 2) Jan 14 13:21:43.123970 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 13:21:43.124789 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 13:21:43.131779 jq[1712]: true Jan 14 13:21:43.135363 extend-filesystems[1680]: Old size kept for /dev/sda9 Jan 14 13:21:43.135363 extend-filesystems[1680]: Found sr0 Jan 14 13:21:43.143358 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 13:21:43.144196 dbus-daemon[1678]: [system] SELinux support is enabled Jan 14 13:21:43.143811 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 13:21:43.152336 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 13:21:43.157990 update_engine[1692]: I20250114 13:21:43.157913 1692 main.cc:92] Flatcar Update Engine starting Jan 14 13:21:43.161418 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 13:21:43.161501 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 13:21:43.165970 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 13:21:43.166009 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 13:21:43.176184 update_engine[1692]: I20250114 13:21:43.176080 1692 update_check_scheduler.cc:74] Next update check in 10m3s Jan 14 13:21:43.186032 systemd[1]: Started update-engine.service - Update Engine. Jan 14 13:21:43.204888 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 13:21:43.218206 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 13:21:43.247143 systemd-logind[1690]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 13:21:43.251908 systemd-logind[1690]: New seat seat0. 
Jan 14 13:21:43.256024 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 13:21:43.300171 coreos-metadata[1677]: Jan 14 13:21:43.299 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:21:43.304759 coreos-metadata[1677]: Jan 14 13:21:43.303 INFO Fetch successful Jan 14 13:21:43.304759 coreos-metadata[1677]: Jan 14 13:21:43.303 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 14 13:21:43.309327 coreos-metadata[1677]: Jan 14 13:21:43.308 INFO Fetch successful Jan 14 13:21:43.309818 coreos-metadata[1677]: Jan 14 13:21:43.309 INFO Fetching http://168.63.129.16/machine/32974179-b944-425d-8cfe-2b2db745c4f4/6d6ef8ac%2Dc170%2D49b6%2D821b%2Def209539cf4a.%5Fci%2D4152.2.0%2Da%2Dd0a677fe50?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 14 13:21:43.314819 coreos-metadata[1677]: Jan 14 13:21:43.314 INFO Fetch successful Jan 14 13:21:43.316528 coreos-metadata[1677]: Jan 14 13:21:43.314 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:21:43.325853 coreos-metadata[1677]: Jan 14 13:21:43.325 INFO Fetch successful Jan 14 13:21:43.337642 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1731) Jan 14 13:21:43.348025 bash[1757]: Updated "/home/core/.ssh/authorized_keys" Jan 14 13:21:43.350881 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 13:21:43.362058 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 13:21:43.415206 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 13:21:43.431234 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 13:21:43.551984 locksmithd[1732]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 13:21:43.685728 sshd_keygen[1722]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 13:21:43.716644 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 13:21:43.729052 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 13:21:43.737970 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 14 13:21:43.752037 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 13:21:43.752257 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 13:21:43.765116 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 13:21:43.796934 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 14 13:21:43.801379 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 13:21:43.812134 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 13:21:43.821123 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 13:21:43.823797 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 13:21:44.220907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
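coreos-metadata above talks to two well-known Azure endpoints: the wireserver at 168.63.129.16 and the instance metadata service (IMDS) at 169.254.169.254. As an illustration only (it works solely from inside an Azure VM), the vmSize query from the log can be reproduced like this; IMDS requires the Metadata: true header:

    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(URL, headers={"Metadata": "true"})  # IMDS rejects requests without this header
    with urllib.request.urlopen(req, timeout=2) as resp:
        print(resp.read().decode())  # the VM size string for this instance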
Jan 14 13:21:44.333181 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:21:44.685298 containerd[1707]: time="2025-01-14T13:21:44.684392400Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 14 13:21:44.720722 containerd[1707]: time="2025-01-14T13:21:44.720664600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:21:44.722720 containerd[1707]: time="2025-01-14T13:21:44.722514500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:21:44.722720 containerd[1707]: time="2025-01-14T13:21:44.722554600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 14 13:21:44.722720 containerd[1707]: time="2025-01-14T13:21:44.722576500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 14 13:21:44.722907 containerd[1707]: time="2025-01-14T13:21:44.722760400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 14 13:21:44.722907 containerd[1707]: time="2025-01-14T13:21:44.722783300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 14 13:21:44.722907 containerd[1707]: time="2025-01-14T13:21:44.722858900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:21:44.722907 containerd[1707]: time="2025-01-14T13:21:44.722876500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:21:44.723759 containerd[1707]: time="2025-01-14T13:21:44.723087200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:21:44.723759 containerd[1707]: time="2025-01-14T13:21:44.723113000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 14 13:21:44.723759 containerd[1707]: time="2025-01-14T13:21:44.723132500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:21:44.723759 containerd[1707]: time="2025-01-14T13:21:44.723145700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 14 13:21:44.723759 containerd[1707]: time="2025-01-14T13:21:44.723252700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:21:44.723759 containerd[1707]: time="2025-01-14T13:21:44.723481800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 14 13:21:44.723759 containerd[1707]: time="2025-01-14T13:21:44.723623800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:21:44.723759 containerd[1707]: time="2025-01-14T13:21:44.723643200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 14 13:21:44.724048 containerd[1707]: time="2025-01-14T13:21:44.723772800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 14 13:21:44.724048 containerd[1707]: time="2025-01-14T13:21:44.723844400Z" level=info msg="metadata content store policy set" policy=shared Jan 14 13:21:44.741632 containerd[1707]: time="2025-01-14T13:21:44.741585700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 14 13:21:44.742014 containerd[1707]: time="2025-01-14T13:21:44.741760100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 14 13:21:44.742014 containerd[1707]: time="2025-01-14T13:21:44.741788500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 14 13:21:44.742014 containerd[1707]: time="2025-01-14T13:21:44.741811600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 14 13:21:44.742014 containerd[1707]: time="2025-01-14T13:21:44.741832800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 14 13:21:44.742169 containerd[1707]: time="2025-01-14T13:21:44.742085100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742370100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742493500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742516300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742536300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742556000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742575200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742594400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742614600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742633800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742651500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742667400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742683600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742710100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.743764 containerd[1707]: time="2025-01-14T13:21:44.742744100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742762000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742779800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742796800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742825700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742845100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742862900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742879800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742900700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742916300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742932000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742948900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742968100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.742998000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.743017000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744239 containerd[1707]: time="2025-01-14T13:21:44.743032000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 14 13:21:44.744754 containerd[1707]: time="2025-01-14T13:21:44.743082900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 14 13:21:44.744754 containerd[1707]: time="2025-01-14T13:21:44.743103100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 14 13:21:44.744754 containerd[1707]: time="2025-01-14T13:21:44.743117300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 14 13:21:44.744754 containerd[1707]: time="2025-01-14T13:21:44.743133400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 14 13:21:44.744754 containerd[1707]: time="2025-01-14T13:21:44.743146600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 14 13:21:44.744754 containerd[1707]: time="2025-01-14T13:21:44.743163400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 14 13:21:44.744754 containerd[1707]: time="2025-01-14T13:21:44.743176900Z" level=info msg="NRI interface is disabled by configuration." Jan 14 13:21:44.744754 containerd[1707]: time="2025-01-14T13:21:44.743190400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 14 13:21:44.745046 containerd[1707]: time="2025-01-14T13:21:44.743547100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 14 13:21:44.745046 containerd[1707]: time="2025-01-14T13:21:44.743612500Z" level=info msg="Connect containerd service" Jan 14 13:21:44.745046 containerd[1707]: time="2025-01-14T13:21:44.743666400Z" level=info msg="using legacy CRI server" Jan 14 13:21:44.745046 containerd[1707]: time="2025-01-14T13:21:44.743678000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 13:21:44.745046 containerd[1707]: time="2025-01-14T13:21:44.743875800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 14 13:21:44.752300 containerd[1707]: time="2025-01-14T13:21:44.746838600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:21:44.752300 
containerd[1707]: time="2025-01-14T13:21:44.747216700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 13:21:44.752300 containerd[1707]: time="2025-01-14T13:21:44.747268300Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 13:21:44.752300 containerd[1707]: time="2025-01-14T13:21:44.747302400Z" level=info msg="Start subscribing containerd event" Jan 14 13:21:44.752300 containerd[1707]: time="2025-01-14T13:21:44.747347600Z" level=info msg="Start recovering state" Jan 14 13:21:44.752300 containerd[1707]: time="2025-01-14T13:21:44.747420900Z" level=info msg="Start event monitor" Jan 14 13:21:44.752300 containerd[1707]: time="2025-01-14T13:21:44.747440100Z" level=info msg="Start snapshots syncer" Jan 14 13:21:44.752300 containerd[1707]: time="2025-01-14T13:21:44.747453000Z" level=info msg="Start cni network conf syncer for default" Jan 14 13:21:44.752300 containerd[1707]: time="2025-01-14T13:21:44.747462200Z" level=info msg="Start streaming server" Jan 14 13:21:44.752300 containerd[1707]: time="2025-01-14T13:21:44.748882100Z" level=info msg="containerd successfully booted in 0.067595s" Jan 14 13:21:44.747630 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 13:21:44.751250 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 13:21:44.754975 systemd[1]: Startup finished in 870ms (firmware) + 33.091s (loader) + 1.054s (kernel) + 11.898s (initrd) + 12.488s (userspace) = 59.403s. Jan 14 13:21:44.974200 kubelet[1848]: E0114 13:21:44.974041 1848 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:21:44.976592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:21:44.976823 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:21:45.208479 login[1839]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 14 13:21:45.209088 login[1838]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 14 13:21:45.218597 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 13:21:45.230001 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 13:21:45.233572 systemd-logind[1690]: New session 1 of user core. Jan 14 13:21:45.245526 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 13:21:45.252036 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 13:21:45.271286 (systemd)[1866]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 14 13:21:45.437551 systemd[1866]: Queued start job for default target default.target. Jan 14 13:21:45.445007 systemd[1866]: Created slice app.slice - User Application Slice. Jan 14 13:21:45.445050 systemd[1866]: Reached target paths.target - Paths. Jan 14 13:21:45.445069 systemd[1866]: Reached target timers.target - Timers. Jan 14 13:21:45.447611 systemd[1866]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 13:21:45.460747 systemd[1866]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 13:21:45.460899 systemd[1866]: Reached target sockets.target - Sockets. 
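Two things worth noting at this point: the startup summary adds up as reported, and the kubelet failure that follows is the usual chicken-and-egg on a freshly provisioned node, since /var/lib/kubelet/config.yaml is normally written later (typically by kubeadm), so systemd keeps restarting the unit until it exists. A quick check of the phase times from the summary above:

    phases = {"firmware": 0.870, "loader": 33.091, "kernel": 1.054,
              "initrd": 11.898, "userspace": 12.488}
    total = sum(phases.values())
    print(f"{total:.3f}s")  # 59.401s; systemd reports 59.403s, presumably from summing the unrounded per-phase times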
Jan 14 13:21:45.460933 systemd[1866]: Reached target basic.target - Basic System. Jan 14 13:21:45.460981 systemd[1866]: Reached target default.target - Main User Target. Jan 14 13:21:45.461017 systemd[1866]: Startup finished in 183ms. Jan 14 13:21:45.461445 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 13:21:45.464937 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 13:21:46.028301 waagent[1836]: 2025-01-14T13:21:46.028196Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 14 13:21:46.031359 waagent[1836]: 2025-01-14T13:21:46.031292Z INFO Daemon Daemon OS: flatcar 4152.2.0 Jan 14 13:21:46.033585 waagent[1836]: 2025-01-14T13:21:46.033531Z INFO Daemon Daemon Python: 3.11.10 Jan 14 13:21:46.035763 waagent[1836]: 2025-01-14T13:21:46.035629Z INFO Daemon Daemon Run daemon Jan 14 13:21:46.065479 waagent[1836]: 2025-01-14T13:21:46.036644Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.0' Jan 14 13:21:46.065479 waagent[1836]: 2025-01-14T13:21:46.037941Z INFO Daemon Daemon Using waagent for provisioning Jan 14 13:21:46.065479 waagent[1836]: 2025-01-14T13:21:46.038829Z INFO Daemon Daemon Activate resource disk Jan 14 13:21:46.065479 waagent[1836]: 2025-01-14T13:21:46.039452Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 14 13:21:46.065479 waagent[1836]: 2025-01-14T13:21:46.044978Z INFO Daemon Daemon Found device: None Jan 14 13:21:46.065479 waagent[1836]: 2025-01-14T13:21:46.045686Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 14 13:21:46.065479 waagent[1836]: 2025-01-14T13:21:46.046503Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 14 13:21:46.065479 waagent[1836]: 2025-01-14T13:21:46.047585Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 13:21:46.065479 waagent[1836]: 2025-01-14T13:21:46.048419Z INFO Daemon Daemon Running default provisioning handler Jan 14 13:21:46.068684 waagent[1836]: 2025-01-14T13:21:46.068596Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 14 13:21:46.075442 waagent[1836]: 2025-01-14T13:21:46.075377Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 14 13:21:46.080928 waagent[1836]: 2025-01-14T13:21:46.080867Z INFO Daemon Daemon cloud-init is enabled: False Jan 14 13:21:46.084900 waagent[1836]: 2025-01-14T13:21:46.082048Z INFO Daemon Daemon Copying ovf-env.xml Jan 14 13:21:46.190964 waagent[1836]: 2025-01-14T13:21:46.188988Z INFO Daemon Daemon Successfully mounted dvd Jan 14 13:21:46.203478 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 14 13:21:46.205042 waagent[1836]: 2025-01-14T13:21:46.204969Z INFO Daemon Daemon Detect protocol endpoint Jan 14 13:21:46.208008 waagent[1836]: 2025-01-14T13:21:46.207943Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 13:21:46.220131 waagent[1836]: 2025-01-14T13:21:46.209566Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 14 13:21:46.220131 waagent[1836]: 2025-01-14T13:21:46.210454Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 14 13:21:46.220131 waagent[1836]: 2025-01-14T13:21:46.211526Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 14 13:21:46.220131 waagent[1836]: 2025-01-14T13:21:46.212331Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 14 13:21:46.221997 login[1839]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 14 13:21:46.226279 systemd-logind[1690]: New session 2 of user core. Jan 14 13:21:46.232149 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 14 13:21:46.274111 waagent[1836]: 2025-01-14T13:21:46.274041Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 14 13:21:46.281117 waagent[1836]: 2025-01-14T13:21:46.275673Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 14 13:21:46.281117 waagent[1836]: 2025-01-14T13:21:46.275877Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 14 13:21:46.368512 waagent[1836]: 2025-01-14T13:21:46.368411Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 14 13:21:46.371543 waagent[1836]: 2025-01-14T13:21:46.370534Z INFO Daemon Daemon Forcing an update of the goal state. Jan 14 13:21:46.378497 waagent[1836]: 2025-01-14T13:21:46.378440Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:21:46.402143 waagent[1836]: 2025-01-14T13:21:46.402085Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.162 Jan 14 13:21:46.405491 waagent[1836]: 2025-01-14T13:21:46.405431Z INFO Daemon Jan 14 13:21:46.407110 waagent[1836]: 2025-01-14T13:21:46.407025Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e794950b-b9b4-4f63-a86b-d26aba6e4e09 eTag: 16452782301963194164 source: Fabric] Jan 14 13:21:46.419786 waagent[1836]: 2025-01-14T13:21:46.408631Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 14 13:21:46.419786 waagent[1836]: 2025-01-14T13:21:46.409970Z INFO Daemon Jan 14 13:21:46.419786 waagent[1836]: 2025-01-14T13:21:46.410604Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:21:46.419786 waagent[1836]: 2025-01-14T13:21:46.415556Z INFO Daemon Daemon Downloading artifacts profile blob Jan 14 13:21:46.488276 waagent[1836]: 2025-01-14T13:21:46.488204Z INFO Daemon Downloaded certificate {'thumbprint': '868C8F7417C526B1E910F1C5B4D43C1B630E7A35', 'hasPrivateKey': True} Jan 14 13:21:46.496998 waagent[1836]: 2025-01-14T13:21:46.496930Z INFO Daemon Fetch goal state completed Jan 14 13:21:46.505323 waagent[1836]: 2025-01-14T13:21:46.505280Z INFO Daemon Daemon Starting provisioning Jan 14 13:21:46.511246 waagent[1836]: 2025-01-14T13:21:46.506235Z INFO Daemon Daemon Handle ovf-env.xml. Jan 14 13:21:46.511246 waagent[1836]: 2025-01-14T13:21:46.507378Z INFO Daemon Daemon Set hostname [ci-4152.2.0-a-d0a677fe50] Jan 14 13:21:46.525373 waagent[1836]: 2025-01-14T13:21:46.525312Z INFO Daemon Daemon Publish hostname [ci-4152.2.0-a-d0a677fe50] Jan 14 13:21:46.533128 waagent[1836]: 2025-01-14T13:21:46.526637Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 14 13:21:46.533128 waagent[1836]: 2025-01-14T13:21:46.527552Z INFO Daemon Daemon Primary interface is [eth0] Jan 14 13:21:46.555020 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 14 13:21:46.555028 systemd-networkd[1416]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:21:46.555074 systemd-networkd[1416]: eth0: DHCP lease lost Jan 14 13:21:46.556375 waagent[1836]: 2025-01-14T13:21:46.556318Z INFO Daemon Daemon Create user account if not exists Jan 14 13:21:46.569691 waagent[1836]: 2025-01-14T13:21:46.558146Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 13:21:46.569691 waagent[1836]: 2025-01-14T13:21:46.558973Z INFO Daemon Daemon Configure sudoer Jan 14 13:21:46.569691 waagent[1836]: 2025-01-14T13:21:46.559690Z INFO Daemon Daemon Configure sshd Jan 14 13:21:46.569691 waagent[1836]: 2025-01-14T13:21:46.560054Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 13:21:46.569691 waagent[1836]: 2025-01-14T13:21:46.560331Z INFO Daemon Daemon Deploy ssh public key. Jan 14 13:21:46.572817 systemd-networkd[1416]: eth0: DHCPv6 lease lost Jan 14 13:21:46.608829 systemd-networkd[1416]: eth0: DHCPv4 address 10.200.4.36/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:21:47.665222 waagent[1836]: 2025-01-14T13:21:47.665163Z INFO Daemon Daemon Provisioning complete Jan 14 13:21:47.676505 waagent[1836]: 2025-01-14T13:21:47.676427Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 13:21:47.682524 waagent[1836]: 2025-01-14T13:21:47.677497Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 14 13:21:47.682524 waagent[1836]: 2025-01-14T13:21:47.678231Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 14 13:21:47.801586 waagent[1915]: 2025-01-14T13:21:47.801486Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 14 13:21:47.802070 waagent[1915]: 2025-01-14T13:21:47.801657Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.0 Jan 14 13:21:47.802070 waagent[1915]: 2025-01-14T13:21:47.801760Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 14 13:21:47.856690 waagent[1915]: 2025-01-14T13:21:47.856599Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 14 13:21:47.856943 waagent[1915]: 2025-01-14T13:21:47.856892Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:21:47.857034 waagent[1915]: 2025-01-14T13:21:47.856993Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:21:47.865023 waagent[1915]: 2025-01-14T13:21:47.864960Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:21:47.871320 waagent[1915]: 2025-01-14T13:21:47.871252Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.162 Jan 14 13:21:47.871862 waagent[1915]: 2025-01-14T13:21:47.871795Z INFO ExtHandler Jan 14 13:21:47.871961 waagent[1915]: 2025-01-14T13:21:47.871911Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 119bc1e1-de57-4bbd-9270-cbe8f0d24490 eTag: 16452782301963194164 source: Fabric] Jan 14 13:21:47.872580 waagent[1915]: 2025-01-14T13:21:47.872503Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
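The certificate 'thumbprint' values logged by waagent (868C8F74...) follow the usual Azure convention of being the uppercase SHA-1 digest of the certificate's DER encoding. A hedged sketch of how such a thumbprint is derived from a PEM certificate; the file path here is hypothetical, since the goal-state certificate itself is not part of this log:

    import hashlib
    import ssl

    with open("goal_state_cert.pem") as f:           # hypothetical path, for illustration only
        der = ssl.PEM_cert_to_DER_cert(f.read())
    print(hashlib.sha1(der).hexdigest().upper())     # waagent-style thumbprint, e.g. 868C8F7417C5...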
Jan 14 13:21:47.873289 waagent[1915]: 2025-01-14T13:21:47.873221Z INFO ExtHandler Jan 14 13:21:47.873368 waagent[1915]: 2025-01-14T13:21:47.873328Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:21:47.877138 waagent[1915]: 2025-01-14T13:21:47.877090Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 13:21:47.941603 waagent[1915]: 2025-01-14T13:21:47.941479Z INFO ExtHandler Downloaded certificate {'thumbprint': '868C8F7417C526B1E910F1C5B4D43C1B630E7A35', 'hasPrivateKey': True} Jan 14 13:21:47.942099 waagent[1915]: 2025-01-14T13:21:47.942040Z INFO ExtHandler Fetch goal state completed Jan 14 13:21:47.953759 waagent[1915]: 2025-01-14T13:21:47.953693Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1915 Jan 14 13:21:47.953934 waagent[1915]: 2025-01-14T13:21:47.953883Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 13:21:47.955500 waagent[1915]: 2025-01-14T13:21:47.955440Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 13:21:47.955907 waagent[1915]: 2025-01-14T13:21:47.955856Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 13:21:47.992980 waagent[1915]: 2025-01-14T13:21:47.992918Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 13:21:47.993229 waagent[1915]: 2025-01-14T13:21:47.993171Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 13:21:48.000440 waagent[1915]: 2025-01-14T13:21:48.000395Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 13:21:48.007129 systemd[1]: Reloading requested from client PID 1928 ('systemctl') (unit waagent.service)... Jan 14 13:21:48.007145 systemd[1]: Reloading... Jan 14 13:21:48.095800 zram_generator::config[1965]: No configuration found. Jan 14 13:21:48.208058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:21:48.290627 systemd[1]: Reloading finished in 283 ms. Jan 14 13:21:48.319780 waagent[1915]: 2025-01-14T13:21:48.319433Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 14 13:21:48.328135 systemd[1]: Reloading requested from client PID 2019 ('systemctl') (unit waagent.service)... Jan 14 13:21:48.328151 systemd[1]: Reloading... Jan 14 13:21:48.422825 zram_generator::config[2056]: No configuration found. Jan 14 13:21:48.539471 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:21:48.622339 systemd[1]: Reloading finished in 293 ms. Jan 14 13:21:48.650398 waagent[1915]: 2025-01-14T13:21:48.650287Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 13:21:48.651750 waagent[1915]: 2025-01-14T13:21:48.650498Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 13:21:49.484168 waagent[1915]: 2025-01-14T13:21:49.484068Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 14 13:21:49.484959 waagent[1915]: 2025-01-14T13:21:49.484888Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 14 13:21:49.485888 waagent[1915]: 2025-01-14T13:21:49.485785Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 14 13:21:49.486368 waagent[1915]: 2025-01-14T13:21:49.486311Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 13:21:49.486935 waagent[1915]: 2025-01-14T13:21:49.486859Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:21:49.487009 waagent[1915]: 2025-01-14T13:21:49.486949Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:21:49.487129 waagent[1915]: 2025-01-14T13:21:49.487073Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:21:49.487375 waagent[1915]: 2025-01-14T13:21:49.487253Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:21:49.487540 waagent[1915]: 2025-01-14T13:21:49.487483Z INFO EnvHandler ExtHandler Configure routes Jan 14 13:21:49.487690 waagent[1915]: 2025-01-14T13:21:49.487612Z INFO EnvHandler ExtHandler Gateway:None Jan 14 13:21:49.487850 waagent[1915]: 2025-01-14T13:21:49.487800Z INFO EnvHandler ExtHandler Routes:None Jan 14 13:21:49.487850 waagent[1915]: 2025-01-14T13:21:49.487916Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 13:21:49.488092 waagent[1915]: 2025-01-14T13:21:49.488034Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 14 13:21:49.488430 waagent[1915]: 2025-01-14T13:21:49.488351Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 13:21:49.488813 waagent[1915]: 2025-01-14T13:21:49.488710Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 14 13:21:49.489198 waagent[1915]: 2025-01-14T13:21:49.489148Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 14 13:21:49.489315 waagent[1915]: 2025-01-14T13:21:49.488909Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 13:21:49.490707 waagent[1915]: 2025-01-14T13:21:49.490626Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 13:21:49.490707 waagent[1915]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 13:21:49.490707 waagent[1915]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 13:21:49.490707 waagent[1915]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 13:21:49.490707 waagent[1915]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:21:49.490707 waagent[1915]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:21:49.490707 waagent[1915]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:21:49.502756 waagent[1915]: 2025-01-14T13:21:49.501025Z INFO ExtHandler ExtHandler Jan 14 13:21:49.502756 waagent[1915]: 2025-01-14T13:21:49.501135Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b634e345-0033-4542-b51a-e3d8ff4a8766 correlation 56c29d42-bf87-42dd-a5fe-593d943b9cb2 created: 2025-01-14T13:20:34.157722Z] Jan 14 13:21:49.502756 waagent[1915]: 2025-01-14T13:21:49.501592Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 14 13:21:49.502756 waagent[1915]: 2025-01-14T13:21:49.502344Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 14 13:21:49.543009 waagent[1915]: 2025-01-14T13:21:49.542936Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 65EF1820-8C9A-4074-861B-B78610E639D5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 14 13:21:49.554666 waagent[1915]: 2025-01-14T13:21:49.554596Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 13:21:49.554666 waagent[1915]: Executing ['ip', '-a', '-o', 'link']: Jan 14 13:21:49.554666 waagent[1915]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 13:21:49.554666 waagent[1915]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d5:b5:dd brd ff:ff:ff:ff:ff:ff Jan 14 13:21:49.554666 waagent[1915]: 3: enP4106s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d5:b5:dd brd ff:ff:ff:ff:ff:ff\ altname enP4106p0s2 Jan 14 13:21:49.554666 waagent[1915]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 13:21:49.554666 waagent[1915]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 13:21:49.554666 waagent[1915]: 2: eth0 inet 10.200.4.36/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 13:21:49.554666 waagent[1915]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 13:21:49.554666 waagent[1915]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 13:21:49.554666 waagent[1915]: 2: eth0 inet6 fe80::20d:3aff:fed5:b5dd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:21:49.554666 waagent[1915]: 3: enP4106s1 inet6 fe80::20d:3aff:fed5:b5dd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:21:49.652562 waagent[1915]: 2025-01-14T13:21:49.652480Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 14 13:21:49.652562 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:49.652562 waagent[1915]: pkts bytes target prot opt in out source destination Jan 14 13:21:49.652562 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:49.652562 waagent[1915]: pkts bytes target prot opt in out source destination Jan 14 13:21:49.652562 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:49.652562 waagent[1915]: pkts bytes target prot opt in out source destination Jan 14 13:21:49.652562 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:21:49.652562 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:21:49.652562 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 13:21:49.655928 waagent[1915]: 2025-01-14T13:21:49.655867Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 13:21:49.655928 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:49.655928 waagent[1915]: pkts bytes target prot opt in out source destination Jan 14 13:21:49.655928 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:49.655928 waagent[1915]: pkts bytes target prot opt in out source destination Jan 14 13:21:49.655928 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:49.655928 waagent[1915]: pkts bytes target prot opt in out source destination Jan 14 13:21:49.655928 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:21:49.655928 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:21:49.655928 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 13:21:49.656313 waagent[1915]: 2025-01-14T13:21:49.656176Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 13:21:55.000442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 13:21:55.007004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:21:55.180935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:21:55.193062 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:21:55.684372 kubelet[2149]: E0114 13:21:55.684313 2149 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:21:55.688251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:21:55.688443 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:22:05.749961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 13:22:05.755016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:22:06.016559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
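
The routing table that MonitorHandler dumps above comes straight from /proc/net/route, which prints the Destination, Gateway and Mask columns as little-endian hexadecimal IPv4 addresses. A minimal Python sketch, using only the rows shown in that dump, decodes the fields:

    import socket
    import struct

    def hex_to_ip(hexaddr: str) -> str:
        """Convert a /proc/net/route hex field (little-endian on this x86 guest) to dotted quad."""
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    # (destination, gateway, mask) triples copied from the MonitorHandler dump above
    rows = [
        ("00000000", "0104C80A", "00000000"),
        ("0004C80A", "00000000", "00FFFFFF"),
        ("10813FA8", "0104C80A", "FFFFFFFF"),
        ("FEA9FEA9", "0104C80A", "FFFFFFFF"),
    ]
    for dst, gw, mask in rows:
        print(f"{hex_to_ip(dst):<16} via {hex_to_ip(gw):<12} mask {hex_to_ip(mask)}")

Decoded, the table is the default route via 10.200.4.1, the on-link 10.200.4.0/24 subnet, and host routes through the same gateway to 168.63.129.16 (the Azure WireServer) and 169.254.169.254 (the instance metadata endpoint).
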
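The EnvHandler firewall dump above boils down to three OUTPUT rules: DNS to the WireServer is allowed, traffic from UID 0 is allowed, and any other new or invalid TCP flow to 168.63.129.16 is dropped. The sketch below is only a rough reconstruction of those rules as plain iptables invocations; the agent actually programs them through its own firewall code, not this script:

    import subprocess

    WIRESERVER = "168.63.129.16"

    # The three OUTPUT-chain rules listed in the log: allow DNS, allow root,
    # drop new/invalid connections from everything else.
    rules = [
        ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(["iptables", "-w", "-A", "OUTPUT", *rule], check=True)
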
Jan 14 13:22:06.028081 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:22:06.067691 kubelet[2165]: E0114 13:22:06.067634 2165 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:22:06.070503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:22:06.070706 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:22:06.883022 chronyd[1699]: Selected source PHC0 Jan 14 13:22:16.250012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 13:22:16.257954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:22:16.408369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:22:16.420055 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:22:16.938465 kubelet[2181]: E0114 13:22:16.938404 2181 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:22:16.940912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:22:16.941112 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:22:26.756956 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 14 13:22:27.000153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 13:22:27.006952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:22:27.099561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:22:27.104071 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:22:27.634297 kubelet[2197]: E0114 13:22:27.634239 2197 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:22:27.636751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:22:27.636958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:22:28.355004 update_engine[1692]: I20250114 13:22:28.354879 1692 update_attempter.cc:509] Updating boot flags... Jan 14 13:22:28.410059 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2220) Jan 14 13:22:37.749879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 13:22:37.755004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:22:38.116436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
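
Every kubelet start in this stretch fails the same way: /var/lib/kubelet/config.yaml does not exist yet, which is what you would expect on a node that has not yet been bootstrapped (that file is normally written by a kubeadm-style join step; an assumption here, since no such step appears in the log yet). systemd keeps rescheduling the unit, and the "Scheduled restart job" timestamps are a little over ten seconds apart, consistent with a restart delay of roughly 10 s in the kubelet unit (also an assumption; the unit file is not shown). A quick check of that cadence from the timestamps above:

    from datetime import datetime

    # "Scheduled restart job" timestamps copied from the journal entries above
    restarts = ["13:21:55.000442", "13:22:05.749961", "13:22:16.250012",
                "13:22:27.000153", "13:22:37.749879"]
    times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
    for earlier, later in zip(times, times[1:]):
        print(f"{(later - earlier).total_seconds():.2f} s between restarts")
    # prints gaps of about 10.5-10.75 s: the restart delay plus the short failed run
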
Jan 14 13:22:38.130092 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:22:38.172355 kubelet[2276]: E0114 13:22:38.172297 2276 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:22:38.174844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:22:38.175050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:22:41.596037 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 13:22:41.603027 systemd[1]: Started sshd@0-10.200.4.36:22-10.200.16.10:40102.service - OpenSSH per-connection server daemon (10.200.16.10:40102). Jan 14 13:22:42.307801 sshd[2285]: Accepted publickey for core from 10.200.16.10 port 40102 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:42.309298 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:42.313856 systemd-logind[1690]: New session 3 of user core. Jan 14 13:22:42.322899 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 13:22:42.836892 systemd[1]: Started sshd@1-10.200.4.36:22-10.200.16.10:40106.service - OpenSSH per-connection server daemon (10.200.16.10:40106). Jan 14 13:22:43.442492 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 40106 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:43.443902 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:43.448462 systemd-logind[1690]: New session 4 of user core. Jan 14 13:22:43.454882 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 13:22:43.872940 sshd[2292]: Connection closed by 10.200.16.10 port 40106 Jan 14 13:22:43.873794 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:43.876947 systemd[1]: sshd@1-10.200.4.36:22-10.200.16.10:40106.service: Deactivated successfully. Jan 14 13:22:43.879241 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 13:22:43.880664 systemd-logind[1690]: Session 4 logged out. Waiting for processes to exit. Jan 14 13:22:43.881558 systemd-logind[1690]: Removed session 4. Jan 14 13:22:43.980696 systemd[1]: Started sshd@2-10.200.4.36:22-10.200.16.10:40110.service - OpenSSH per-connection server daemon (10.200.16.10:40110). Jan 14 13:22:44.586332 sshd[2297]: Accepted publickey for core from 10.200.16.10 port 40110 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:44.588017 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:44.594095 systemd-logind[1690]: New session 5 of user core. Jan 14 13:22:44.599948 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 13:22:45.010973 sshd[2299]: Connection closed by 10.200.16.10 port 40110 Jan 14 13:22:45.011886 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:45.015101 systemd[1]: sshd@2-10.200.4.36:22-10.200.16.10:40110.service: Deactivated successfully. Jan 14 13:22:45.017402 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 13:22:45.019099 systemd-logind[1690]: Session 5 logged out. 
Waiting for processes to exit. Jan 14 13:22:45.020029 systemd-logind[1690]: Removed session 5. Jan 14 13:22:45.117813 systemd[1]: Started sshd@3-10.200.4.36:22-10.200.16.10:40116.service - OpenSSH per-connection server daemon (10.200.16.10:40116). Jan 14 13:22:45.725241 sshd[2304]: Accepted publickey for core from 10.200.16.10 port 40116 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:45.726872 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:45.731804 systemd-logind[1690]: New session 6 of user core. Jan 14 13:22:45.740924 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 13:22:46.153891 sshd[2306]: Connection closed by 10.200.16.10 port 40116 Jan 14 13:22:46.154685 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:46.159002 systemd[1]: sshd@3-10.200.4.36:22-10.200.16.10:40116.service: Deactivated successfully. Jan 14 13:22:46.161149 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 13:22:46.162005 systemd-logind[1690]: Session 6 logged out. Waiting for processes to exit. Jan 14 13:22:46.163016 systemd-logind[1690]: Removed session 6. Jan 14 13:22:46.269940 systemd[1]: Started sshd@4-10.200.4.36:22-10.200.16.10:56802.service - OpenSSH per-connection server daemon (10.200.16.10:56802). Jan 14 13:22:46.887772 sshd[2311]: Accepted publickey for core from 10.200.16.10 port 56802 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:46.889191 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:46.894467 systemd-logind[1690]: New session 7 of user core. Jan 14 13:22:46.901158 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 13:22:47.449014 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 13:22:47.449380 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:22:47.478176 sudo[2314]: pam_unix(sudo:session): session closed for user root Jan 14 13:22:47.583888 sshd[2313]: Connection closed by 10.200.16.10 port 56802 Jan 14 13:22:47.585016 sshd-session[2311]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:47.588362 systemd[1]: sshd@4-10.200.4.36:22-10.200.16.10:56802.service: Deactivated successfully. Jan 14 13:22:47.590312 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 13:22:47.591716 systemd-logind[1690]: Session 7 logged out. Waiting for processes to exit. Jan 14 13:22:47.592952 systemd-logind[1690]: Removed session 7. Jan 14 13:22:47.695017 systemd[1]: Started sshd@5-10.200.4.36:22-10.200.16.10:56816.service - OpenSSH per-connection server daemon (10.200.16.10:56816). Jan 14 13:22:48.186683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 14 13:22:48.192010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:22:48.299469 sshd[2319]: Accepted publickey for core from 10.200.16.10 port 56816 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:48.376553 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:48.384818 systemd-logind[1690]: New session 8 of user core. Jan 14 13:22:48.389931 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 13:22:48.441471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:22:48.445950 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:22:48.485554 kubelet[2330]: E0114 13:22:48.485499 2330 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:22:48.488060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:22:48.488273 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:22:48.636374 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 13:22:48.636743 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:22:48.640068 sudo[2339]: pam_unix(sudo:session): session closed for user root Jan 14 13:22:48.644830 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 13:22:48.645159 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:22:48.657105 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:22:48.683362 augenrules[2361]: No rules Jan 14 13:22:48.684786 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:22:48.685016 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:22:48.686394 sudo[2338]: pam_unix(sudo:session): session closed for user root Jan 14 13:22:48.782166 sshd[2324]: Connection closed by 10.200.16.10 port 56816 Jan 14 13:22:48.783007 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:48.786129 systemd[1]: sshd@5-10.200.4.36:22-10.200.16.10:56816.service: Deactivated successfully. Jan 14 13:22:48.787995 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 13:22:48.789390 systemd-logind[1690]: Session 8 logged out. Waiting for processes to exit. Jan 14 13:22:48.790473 systemd-logind[1690]: Removed session 8. Jan 14 13:22:48.888591 systemd[1]: Started sshd@6-10.200.4.36:22-10.200.16.10:56824.service - OpenSSH per-connection server daemon (10.200.16.10:56824). Jan 14 13:22:49.503154 sshd[2369]: Accepted publickey for core from 10.200.16.10 port 56824 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:49.504607 sshd-session[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:49.509271 systemd-logind[1690]: New session 9 of user core. Jan 14 13:22:49.519878 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 13:22:49.841644 sudo[2372]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 13:22:49.842021 sudo[2372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:22:51.091892 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:22:51.098159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:22:51.118883 systemd[1]: Reloading requested from client PID 2411 ('systemctl') (unit session-9.scope)... Jan 14 13:22:51.118899 systemd[1]: Reloading... Jan 14 13:22:51.224822 zram_generator::config[2446]: No configuration found. 
Jan 14 13:22:51.359372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:22:51.450017 systemd[1]: Reloading finished in 330 ms. Jan 14 13:22:51.506482 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 13:22:51.506551 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 13:22:51.506870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:22:51.519264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:22:51.775056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:22:51.796357 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:22:51.835996 kubelet[2520]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:22:51.835996 kubelet[2520]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 13:22:51.835996 kubelet[2520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:22:52.475883 kubelet[2520]: I0114 13:22:52.475784 2520 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:22:52.767891 kubelet[2520]: I0114 13:22:52.767850 2520 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 14 13:22:52.767891 kubelet[2520]: I0114 13:22:52.767878 2520 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:22:52.768178 kubelet[2520]: I0114 13:22:52.768156 2520 server.go:927] "Client rotation is on, will bootstrap in background" Jan 14 13:22:52.786436 kubelet[2520]: I0114 13:22:52.785403 2520 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:22:52.802621 kubelet[2520]: I0114 13:22:52.802588 2520 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:22:52.804269 kubelet[2520]: I0114 13:22:52.804226 2520 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:22:52.804461 kubelet[2520]: I0114 13:22:52.804264 2520 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.4.36","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 13:22:52.804879 kubelet[2520]: I0114 13:22:52.804858 2520 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:22:52.804960 kubelet[2520]: I0114 13:22:52.804883 2520 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 13:22:52.805056 kubelet[2520]: I0114 13:22:52.805034 2520 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:22:52.806077 kubelet[2520]: I0114 13:22:52.806054 2520 kubelet.go:400] "Attempting to sync node with API server" Jan 14 13:22:52.806077 kubelet[2520]: I0114 13:22:52.806077 2520 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:22:52.806199 kubelet[2520]: I0114 13:22:52.806102 2520 kubelet.go:312] "Adding apiserver pod source" Jan 14 13:22:52.806199 kubelet[2520]: I0114 13:22:52.806122 2520 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:22:52.807275 kubelet[2520]: E0114 13:22:52.807172 2520 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:52.808797 kubelet[2520]: E0114 13:22:52.808506 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:52.810494 kubelet[2520]: I0114 13:22:52.810282 2520 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 14 13:22:52.812751 kubelet[2520]: I0114 13:22:52.811943 2520 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:22:52.812751 kubelet[2520]: W0114 13:22:52.812002 2520 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 13:22:52.812751 kubelet[2520]: I0114 13:22:52.812666 2520 server.go:1264] "Started kubelet" Jan 14 13:22:52.814206 kubelet[2520]: I0114 13:22:52.814178 2520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:22:52.820341 kubelet[2520]: I0114 13:22:52.820295 2520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:22:52.823466 kubelet[2520]: W0114 13:22:52.821325 2520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.4.36" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 14 13:22:52.823466 kubelet[2520]: E0114 13:22:52.821367 2520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.4.36" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 14 13:22:52.823466 kubelet[2520]: W0114 13:22:52.821484 2520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 14 13:22:52.823466 kubelet[2520]: E0114 13:22:52.821501 2520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 14 13:22:52.823466 kubelet[2520]: I0114 13:22:52.821559 2520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:22:52.823466 kubelet[2520]: I0114 13:22:52.821652 2520 server.go:455] "Adding debug handlers to kubelet server" Jan 14 13:22:52.823466 kubelet[2520]: I0114 13:22:52.821911 2520 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:22:52.823466 kubelet[2520]: I0114 13:22:52.822658 2520 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 13:22:52.826957 kubelet[2520]: I0114 13:22:52.826936 2520 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:22:52.827055 kubelet[2520]: I0114 13:22:52.826983 2520 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 14 13:22:52.828383 kubelet[2520]: I0114 13:22:52.828358 2520 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:22:52.828478 kubelet[2520]: I0114 13:22:52.828456 2520 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:22:52.831146 kubelet[2520]: I0114 13:22:52.831122 2520 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:22:52.838349 kubelet[2520]: E0114 13:22:52.838230 2520 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.4.36\" not found" node="10.200.4.36" Jan 14 13:22:52.844105 kubelet[2520]: I0114 13:22:52.844014 2520 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 13:22:52.844105 kubelet[2520]: I0114 13:22:52.844029 2520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 13:22:52.844105 kubelet[2520]: I0114 13:22:52.844046 2520 state_mem.go:36] "Initialized new in-memory state store" 
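
The single-line NodeConfig blob logged by container_manager_linux.go above is easiest to read once the HardEvictionThresholds array is pulled out on its own. The excerpt below is copied from that entry (unrelated fields dropped) and simply reprints the thresholds:

    import json

    # HardEvictionThresholds excerpted from the NodeConfig log entry above
    excerpt = '''[
      {"Signal": "nodefs.inodesFree", "Value": {"Quantity": null, "Percentage": 0.05}},
      {"Signal": "imagefs.available", "Value": {"Quantity": null, "Percentage": 0.15}},
      {"Signal": "imagefs.inodesFree", "Value": {"Quantity": null, "Percentage": 0.05}},
      {"Signal": "memory.available", "Value": {"Quantity": "100Mi", "Percentage": 0}},
      {"Signal": "nodefs.available", "Value": {"Quantity": null, "Percentage": 0.1}}
    ]'''

    for threshold in json.loads(excerpt):
        value = threshold["Value"]
        limit = value["Quantity"] or f"{value['Percentage']:.0%}"
        print(f"{threshold['Signal']:<20} evict when below {limit}")
    # memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, ...
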
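The probe warning above is also the root of the long run of FlexVolume errors further down: kubelet recreates /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, then repeatedly tries to initialize the nodeagent~uds driver whose "uds" executable is not installed, so every init call returns no output and the JSON parse fails with "unexpected end of JSON input". A minimal sketch of that driver-call convention follows; the real logic is kubelet's Go code in driver-call.go, and this Python stand-in is only illustrative:

    import json
    import subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    def flexvolume_init(driver: str) -> dict:
        """Run a FlexVolume driver's 'init' call and parse its JSON reply."""
        out = subprocess.run([driver, "init"], capture_output=True, text=True)
        # A conforming driver prints JSON such as {"status": "Success"}.
        # A missing or silent driver yields empty output; Go's encoding/json
        # reports that as "unexpected end of JSON input" (Python raises a
        # JSONDecodeError instead), which is exactly what this log shows.
        return json.loads(out.stdout)

    try:
        print(flexvolume_init(DRIVER))
    except (OSError, json.JSONDecodeError) as err:
        print(f"driver call failed: {err}")
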
Jan 14 13:22:52.851335 kubelet[2520]: I0114 13:22:52.851315 2520 policy_none.go:49] "None policy: Start" Jan 14 13:22:52.851855 kubelet[2520]: I0114 13:22:52.851836 2520 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 13:22:52.851921 kubelet[2520]: I0114 13:22:52.851890 2520 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:22:52.861994 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 13:22:52.872715 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 13:22:52.876204 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 13:22:52.885223 kubelet[2520]: I0114 13:22:52.884877 2520 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:22:52.885223 kubelet[2520]: I0114 13:22:52.885149 2520 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:22:52.885676 kubelet[2520]: I0114 13:22:52.885661 2520 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:22:52.885993 kubelet[2520]: I0114 13:22:52.885947 2520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:22:52.887876 kubelet[2520]: I0114 13:22:52.887854 2520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 13:22:52.888713 kubelet[2520]: I0114 13:22:52.887897 2520 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 13:22:52.888713 kubelet[2520]: I0114 13:22:52.887917 2520 kubelet.go:2337] "Starting kubelet main sync loop" Jan 14 13:22:52.888713 kubelet[2520]: E0114 13:22:52.887960 2520 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 14 13:22:52.894959 kubelet[2520]: E0114 13:22:52.894939 2520 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.4.36\" not found" Jan 14 13:22:52.925201 kubelet[2520]: I0114 13:22:52.925145 2520 kubelet_node_status.go:73] "Attempting to register node" node="10.200.4.36" Jan 14 13:22:52.929407 kubelet[2520]: I0114 13:22:52.929379 2520 kubelet_node_status.go:76] "Successfully registered node" node="10.200.4.36" Jan 14 13:22:52.948669 kubelet[2520]: E0114 13:22:52.948641 2520 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.36\" not found" Jan 14 13:22:53.049152 kubelet[2520]: E0114 13:22:53.048877 2520 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.36\" not found" Jan 14 13:22:53.149620 kubelet[2520]: E0114 13:22:53.149562 2520 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.36\" not found" Jan 14 13:22:53.250004 kubelet[2520]: E0114 13:22:53.249956 2520 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.36\" not found" Jan 14 13:22:53.319929 sudo[2372]: pam_unix(sudo:session): session closed for user root Jan 14 13:22:53.351128 kubelet[2520]: E0114 13:22:53.351034 2520 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.36\" not found" Jan 14 13:22:53.425625 sshd[2371]: Connection closed by 10.200.16.10 port 56824 Jan 14 13:22:53.426422 sshd-session[2369]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:53.430977 systemd[1]: 
sshd@6-10.200.4.36:22-10.200.16.10:56824.service: Deactivated successfully. Jan 14 13:22:53.433460 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 13:22:53.434487 systemd-logind[1690]: Session 9 logged out. Waiting for processes to exit. Jan 14 13:22:53.435579 systemd-logind[1690]: Removed session 9. Jan 14 13:22:53.452055 kubelet[2520]: E0114 13:22:53.452016 2520 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.36\" not found" Jan 14 13:22:53.552968 kubelet[2520]: E0114 13:22:53.552852 2520 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.36\" not found" Jan 14 13:22:53.653946 kubelet[2520]: I0114 13:22:53.653828 2520 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 14 13:22:53.654190 containerd[1707]: time="2025-01-14T13:22:53.654152947Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 13:22:53.654906 kubelet[2520]: I0114 13:22:53.654349 2520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 14 13:22:53.769826 kubelet[2520]: I0114 13:22:53.769789 2520 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 14 13:22:53.770013 kubelet[2520]: W0114 13:22:53.769990 2520 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 14 13:22:53.770126 kubelet[2520]: W0114 13:22:53.770094 2520 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 14 13:22:53.770126 kubelet[2520]: W0114 13:22:53.770117 2520 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 14 13:22:53.809272 kubelet[2520]: I0114 13:22:53.809196 2520 apiserver.go:52] "Watching apiserver" Jan 14 13:22:53.809483 kubelet[2520]: E0114 13:22:53.809213 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:53.818856 kubelet[2520]: I0114 13:22:53.818809 2520 topology_manager.go:215] "Topology Admit Handler" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" podNamespace="calico-system" podName="csi-node-driver-9zp74" Jan 14 13:22:53.820613 kubelet[2520]: I0114 13:22:53.818921 2520 topology_manager.go:215] "Topology Admit Handler" podUID="2579aa49-5793-4b01-837f-6c814116e577" podNamespace="kube-system" podName="kube-proxy-bf8c5" Jan 14 13:22:53.820613 kubelet[2520]: E0114 13:22:53.819091 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:22:53.820613 kubelet[2520]: I0114 13:22:53.819234 2520 topology_manager.go:215] "Topology Admit Handler" podUID="76134f0a-630b-4168-99e4-607e72c09538" podNamespace="calico-system" 
podName="calico-node-nszk7" Jan 14 13:22:53.830243 systemd[1]: Created slice kubepods-besteffort-pod76134f0a_630b_4168_99e4_607e72c09538.slice - libcontainer container kubepods-besteffort-pod76134f0a_630b_4168_99e4_607e72c09538.slice. Jan 14 13:22:53.832504 kubelet[2520]: I0114 13:22:53.832481 2520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 14 13:22:53.844127 systemd[1]: Created slice kubepods-besteffort-pod2579aa49_5793_4b01_837f_6c814116e577.slice - libcontainer container kubepods-besteffort-pod2579aa49_5793_4b01_837f_6c814116e577.slice. Jan 14 13:22:53.931833 kubelet[2520]: I0114 13:22:53.931459 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/76134f0a-630b-4168-99e4-607e72c09538-cni-net-dir\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.931833 kubelet[2520]: I0114 13:22:53.931522 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/76134f0a-630b-4168-99e4-607e72c09538-flexvol-driver-host\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.931833 kubelet[2520]: I0114 13:22:53.931558 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eeb30db5-16f4-4252-98a5-62dc3d0af113-varrun\") pod \"csi-node-driver-9zp74\" (UID: \"eeb30db5-16f4-4252-98a5-62dc3d0af113\") " pod="calico-system/csi-node-driver-9zp74" Jan 14 13:22:53.931833 kubelet[2520]: I0114 13:22:53.931589 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bnc7\" (UniqueName: \"kubernetes.io/projected/2579aa49-5793-4b01-837f-6c814116e577-kube-api-access-6bnc7\") pod \"kube-proxy-bf8c5\" (UID: \"2579aa49-5793-4b01-837f-6c814116e577\") " pod="kube-system/kube-proxy-bf8c5" Jan 14 13:22:53.931833 kubelet[2520]: I0114 13:22:53.931617 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76134f0a-630b-4168-99e4-607e72c09538-lib-modules\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.932557 kubelet[2520]: I0114 13:22:53.931643 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76134f0a-630b-4168-99e4-607e72c09538-tigera-ca-bundle\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.932557 kubelet[2520]: I0114 13:22:53.931671 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/76134f0a-630b-4168-99e4-607e72c09538-var-lib-calico\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.932557 kubelet[2520]: I0114 13:22:53.931712 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/eeb30db5-16f4-4252-98a5-62dc3d0af113-kubelet-dir\") pod \"csi-node-driver-9zp74\" (UID: \"eeb30db5-16f4-4252-98a5-62dc3d0af113\") " pod="calico-system/csi-node-driver-9zp74" Jan 14 13:22:53.932557 kubelet[2520]: I0114 13:22:53.931769 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2579aa49-5793-4b01-837f-6c814116e577-kube-proxy\") pod \"kube-proxy-bf8c5\" (UID: \"2579aa49-5793-4b01-837f-6c814116e577\") " pod="kube-system/kube-proxy-bf8c5" Jan 14 13:22:53.932557 kubelet[2520]: I0114 13:22:53.931810 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2579aa49-5793-4b01-837f-6c814116e577-lib-modules\") pod \"kube-proxy-bf8c5\" (UID: \"2579aa49-5793-4b01-837f-6c814116e577\") " pod="kube-system/kube-proxy-bf8c5" Jan 14 13:22:53.933260 kubelet[2520]: I0114 13:22:53.931837 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/76134f0a-630b-4168-99e4-607e72c09538-var-run-calico\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.933260 kubelet[2520]: I0114 13:22:53.931867 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eeb30db5-16f4-4252-98a5-62dc3d0af113-registration-dir\") pod \"csi-node-driver-9zp74\" (UID: \"eeb30db5-16f4-4252-98a5-62dc3d0af113\") " pod="calico-system/csi-node-driver-9zp74" Jan 14 13:22:53.933260 kubelet[2520]: I0114 13:22:53.931902 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76134f0a-630b-4168-99e4-607e72c09538-xtables-lock\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.933260 kubelet[2520]: I0114 13:22:53.931930 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/76134f0a-630b-4168-99e4-607e72c09538-cni-log-dir\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.933260 kubelet[2520]: I0114 13:22:53.931957 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtzh\" (UniqueName: \"kubernetes.io/projected/76134f0a-630b-4168-99e4-607e72c09538-kube-api-access-vvtzh\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.933641 kubelet[2520]: I0114 13:22:53.931987 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/76134f0a-630b-4168-99e4-607e72c09538-cni-bin-dir\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.933641 kubelet[2520]: I0114 13:22:53.932016 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eeb30db5-16f4-4252-98a5-62dc3d0af113-socket-dir\") pod 
\"csi-node-driver-9zp74\" (UID: \"eeb30db5-16f4-4252-98a5-62dc3d0af113\") " pod="calico-system/csi-node-driver-9zp74" Jan 14 13:22:53.933641 kubelet[2520]: I0114 13:22:53.932045 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64dzv\" (UniqueName: \"kubernetes.io/projected/eeb30db5-16f4-4252-98a5-62dc3d0af113-kube-api-access-64dzv\") pod \"csi-node-driver-9zp74\" (UID: \"eeb30db5-16f4-4252-98a5-62dc3d0af113\") " pod="calico-system/csi-node-driver-9zp74" Jan 14 13:22:53.933641 kubelet[2520]: I0114 13:22:53.932089 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2579aa49-5793-4b01-837f-6c814116e577-xtables-lock\") pod \"kube-proxy-bf8c5\" (UID: \"2579aa49-5793-4b01-837f-6c814116e577\") " pod="kube-system/kube-proxy-bf8c5" Jan 14 13:22:53.933641 kubelet[2520]: I0114 13:22:53.932116 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/76134f0a-630b-4168-99e4-607e72c09538-policysync\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:53.933956 kubelet[2520]: I0114 13:22:53.932150 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/76134f0a-630b-4168-99e4-607e72c09538-node-certs\") pod \"calico-node-nszk7\" (UID: \"76134f0a-630b-4168-99e4-607e72c09538\") " pod="calico-system/calico-node-nszk7" Jan 14 13:22:54.040082 kubelet[2520]: E0114 13:22:54.040035 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.040405 kubelet[2520]: W0114 13:22:54.040059 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.040405 kubelet[2520]: E0114 13:22:54.040285 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.051013 kubelet[2520]: E0114 13:22:54.040722 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.051013 kubelet[2520]: W0114 13:22:54.048903 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.051013 kubelet[2520]: E0114 13:22:54.048927 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:54.051013 kubelet[2520]: E0114 13:22:54.049206 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.051013 kubelet[2520]: W0114 13:22:54.049218 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.051013 kubelet[2520]: E0114 13:22:54.049234 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.051013 kubelet[2520]: E0114 13:22:54.049431 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.051013 kubelet[2520]: W0114 13:22:54.049441 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.051013 kubelet[2520]: E0114 13:22:54.049459 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.051013 kubelet[2520]: E0114 13:22:54.049668 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.051567 kubelet[2520]: W0114 13:22:54.049678 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.051567 kubelet[2520]: E0114 13:22:54.049698 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.051567 kubelet[2520]: E0114 13:22:54.049941 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.051567 kubelet[2520]: W0114 13:22:54.049954 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.051567 kubelet[2520]: E0114 13:22:54.049970 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.051567 kubelet[2520]: E0114 13:22:54.050154 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.051567 kubelet[2520]: W0114 13:22:54.050164 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.051567 kubelet[2520]: E0114 13:22:54.050182 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:54.051567 kubelet[2520]: E0114 13:22:54.050342 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.051567 kubelet[2520]: W0114 13:22:54.050355 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.051990 kubelet[2520]: E0114 13:22:54.050374 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.051990 kubelet[2520]: E0114 13:22:54.050563 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.051990 kubelet[2520]: W0114 13:22:54.050578 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.051990 kubelet[2520]: E0114 13:22:54.050589 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.051990 kubelet[2520]: E0114 13:22:54.051918 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.051990 kubelet[2520]: W0114 13:22:54.051932 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.051990 kubelet[2520]: E0114 13:22:54.051947 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.052294 kubelet[2520]: E0114 13:22:54.052154 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.052294 kubelet[2520]: W0114 13:22:54.052164 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.052294 kubelet[2520]: E0114 13:22:54.052177 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.052415 kubelet[2520]: E0114 13:22:54.052355 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.052415 kubelet[2520]: W0114 13:22:54.052365 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.052415 kubelet[2520]: E0114 13:22:54.052383 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:54.052553 kubelet[2520]: E0114 13:22:54.052546 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.052597 kubelet[2520]: W0114 13:22:54.052556 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.052597 kubelet[2520]: E0114 13:22:54.052573 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.059774 kubelet[2520]: E0114 13:22:54.052791 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.059774 kubelet[2520]: W0114 13:22:54.052806 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.059774 kubelet[2520]: E0114 13:22:54.052828 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.059774 kubelet[2520]: E0114 13:22:54.053262 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.059774 kubelet[2520]: W0114 13:22:54.053273 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.059774 kubelet[2520]: E0114 13:22:54.053288 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.059774 kubelet[2520]: E0114 13:22:54.053477 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.059774 kubelet[2520]: W0114 13:22:54.053488 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.059774 kubelet[2520]: E0114 13:22:54.053502 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.059774 kubelet[2520]: E0114 13:22:54.053687 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.062777 kubelet[2520]: W0114 13:22:54.053699 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.062777 kubelet[2520]: E0114 13:22:54.053715 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:54.062777 kubelet[2520]: E0114 13:22:54.054004 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.062777 kubelet[2520]: W0114 13:22:54.054015 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.062777 kubelet[2520]: E0114 13:22:54.054037 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.062777 kubelet[2520]: E0114 13:22:54.054223 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.062777 kubelet[2520]: W0114 13:22:54.054233 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.062777 kubelet[2520]: E0114 13:22:54.054255 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.062777 kubelet[2520]: E0114 13:22:54.054428 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.062777 kubelet[2520]: W0114 13:22:54.054441 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.063680 kubelet[2520]: E0114 13:22:54.054459 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.063680 kubelet[2520]: E0114 13:22:54.054611 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.063680 kubelet[2520]: W0114 13:22:54.054624 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.063680 kubelet[2520]: E0114 13:22:54.054637 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.063680 kubelet[2520]: E0114 13:22:54.054878 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.063680 kubelet[2520]: W0114 13:22:54.054889 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.063680 kubelet[2520]: E0114 13:22:54.054902 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:54.063680 kubelet[2520]: E0114 13:22:54.055070 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.063680 kubelet[2520]: W0114 13:22:54.055077 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.063680 kubelet[2520]: E0114 13:22:54.055089 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.069551 kubelet[2520]: E0114 13:22:54.055236 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.069551 kubelet[2520]: W0114 13:22:54.055244 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.069551 kubelet[2520]: E0114 13:22:54.055254 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.069551 kubelet[2520]: E0114 13:22:54.055402 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.069551 kubelet[2520]: W0114 13:22:54.055410 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.069551 kubelet[2520]: E0114 13:22:54.055419 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.069551 kubelet[2520]: E0114 13:22:54.055604 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.069551 kubelet[2520]: W0114 13:22:54.055613 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.069551 kubelet[2520]: E0114 13:22:54.055629 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.069551 kubelet[2520]: E0114 13:22:54.055836 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.069971 kubelet[2520]: W0114 13:22:54.055846 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.069971 kubelet[2520]: E0114 13:22:54.055865 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:54.069971 kubelet[2520]: E0114 13:22:54.056009 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.069971 kubelet[2520]: W0114 13:22:54.056023 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.069971 kubelet[2520]: E0114 13:22:54.056033 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.069971 kubelet[2520]: E0114 13:22:54.056209 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.069971 kubelet[2520]: W0114 13:22:54.056219 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.069971 kubelet[2520]: E0114 13:22:54.056233 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.069971 kubelet[2520]: E0114 13:22:54.056656 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.069971 kubelet[2520]: W0114 13:22:54.056668 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.070331 kubelet[2520]: E0114 13:22:54.056691 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.070331 kubelet[2520]: E0114 13:22:54.056910 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.070331 kubelet[2520]: W0114 13:22:54.056990 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.070331 kubelet[2520]: E0114 13:22:54.057012 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.070331 kubelet[2520]: E0114 13:22:54.064228 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.070331 kubelet[2520]: W0114 13:22:54.064243 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.070331 kubelet[2520]: E0114 13:22:54.064262 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:54.070331 kubelet[2520]: E0114 13:22:54.067860 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:54.070331 kubelet[2520]: W0114 13:22:54.067879 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:54.070331 kubelet[2520]: E0114 13:22:54.067899 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:54.141979 containerd[1707]: time="2025-01-14T13:22:54.141923651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nszk7,Uid:76134f0a-630b-4168-99e4-607e72c09538,Namespace:calico-system,Attempt:0,}" Jan 14 13:22:54.147477 containerd[1707]: time="2025-01-14T13:22:54.147441692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bf8c5,Uid:2579aa49-5793-4b01-837f-6c814116e577,Namespace:kube-system,Attempt:0,}" Jan 14 13:22:54.809852 kubelet[2520]: E0114 13:22:54.809815 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:54.989561 containerd[1707]: time="2025-01-14T13:22:54.989510953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:22:54.996463 containerd[1707]: time="2025-01-14T13:22:54.996410389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 14 13:22:54.998912 containerd[1707]: time="2025-01-14T13:22:54.998877274Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:22:55.003067 containerd[1707]: time="2025-01-14T13:22:55.003032116Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:22:55.005819 containerd[1707]: time="2025-01-14T13:22:55.005783611Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:22:55.009577 containerd[1707]: time="2025-01-14T13:22:55.009529739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:22:55.012462 containerd[1707]: time="2025-01-14T13:22:55.012237432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 864.682134ms" Jan 14 13:22:55.018231 containerd[1707]: time="2025-01-14T13:22:55.018196636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 876.148879ms" Jan 14 13:22:55.042626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2795012347.mount: Deactivated successfully. Jan 14 13:22:55.810223 kubelet[2520]: E0114 13:22:55.810184 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:55.861758 containerd[1707]: time="2025-01-14T13:22:55.861278344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:22:55.861758 containerd[1707]: time="2025-01-14T13:22:55.861325245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:22:55.861758 containerd[1707]: time="2025-01-14T13:22:55.861343746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:55.861758 containerd[1707]: time="2025-01-14T13:22:55.861264043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:22:55.861758 containerd[1707]: time="2025-01-14T13:22:55.861319545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:22:55.861758 containerd[1707]: time="2025-01-14T13:22:55.861335146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:55.861758 containerd[1707]: time="2025-01-14T13:22:55.861417048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:55.862748 containerd[1707]: time="2025-01-14T13:22:55.862583188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:55.888670 kubelet[2520]: E0114 13:22:55.888618 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:22:56.240912 systemd[1]: Started cri-containerd-86a82f7d608ad7e25136739f3329883c64f2b31266bb88ddf4efdd6335b88c26.scope - libcontainer container 86a82f7d608ad7e25136739f3329883c64f2b31266bb88ddf4efdd6335b88c26. Jan 14 13:22:56.243702 systemd[1]: Started cri-containerd-8da8e988bb71c6a45a27f847bc4a47bd4d0360da1647358963152c770ecd53c6.scope - libcontainer container 8da8e988bb71c6a45a27f847bc4a47bd4d0360da1647358963152c770ecd53c6. 
Jan 14 13:22:56.277260 containerd[1707]: time="2025-01-14T13:22:56.277177204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bf8c5,Uid:2579aa49-5793-4b01-837f-6c814116e577,Namespace:kube-system,Attempt:0,} returns sandbox id \"86a82f7d608ad7e25136739f3329883c64f2b31266bb88ddf4efdd6335b88c26\"" Jan 14 13:22:56.281410 containerd[1707]: time="2025-01-14T13:22:56.281367448Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 14 13:22:56.283139 containerd[1707]: time="2025-01-14T13:22:56.283101607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nszk7,Uid:76134f0a-630b-4168-99e4-607e72c09538,Namespace:calico-system,Attempt:0,} returns sandbox id \"8da8e988bb71c6a45a27f847bc4a47bd4d0360da1647358963152c770ecd53c6\"" Jan 14 13:22:56.810671 kubelet[2520]: E0114 13:22:56.810623 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:57.683694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3249961778.mount: Deactivated successfully. Jan 14 13:22:57.811329 kubelet[2520]: E0114 13:22:57.811279 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:57.889105 kubelet[2520]: E0114 13:22:57.889028 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:22:58.189623 containerd[1707]: time="2025-01-14T13:22:58.189573176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:58.192798 containerd[1707]: time="2025-01-14T13:22:58.192618980Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Jan 14 13:22:58.197057 containerd[1707]: time="2025-01-14T13:22:58.196840525Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:58.201070 containerd[1707]: time="2025-01-14T13:22:58.201016768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:58.201809 containerd[1707]: time="2025-01-14T13:22:58.201597288Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.920057335s" Jan 14 13:22:58.201809 containerd[1707]: time="2025-01-14T13:22:58.201633489Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 14 13:22:58.202966 containerd[1707]: time="2025-01-14T13:22:58.202938234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 14 13:22:58.204558 containerd[1707]: time="2025-01-14T13:22:58.204479087Z" level=info msg="CreateContainer 
within sandbox \"86a82f7d608ad7e25136739f3329883c64f2b31266bb88ddf4efdd6335b88c26\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 13:22:58.272528 containerd[1707]: time="2025-01-14T13:22:58.272488819Z" level=info msg="CreateContainer within sandbox \"86a82f7d608ad7e25136739f3329883c64f2b31266bb88ddf4efdd6335b88c26\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf3ae4138623d5ea3252ab5e4f6952fdb82c9d80d9959abdaacb44ce772ffc93\"" Jan 14 13:22:58.273073 containerd[1707]: time="2025-01-14T13:22:58.273042138Z" level=info msg="StartContainer for \"cf3ae4138623d5ea3252ab5e4f6952fdb82c9d80d9959abdaacb44ce772ffc93\"" Jan 14 13:22:58.306896 systemd[1]: Started cri-containerd-cf3ae4138623d5ea3252ab5e4f6952fdb82c9d80d9959abdaacb44ce772ffc93.scope - libcontainer container cf3ae4138623d5ea3252ab5e4f6952fdb82c9d80d9959abdaacb44ce772ffc93. Jan 14 13:22:58.338481 containerd[1707]: time="2025-01-14T13:22:58.338429280Z" level=info msg="StartContainer for \"cf3ae4138623d5ea3252ab5e4f6952fdb82c9d80d9959abdaacb44ce772ffc93\" returns successfully" Jan 14 13:22:58.812187 kubelet[2520]: E0114 13:22:58.812059 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:58.926392 kubelet[2520]: I0114 13:22:58.926318 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bf8c5" podStartSLOduration=5.003952623 podStartE2EDuration="6.926296336s" podCreationTimestamp="2025-01-14 13:22:52 +0000 UTC" firstStartedPulling="2025-01-14 13:22:56.280332912 +0000 UTC m=+4.479849570" lastFinishedPulling="2025-01-14 13:22:58.202676625 +0000 UTC m=+6.402193283" observedRunningTime="2025-01-14 13:22:58.925281502 +0000 UTC m=+7.124798160" watchObservedRunningTime="2025-01-14 13:22:58.926296336 +0000 UTC m=+7.125812994" Jan 14 13:22:58.967960 kubelet[2520]: E0114 13:22:58.967912 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.967960 kubelet[2520]: W0114 13:22:58.967951 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.968357 kubelet[2520]: E0114 13:22:58.967979 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.968357 kubelet[2520]: E0114 13:22:58.968250 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.968357 kubelet[2520]: W0114 13:22:58.968266 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.968357 kubelet[2520]: E0114 13:22:58.968282 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:58.968679 kubelet[2520]: E0114 13:22:58.968501 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.968679 kubelet[2520]: W0114 13:22:58.968514 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.968679 kubelet[2520]: E0114 13:22:58.968529 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.969064 kubelet[2520]: E0114 13:22:58.968774 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.969064 kubelet[2520]: W0114 13:22:58.968787 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.969064 kubelet[2520]: E0114 13:22:58.968802 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.969064 kubelet[2520]: E0114 13:22:58.969047 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.969064 kubelet[2520]: W0114 13:22:58.969061 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.969454 kubelet[2520]: E0114 13:22:58.969076 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.969454 kubelet[2520]: E0114 13:22:58.969282 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.969454 kubelet[2520]: W0114 13:22:58.969292 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.969454 kubelet[2520]: E0114 13:22:58.969303 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.969701 kubelet[2520]: E0114 13:22:58.969469 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.969701 kubelet[2520]: W0114 13:22:58.969479 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.969701 kubelet[2520]: E0114 13:22:58.969491 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:58.969701 kubelet[2520]: E0114 13:22:58.969677 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.969701 kubelet[2520]: W0114 13:22:58.969686 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.969701 kubelet[2520]: E0114 13:22:58.969700 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.970097 kubelet[2520]: E0114 13:22:58.969894 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.970097 kubelet[2520]: W0114 13:22:58.969904 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.970097 kubelet[2520]: E0114 13:22:58.969916 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.970097 kubelet[2520]: E0114 13:22:58.970080 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.970097 kubelet[2520]: W0114 13:22:58.970090 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.970454 kubelet[2520]: E0114 13:22:58.970102 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.970454 kubelet[2520]: E0114 13:22:58.970285 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.970454 kubelet[2520]: W0114 13:22:58.970296 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.970454 kubelet[2520]: E0114 13:22:58.970307 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.970694 kubelet[2520]: E0114 13:22:58.970479 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.970694 kubelet[2520]: W0114 13:22:58.970488 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.970694 kubelet[2520]: E0114 13:22:58.970500 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:58.970694 kubelet[2520]: E0114 13:22:58.970682 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.970694 kubelet[2520]: W0114 13:22:58.970690 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.971083 kubelet[2520]: E0114 13:22:58.970702 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.971083 kubelet[2520]: E0114 13:22:58.970905 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.971083 kubelet[2520]: W0114 13:22:58.970915 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.971083 kubelet[2520]: E0114 13:22:58.970926 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.971343 kubelet[2520]: E0114 13:22:58.971099 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.971343 kubelet[2520]: W0114 13:22:58.971109 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.971343 kubelet[2520]: E0114 13:22:58.971121 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.971343 kubelet[2520]: E0114 13:22:58.971296 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.971343 kubelet[2520]: W0114 13:22:58.971307 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.971343 kubelet[2520]: E0114 13:22:58.971319 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.971692 kubelet[2520]: E0114 13:22:58.971496 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.971692 kubelet[2520]: W0114 13:22:58.971505 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.971692 kubelet[2520]: E0114 13:22:58.971517 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:58.971692 kubelet[2520]: E0114 13:22:58.971687 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.971925 kubelet[2520]: W0114 13:22:58.971696 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.971925 kubelet[2520]: E0114 13:22:58.971708 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.971925 kubelet[2520]: E0114 13:22:58.971908 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.971925 kubelet[2520]: W0114 13:22:58.971918 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.972161 kubelet[2520]: E0114 13:22:58.971930 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:58.972161 kubelet[2520]: E0114 13:22:58.972110 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:58.972161 kubelet[2520]: W0114 13:22:58.972118 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:58.972161 kubelet[2520]: E0114 13:22:58.972130 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:59.068553 kubelet[2520]: E0114 13:22:59.068428 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.068553 kubelet[2520]: W0114 13:22:59.068455 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.068553 kubelet[2520]: E0114 13:22:59.068482 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:59.070237 kubelet[2520]: E0114 13:22:59.068862 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.070237 kubelet[2520]: W0114 13:22:59.068876 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.070237 kubelet[2520]: E0114 13:22:59.068903 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:59.070237 kubelet[2520]: E0114 13:22:59.069343 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.070237 kubelet[2520]: W0114 13:22:59.069356 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.070237 kubelet[2520]: E0114 13:22:59.069384 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:59.070237 kubelet[2520]: E0114 13:22:59.069654 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.070237 kubelet[2520]: W0114 13:22:59.069665 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.070237 kubelet[2520]: E0114 13:22:59.069686 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:59.070237 kubelet[2520]: E0114 13:22:59.069962 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.070718 kubelet[2520]: W0114 13:22:59.069972 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.070718 kubelet[2520]: E0114 13:22:59.069988 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:59.070718 kubelet[2520]: E0114 13:22:59.070267 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.070718 kubelet[2520]: W0114 13:22:59.070279 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.070718 kubelet[2520]: E0114 13:22:59.070366 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:59.070960 kubelet[2520]: E0114 13:22:59.070796 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.070960 kubelet[2520]: W0114 13:22:59.070807 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.070960 kubelet[2520]: E0114 13:22:59.070825 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:59.071086 kubelet[2520]: E0114 13:22:59.071010 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.071086 kubelet[2520]: W0114 13:22:59.071019 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.071086 kubelet[2520]: E0114 13:22:59.071031 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:59.071226 kubelet[2520]: E0114 13:22:59.071214 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.071226 kubelet[2520]: W0114 13:22:59.071223 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.071349 kubelet[2520]: E0114 13:22:59.071270 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:59.071896 kubelet[2520]: E0114 13:22:59.071510 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.071896 kubelet[2520]: W0114 13:22:59.071524 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.071896 kubelet[2520]: E0114 13:22:59.071552 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:59.071896 kubelet[2520]: E0114 13:22:59.071781 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.071896 kubelet[2520]: W0114 13:22:59.071792 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.071896 kubelet[2520]: E0114 13:22:59.071808 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:59.072189 kubelet[2520]: E0114 13:22:59.072179 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:59.072229 kubelet[2520]: W0114 13:22:59.072190 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:59.072229 kubelet[2520]: E0114 13:22:59.072203 2520 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:59.477239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437550385.mount: Deactivated successfully. Jan 14 13:22:59.660115 containerd[1707]: time="2025-01-14T13:22:59.660057119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:59.663822 containerd[1707]: time="2025-01-14T13:22:59.663771378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 14 13:22:59.666870 containerd[1707]: time="2025-01-14T13:22:59.666816909Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:59.672006 containerd[1707]: time="2025-01-14T13:22:59.671956529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:59.672786 containerd[1707]: time="2025-01-14T13:22:59.672607556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.469631521s" Jan 14 13:22:59.672786 containerd[1707]: time="2025-01-14T13:22:59.672645758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 14 13:22:59.674985 containerd[1707]: time="2025-01-14T13:22:59.674959457Z" level=info msg="CreateContainer within sandbox \"8da8e988bb71c6a45a27f847bc4a47bd4d0360da1647358963152c770ecd53c6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 13:22:59.718234 containerd[1707]: time="2025-01-14T13:22:59.718192007Z" level=info msg="CreateContainer within sandbox \"8da8e988bb71c6a45a27f847bc4a47bd4d0360da1647358963152c770ecd53c6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bbf74a66385b2295ea279c9565454c5e9698b8f583df80c03bc152ec37ad70bd\"" Jan 14 13:22:59.718760 containerd[1707]: time="2025-01-14T13:22:59.718611625Z" level=info msg="StartContainer for \"bbf74a66385b2295ea279c9565454c5e9698b8f583df80c03bc152ec37ad70bd\"" Jan 14 13:22:59.747900 systemd[1]: Started cri-containerd-bbf74a66385b2295ea279c9565454c5e9698b8f583df80c03bc152ec37ad70bd.scope - libcontainer container bbf74a66385b2295ea279c9565454c5e9698b8f583df80c03bc152ec37ad70bd. Jan 14 13:22:59.786058 containerd[1707]: time="2025-01-14T13:22:59.785931305Z" level=info msg="StartContainer for \"bbf74a66385b2295ea279c9565454c5e9698b8f583df80c03bc152ec37ad70bd\" returns successfully" Jan 14 13:22:59.786916 systemd[1]: cri-containerd-bbf74a66385b2295ea279c9565454c5e9698b8f583df80c03bc152ec37ad70bd.scope: Deactivated successfully. 
Jan 14 13:22:59.813004 kubelet[2520]: E0114 13:22:59.812931 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:59.888820 kubelet[2520]: E0114 13:22:59.888700 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:00.442064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbf74a66385b2295ea279c9565454c5e9698b8f583df80c03bc152ec37ad70bd-rootfs.mount: Deactivated successfully. Jan 14 13:23:00.694100 containerd[1707]: time="2025-01-14T13:23:00.693923556Z" level=info msg="shim disconnected" id=bbf74a66385b2295ea279c9565454c5e9698b8f583df80c03bc152ec37ad70bd namespace=k8s.io Jan 14 13:23:00.694100 containerd[1707]: time="2025-01-14T13:23:00.693988759Z" level=warning msg="cleaning up after shim disconnected" id=bbf74a66385b2295ea279c9565454c5e9698b8f583df80c03bc152ec37ad70bd namespace=k8s.io Jan 14 13:23:00.694100 containerd[1707]: time="2025-01-14T13:23:00.693999560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:23:00.813220 kubelet[2520]: E0114 13:23:00.813184 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:00.919476 containerd[1707]: time="2025-01-14T13:23:00.919416205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 14 13:23:01.814175 kubelet[2520]: E0114 13:23:01.814113 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:01.888886 kubelet[2520]: E0114 13:23:01.888821 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:02.815430 kubelet[2520]: E0114 13:23:02.815307 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:03.816351 kubelet[2520]: E0114 13:23:03.816284 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:03.888876 kubelet[2520]: E0114 13:23:03.888826 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:04.766639 containerd[1707]: time="2025-01-14T13:23:04.766587889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:04.769157 containerd[1707]: time="2025-01-14T13:23:04.769096996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 14 13:23:04.771916 containerd[1707]: time="2025-01-14T13:23:04.771867614Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:04.775643 containerd[1707]: time="2025-01-14T13:23:04.775615074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:04.776374 containerd[1707]: time="2025-01-14T13:23:04.776247101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.856782394s" Jan 14 13:23:04.776374 containerd[1707]: time="2025-01-14T13:23:04.776281702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 14 13:23:04.778529 containerd[1707]: time="2025-01-14T13:23:04.778504397Z" level=info msg="CreateContainer within sandbox \"8da8e988bb71c6a45a27f847bc4a47bd4d0360da1647358963152c770ecd53c6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 13:23:04.816818 kubelet[2520]: E0114 13:23:04.816776 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:04.823537 containerd[1707]: time="2025-01-14T13:23:04.823495214Z" level=info msg="CreateContainer within sandbox \"8da8e988bb71c6a45a27f847bc4a47bd4d0360da1647358963152c770ecd53c6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2009fbb7095e305d5c90e253542c3106bfb5d769ef96977c72f86dc18d893b45\"" Jan 14 13:23:04.824073 containerd[1707]: time="2025-01-14T13:23:04.823903431Z" level=info msg="StartContainer for \"2009fbb7095e305d5c90e253542c3106bfb5d769ef96977c72f86dc18d893b45\"" Jan 14 13:23:04.857868 systemd[1]: Started cri-containerd-2009fbb7095e305d5c90e253542c3106bfb5d769ef96977c72f86dc18d893b45.scope - libcontainer container 2009fbb7095e305d5c90e253542c3106bfb5d769ef96977c72f86dc18d893b45. Jan 14 13:23:04.891928 containerd[1707]: time="2025-01-14T13:23:04.891873727Z" level=info msg="StartContainer for \"2009fbb7095e305d5c90e253542c3106bfb5d769ef96977c72f86dc18d893b45\" returns successfully" Jan 14 13:23:05.817455 kubelet[2520]: E0114 13:23:05.817393 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:05.888788 kubelet[2520]: E0114 13:23:05.888673 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:06.316560 containerd[1707]: time="2025-01-14T13:23:06.316509925Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:23:06.318879 systemd[1]: cri-containerd-2009fbb7095e305d5c90e253542c3106bfb5d769ef96977c72f86dc18d893b45.scope: Deactivated successfully. 
Jan 14 13:23:06.333338 kubelet[2520]: I0114 13:23:06.333302 2520 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 14 13:23:06.345178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2009fbb7095e305d5c90e253542c3106bfb5d769ef96977c72f86dc18d893b45-rootfs.mount: Deactivated successfully. Jan 14 13:23:06.881447 kubelet[2520]: E0114 13:23:06.818158 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:07.818890 kubelet[2520]: E0114 13:23:07.818833 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:07.894336 systemd[1]: Created slice kubepods-besteffort-podeeb30db5_16f4_4252_98a5_62dc3d0af113.slice - libcontainer container kubepods-besteffort-podeeb30db5_16f4_4252_98a5_62dc3d0af113.slice. Jan 14 13:23:07.896912 containerd[1707]: time="2025-01-14T13:23:07.896865858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:0,}" Jan 14 13:23:08.037532 containerd[1707]: time="2025-01-14T13:23:08.037465248Z" level=info msg="shim disconnected" id=2009fbb7095e305d5c90e253542c3106bfb5d769ef96977c72f86dc18d893b45 namespace=k8s.io Jan 14 13:23:08.037532 containerd[1707]: time="2025-01-14T13:23:08.037523151Z" level=warning msg="cleaning up after shim disconnected" id=2009fbb7095e305d5c90e253542c3106bfb5d769ef96977c72f86dc18d893b45 namespace=k8s.io Jan 14 13:23:08.037532 containerd[1707]: time="2025-01-14T13:23:08.037543452Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:23:08.133817 containerd[1707]: time="2025-01-14T13:23:08.131518556Z" level=error msg="Failed to destroy network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.133563 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b-shm.mount: Deactivated successfully. 
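The "Failed to destroy network for sandbox" error above, and the matching setup failures that follow, come from a precondition inside the Calico CNI plugin rather than from containerd itself: both the add and the delete paths stat /var/lib/calico/nodename, a file the calico/node container creates once it is running and has registered the node, and bail out while it is missing. A small sketch of that check is below; checkCalicoNodeReady is an invented name and the logic is a simplification of what the plugin actually does, with the error wording copied from the log.

package main

import (
	"fmt"
	"os"
)

// checkCalicoNodeReady mirrors the precondition behind the log entries above:
// refuse to add or delete pod networks until /var/lib/calico/nodename exists.
func checkCalicoNodeReady(nodenameFile string) error {
	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			return fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
		}
		return err
	}
	return nil
}

func main() {
	if err := checkCalicoNodeReady("/var/lib/calico/nodename"); err != nil {
		fmt.Println("CNI would fail here:", err)
		return
	}
	fmt.Println("calico/node looks ready; CNI add/delete can proceed")
}

Until calico-node (started from the calico-node-nszk7 pod above) is up and has written that file, every sandbox attempt for csi-node-driver-9zp74 and, further below, nginx-deployment-85f456d6dd-mmtbd keeps failing with this same message.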
Jan 14 13:23:08.134389 containerd[1707]: time="2025-01-14T13:23:08.134333475Z" level=error msg="encountered an error cleaning up failed sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.134483 containerd[1707]: time="2025-01-14T13:23:08.134434480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.135248 kubelet[2520]: E0114 13:23:08.135148 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.135248 kubelet[2520]: E0114 13:23:08.135226 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:08.135633 kubelet[2520]: E0114 13:23:08.135253 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:08.135633 kubelet[2520]: E0114 13:23:08.135312 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:08.223711 kubelet[2520]: I0114 13:23:08.223663 2520 topology_manager.go:215] "Topology Admit Handler" podUID="f2f9ee65-c7a1-4d7b-b082-219bcf7b0367" podNamespace="default" podName="nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:08.229443 systemd[1]: Created slice 
kubepods-besteffort-podf2f9ee65_c7a1_4d7b_b082_219bcf7b0367.slice - libcontainer container kubepods-besteffort-podf2f9ee65_c7a1_4d7b_b082_219bcf7b0367.slice. Jan 14 13:23:08.327532 kubelet[2520]: I0114 13:23:08.327469 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds4kg\" (UniqueName: \"kubernetes.io/projected/f2f9ee65-c7a1-4d7b-b082-219bcf7b0367-kube-api-access-ds4kg\") pod \"nginx-deployment-85f456d6dd-mmtbd\" (UID: \"f2f9ee65-c7a1-4d7b-b082-219bcf7b0367\") " pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:08.533258 containerd[1707]: time="2025-01-14T13:23:08.533206470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:0,}" Jan 14 13:23:08.621587 containerd[1707]: time="2025-01-14T13:23:08.621530833Z" level=error msg="Failed to destroy network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.621900 containerd[1707]: time="2025-01-14T13:23:08.621863747Z" level=error msg="encountered an error cleaning up failed sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.622185 containerd[1707]: time="2025-01-14T13:23:08.621938550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.622301 kubelet[2520]: E0114 13:23:08.622190 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.622301 kubelet[2520]: E0114 13:23:08.622261 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:08.622301 kubelet[2520]: E0114 13:23:08.622288 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:08.622430 kubelet[2520]: E0114 13:23:08.622344 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-mmtbd" podUID="f2f9ee65-c7a1-4d7b-b082-219bcf7b0367" Jan 14 13:23:08.819266 kubelet[2520]: E0114 13:23:08.819110 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:08.939542 containerd[1707]: time="2025-01-14T13:23:08.938905655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 14 13:23:08.940031 kubelet[2520]: I0114 13:23:08.938988 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001" Jan 14 13:23:08.941086 containerd[1707]: time="2025-01-14T13:23:08.940571326Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" Jan 14 13:23:08.941086 containerd[1707]: time="2025-01-14T13:23:08.940914741Z" level=info msg="Ensure that sandbox 8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001 in task-service has been cleanup successfully" Jan 14 13:23:08.941476 containerd[1707]: time="2025-01-14T13:23:08.941330658Z" level=info msg="TearDown network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" successfully" Jan 14 13:23:08.941476 containerd[1707]: time="2025-01-14T13:23:08.941355959Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" returns successfully" Jan 14 13:23:08.942544 containerd[1707]: time="2025-01-14T13:23:08.942506309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:1,}" Jan 14 13:23:08.944089 kubelet[2520]: I0114 13:23:08.944053 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b" Jan 14 13:23:08.947664 containerd[1707]: time="2025-01-14T13:23:08.946989099Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" Jan 14 13:23:08.947664 containerd[1707]: time="2025-01-14T13:23:08.947230510Z" level=info msg="Ensure that sandbox 7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b in task-service has been cleanup successfully" Jan 14 13:23:08.947664 containerd[1707]: time="2025-01-14T13:23:08.947384716Z" level=info msg="TearDown network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" successfully" Jan 14 13:23:08.947664 containerd[1707]: time="2025-01-14T13:23:08.947399917Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" returns 
successfully" Jan 14 13:23:08.949021 containerd[1707]: time="2025-01-14T13:23:08.948993585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:1,}" Jan 14 13:23:09.078415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001-shm.mount: Deactivated successfully. Jan 14 13:23:09.079021 systemd[1]: run-netns-cni\x2d5e98d537\x2d5eb5\x2dfa99\x2d8be0\x2d430e8cf8941e.mount: Deactivated successfully. Jan 14 13:23:09.090165 containerd[1707]: time="2025-01-14T13:23:09.089580375Z" level=error msg="Failed to destroy network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.091088 containerd[1707]: time="2025-01-14T13:23:09.091032237Z" level=error msg="encountered an error cleaning up failed sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.091182 containerd[1707]: time="2025-01-14T13:23:09.091128441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.092871 kubelet[2520]: E0114 13:23:09.091387 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.092871 kubelet[2520]: E0114 13:23:09.091458 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:09.092871 kubelet[2520]: E0114 13:23:09.091485 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:09.092696 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5-shm.mount: 
Deactivated successfully. Jan 14 13:23:09.093093 kubelet[2520]: E0114 13:23:09.091560 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:09.106122 containerd[1707]: time="2025-01-14T13:23:09.106080578Z" level=error msg="Failed to destroy network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.108282 containerd[1707]: time="2025-01-14T13:23:09.107995659Z" level=error msg="encountered an error cleaning up failed sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.108282 containerd[1707]: time="2025-01-14T13:23:09.108073763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.108509 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80-shm.mount: Deactivated successfully. 
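Note: every sandbox failure above reduces to the same root cause, "stat /var/lib/calico/nodename: no such file or directory". That file is normally written by a running calico/node container into its hostPath mount of /var/lib/calico/. As a minimal illustrative sketch (not Calico's actual code; the function name is hypothetical), the lookup the plugin is reporting amounts to:

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path the calico plugin repeatedly fails to stat in the
// log above.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename is an illustrative stand-in for the lookup the CNI plugin needs
// before it can wire up a new pod sandbox: it reads the node name that a
// running calico/node container writes into its /var/lib/calico/ mount.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Mirrors the hint in the log: the file only exists once calico/node
		// is running and has mounted /var/lib/calico/.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return string(data), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```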
Jan 14 13:23:09.109002 kubelet[2520]: E0114 13:23:09.108868 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.109271 kubelet[2520]: E0114 13:23:09.108941 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:09.109271 kubelet[2520]: E0114 13:23:09.109162 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:09.109584 kubelet[2520]: E0114 13:23:09.109242 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-mmtbd" podUID="f2f9ee65-c7a1-4d7b-b082-219bcf7b0367" Jan 14 13:23:09.819547 kubelet[2520]: E0114 13:23:09.819503 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:09.947718 kubelet[2520]: I0114 13:23:09.947681 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80" Jan 14 13:23:09.948561 containerd[1707]: time="2025-01-14T13:23:09.948475869Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\"" Jan 14 13:23:09.950843 containerd[1707]: time="2025-01-14T13:23:09.948757081Z" level=info msg="Ensure that sandbox 7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80 in task-service has been cleanup successfully" Jan 14 13:23:09.950843 containerd[1707]: time="2025-01-14T13:23:09.949115196Z" level=info msg="TearDown network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" successfully" Jan 14 13:23:09.950843 containerd[1707]: time="2025-01-14T13:23:09.949136997Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" returns successfully" Jan 14 13:23:09.951189 systemd[1]: 
run-netns-cni\x2d8f7787d5\x2d4aa8\x2d385e\x2d0dc2\x2d8f8624fff396.mount: Deactivated successfully. Jan 14 13:23:09.951967 containerd[1707]: time="2025-01-14T13:23:09.951939016Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" Jan 14 13:23:09.952062 containerd[1707]: time="2025-01-14T13:23:09.952032720Z" level=info msg="TearDown network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" successfully" Jan 14 13:23:09.952062 containerd[1707]: time="2025-01-14T13:23:09.952046921Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" returns successfully" Jan 14 13:23:09.952827 containerd[1707]: time="2025-01-14T13:23:09.952509841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:2,}" Jan 14 13:23:09.952995 kubelet[2520]: I0114 13:23:09.952537 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5" Jan 14 13:23:09.954905 containerd[1707]: time="2025-01-14T13:23:09.953330776Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\"" Jan 14 13:23:09.954905 containerd[1707]: time="2025-01-14T13:23:09.953543285Z" level=info msg="Ensure that sandbox 7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5 in task-service has been cleanup successfully" Jan 14 13:23:09.956600 containerd[1707]: time="2025-01-14T13:23:09.956087393Z" level=info msg="TearDown network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" successfully" Jan 14 13:23:09.956600 containerd[1707]: time="2025-01-14T13:23:09.956127395Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" returns successfully" Jan 14 13:23:09.956619 systemd[1]: run-netns-cni\x2d44c68ae2\x2dcd09\x2d2772\x2d800b\x2d9faf008ddc91.mount: Deactivated successfully. 
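Note: the "run-netns-cni\x2d…mount: Deactivated successfully" entries are systemd unmounting the per-sandbox network namespaces left behind by each failed attempt. The unit names are systemd path escapes of /run/netns/cni-<id>: '/' becomes '-' and a literal '-' becomes \x2d. A small sketch (assuming standard systemd path escaping) that recovers the original path from such a unit name:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath converts a systemd-escaped mount unit name back into the
// filesystem path it was generated from. Sketch only: it assumes systemd's
// path-escaping rules ('/' -> '-', other special bytes -> \xNN).
func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v)) // escaped byte, e.g. \x2d -> '-'
				i += 3
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/') // unescaped dashes are path separators
			continue
		}
		b.WriteByte(name[i])
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnitPath(`run-netns-cni\x2d5e98d537\x2d5eb5\x2dfa99\x2d8be0\x2d430e8cf8941e.mount`))
	// -> /run/netns/cni-5e98d537-5eb5-fa99-8be0-430e8cf8941e
}
```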
Jan 14 13:23:09.957506 containerd[1707]: time="2025-01-14T13:23:09.956989432Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" Jan 14 13:23:09.957506 containerd[1707]: time="2025-01-14T13:23:09.957073535Z" level=info msg="TearDown network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" successfully" Jan 14 13:23:09.957506 containerd[1707]: time="2025-01-14T13:23:09.957088336Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" returns successfully" Jan 14 13:23:09.957656 containerd[1707]: time="2025-01-14T13:23:09.957630059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:2,}" Jan 14 13:23:10.173359 containerd[1707]: time="2025-01-14T13:23:10.170382923Z" level=error msg="Failed to destroy network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.173359 containerd[1707]: time="2025-01-14T13:23:10.170778140Z" level=error msg="encountered an error cleaning up failed sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.173359 containerd[1707]: time="2025-01-14T13:23:10.170853744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.173584 kubelet[2520]: E0114 13:23:10.172902 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.173584 kubelet[2520]: E0114 13:23:10.172963 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:10.173584 kubelet[2520]: E0114 13:23:10.172994 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:10.173853 kubelet[2520]: E0114 13:23:10.173044 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:10.174581 containerd[1707]: time="2025-01-14T13:23:10.174300890Z" level=error msg="Failed to destroy network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.175170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703-shm.mount: Deactivated successfully. Jan 14 13:23:10.177114 containerd[1707]: time="2025-01-14T13:23:10.176954603Z" level=error msg="encountered an error cleaning up failed sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.177114 containerd[1707]: time="2025-01-14T13:23:10.177022806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.177428 kubelet[2520]: E0114 13:23:10.177386 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.177644 kubelet[2520]: E0114 13:23:10.177531 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:10.177644 kubelet[2520]: E0114 13:23:10.177561 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:10.177644 kubelet[2520]: E0114 13:23:10.177603 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-mmtbd" podUID="f2f9ee65-c7a1-4d7b-b082-219bcf7b0367" Jan 14 13:23:10.179419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691-shm.mount: Deactivated successfully. Jan 14 13:23:10.819913 kubelet[2520]: E0114 13:23:10.819794 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:10.955779 kubelet[2520]: I0114 13:23:10.955743 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691" Jan 14 13:23:10.957102 containerd[1707]: time="2025-01-14T13:23:10.956572520Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\"" Jan 14 13:23:10.957102 containerd[1707]: time="2025-01-14T13:23:10.956851932Z" level=info msg="Ensure that sandbox d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691 in task-service has been cleanup successfully" Jan 14 13:23:10.960187 kubelet[2520]: I0114 13:23:10.959646 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703" Jan 14 13:23:10.960598 containerd[1707]: time="2025-01-14T13:23:10.960098770Z" level=info msg="TearDown network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" successfully" Jan 14 13:23:10.960598 containerd[1707]: time="2025-01-14T13:23:10.960122871Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" returns successfully" Jan 14 13:23:10.960598 containerd[1707]: time="2025-01-14T13:23:10.960154473Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\"" Jan 14 13:23:10.960598 containerd[1707]: time="2025-01-14T13:23:10.960380182Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\"" Jan 14 13:23:10.960598 containerd[1707]: time="2025-01-14T13:23:10.960463586Z" level=info msg="TearDown network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" successfully" Jan 14 13:23:10.960598 containerd[1707]: time="2025-01-14T13:23:10.960478486Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" returns 
successfully" Jan 14 13:23:10.960598 containerd[1707]: time="2025-01-14T13:23:10.960572190Z" level=info msg="Ensure that sandbox f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703 in task-service has been cleanup successfully" Jan 14 13:23:10.960944 containerd[1707]: time="2025-01-14T13:23:10.960796400Z" level=info msg="TearDown network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" successfully" Jan 14 13:23:10.960944 containerd[1707]: time="2025-01-14T13:23:10.960815601Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" returns successfully" Jan 14 13:23:10.961021 systemd[1]: run-netns-cni\x2d1b78b52a\x2d8b38\x2d7c9b\x2d63d8\x2d1d3856b571a1.mount: Deactivated successfully. Jan 14 13:23:10.961433 containerd[1707]: time="2025-01-14T13:23:10.961398926Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" Jan 14 13:23:10.961530 containerd[1707]: time="2025-01-14T13:23:10.961505230Z" level=info msg="TearDown network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" successfully" Jan 14 13:23:10.961590 containerd[1707]: time="2025-01-14T13:23:10.961530131Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" returns successfully" Jan 14 13:23:10.961637 containerd[1707]: time="2025-01-14T13:23:10.961403026Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\"" Jan 14 13:23:10.961685 containerd[1707]: time="2025-01-14T13:23:10.961675137Z" level=info msg="TearDown network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" successfully" Jan 14 13:23:10.962873 containerd[1707]: time="2025-01-14T13:23:10.961689938Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" returns successfully" Jan 14 13:23:10.962873 containerd[1707]: time="2025-01-14T13:23:10.962168958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:3,}" Jan 14 13:23:10.962873 containerd[1707]: time="2025-01-14T13:23:10.962428769Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" Jan 14 13:23:10.962873 containerd[1707]: time="2025-01-14T13:23:10.962515673Z" level=info msg="TearDown network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" successfully" Jan 14 13:23:10.962873 containerd[1707]: time="2025-01-14T13:23:10.962566275Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" returns successfully" Jan 14 13:23:10.965301 containerd[1707]: time="2025-01-14T13:23:10.965049381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:3,}" Jan 14 13:23:10.965448 systemd[1]: run-netns-cni\x2daf46ef27\x2d247f\x2d17cf\x2d1ddf\x2dce0891f2098a.mount: Deactivated successfully. 
Jan 14 13:23:11.820264 kubelet[2520]: E0114 13:23:11.820107 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:11.958985 containerd[1707]: time="2025-01-14T13:23:11.958771685Z" level=error msg="Failed to destroy network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.962073 containerd[1707]: time="2025-01-14T13:23:11.961703510Z" level=error msg="encountered an error cleaning up failed sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.962073 containerd[1707]: time="2025-01-14T13:23:11.961982622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.963764 kubelet[2520]: E0114 13:23:11.963380 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.963764 kubelet[2520]: E0114 13:23:11.963482 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:11.963764 kubelet[2520]: E0114 13:23:11.963509 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:11.964011 kubelet[2520]: E0114 13:23:11.963603 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-mmtbd" podUID="f2f9ee65-c7a1-4d7b-b082-219bcf7b0367" Jan 14 13:23:11.998577 containerd[1707]: time="2025-01-14T13:23:11.998522977Z" level=error msg="Failed to destroy network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.998921 containerd[1707]: time="2025-01-14T13:23:11.998888193Z" level=error msg="encountered an error cleaning up failed sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.999041 containerd[1707]: time="2025-01-14T13:23:11.998962396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.999747 kubelet[2520]: E0114 13:23:11.999226 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.999747 kubelet[2520]: E0114 13:23:11.999293 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:11.999747 kubelet[2520]: E0114 13:23:11.999321 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:11.999940 kubelet[2520]: E0114 13:23:11.999373 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:12.806491 kubelet[2520]: E0114 13:23:12.806453 2520 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:12.811290 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1-shm.mount: Deactivated successfully. Jan 14 13:23:12.811432 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd-shm.mount: Deactivated successfully. Jan 14 13:23:12.821422 kubelet[2520]: E0114 13:23:12.821118 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:12.967318 kubelet[2520]: I0114 13:23:12.967135 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd" Jan 14 13:23:12.968065 containerd[1707]: time="2025-01-14T13:23:12.968022049Z" level=info msg="StopPodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\"" Jan 14 13:23:12.968538 containerd[1707]: time="2025-01-14T13:23:12.968223258Z" level=info msg="Ensure that sandbox c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd in task-service has been cleanup successfully" Jan 14 13:23:12.971014 systemd[1]: run-netns-cni\x2d017cb763\x2defa1\x2dae5b\x2dcb9d\x2d1620894c7be3.mount: Deactivated successfully. 
Jan 14 13:23:12.972846 containerd[1707]: time="2025-01-14T13:23:12.971417194Z" level=info msg="TearDown network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" successfully" Jan 14 13:23:12.972846 containerd[1707]: time="2025-01-14T13:23:12.971449095Z" level=info msg="StopPodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" returns successfully" Jan 14 13:23:12.973826 containerd[1707]: time="2025-01-14T13:23:12.973728392Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\"" Jan 14 13:23:12.973906 containerd[1707]: time="2025-01-14T13:23:12.973843797Z" level=info msg="TearDown network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" successfully" Jan 14 13:23:12.973906 containerd[1707]: time="2025-01-14T13:23:12.973859098Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" returns successfully" Jan 14 13:23:12.974450 containerd[1707]: time="2025-01-14T13:23:12.974424722Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\"" Jan 14 13:23:12.974534 containerd[1707]: time="2025-01-14T13:23:12.974508725Z" level=info msg="TearDown network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" successfully" Jan 14 13:23:12.974534 containerd[1707]: time="2025-01-14T13:23:12.974523726Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" returns successfully" Jan 14 13:23:12.975482 containerd[1707]: time="2025-01-14T13:23:12.975459666Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" Jan 14 13:23:12.975796 containerd[1707]: time="2025-01-14T13:23:12.975772279Z" level=info msg="TearDown network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" successfully" Jan 14 13:23:12.975796 containerd[1707]: time="2025-01-14T13:23:12.975790580Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" returns successfully" Jan 14 13:23:12.977772 containerd[1707]: time="2025-01-14T13:23:12.977282243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:4,}" Jan 14 13:23:12.977858 kubelet[2520]: I0114 13:23:12.977333 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1" Jan 14 13:23:12.978358 containerd[1707]: time="2025-01-14T13:23:12.978319988Z" level=info msg="StopPodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\"" Jan 14 13:23:12.978551 containerd[1707]: time="2025-01-14T13:23:12.978527396Z" level=info msg="Ensure that sandbox fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1 in task-service has been cleanup successfully" Jan 14 13:23:12.979824 containerd[1707]: time="2025-01-14T13:23:12.979154823Z" level=info msg="TearDown network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" successfully" Jan 14 13:23:12.979824 containerd[1707]: time="2025-01-14T13:23:12.979176124Z" level=info msg="StopPodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" returns successfully" Jan 14 13:23:12.980044 containerd[1707]: 
time="2025-01-14T13:23:12.979936956Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\"" Jan 14 13:23:12.981609 containerd[1707]: time="2025-01-14T13:23:12.980970700Z" level=info msg="TearDown network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" successfully" Jan 14 13:23:12.981609 containerd[1707]: time="2025-01-14T13:23:12.980993201Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" returns successfully" Jan 14 13:23:12.982084 containerd[1707]: time="2025-01-14T13:23:12.982060347Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\"" Jan 14 13:23:12.982172 containerd[1707]: time="2025-01-14T13:23:12.982144250Z" level=info msg="TearDown network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" successfully" Jan 14 13:23:12.982172 containerd[1707]: time="2025-01-14T13:23:12.982158851Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" returns successfully" Jan 14 13:23:12.982632 systemd[1]: run-netns-cni\x2dc44f864e\x2d76a2\x2d7779\x2d39d5\x2da1d5ef06db05.mount: Deactivated successfully. Jan 14 13:23:12.983175 containerd[1707]: time="2025-01-14T13:23:12.982938384Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" Jan 14 13:23:12.983175 containerd[1707]: time="2025-01-14T13:23:12.983019988Z" level=info msg="TearDown network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" successfully" Jan 14 13:23:12.983175 containerd[1707]: time="2025-01-14T13:23:12.983034088Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" returns successfully" Jan 14 13:23:12.984104 containerd[1707]: time="2025-01-14T13:23:12.983912026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:4,}" Jan 14 13:23:13.186176 containerd[1707]: time="2025-01-14T13:23:13.186014229Z" level=error msg="Failed to destroy network for sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.186875 containerd[1707]: time="2025-01-14T13:23:13.186678958Z" level=error msg="encountered an error cleaning up failed sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.186875 containerd[1707]: time="2025-01-14T13:23:13.186814063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.187652 kubelet[2520]: E0114 
13:23:13.187253 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.187652 kubelet[2520]: E0114 13:23:13.187309 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:13.187652 kubelet[2520]: E0114 13:23:13.187336 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:13.187897 kubelet[2520]: E0114 13:23:13.187389 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-mmtbd" podUID="f2f9ee65-c7a1-4d7b-b082-219bcf7b0367" Jan 14 13:23:13.194141 containerd[1707]: time="2025-01-14T13:23:13.194098773Z" level=error msg="Failed to destroy network for sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.194478 containerd[1707]: time="2025-01-14T13:23:13.194420287Z" level=error msg="encountered an error cleaning up failed sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.194564 containerd[1707]: time="2025-01-14T13:23:13.194526092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 14 13:23:13.195296 kubelet[2520]: E0114 13:23:13.194768 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.195296 kubelet[2520]: E0114 13:23:13.194826 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:13.195296 kubelet[2520]: E0114 13:23:13.194850 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:13.195500 kubelet[2520]: E0114 13:23:13.194897 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:13.811610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213-shm.mount: Deactivated successfully. 
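Note: the increasingly heavy backslash escaping in these entries (\" in containerd, then \\\" in the pod_workers line) comes from each layer quoting the previous layer's message: the CNI error is embedded in containerd's error, which is embedded in the CRI error kubelet receives, which pod_workers quotes again. A tiny sketch of how repeated quoting stacks escapes (illustrative, using strconv.Quote rather than the actual logging code):

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Root message roughly as the CNI plugin produces it (contains quotes).
	msg := `plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`

	// Each layer that embeds the previous message inside a quoted string adds
	// one more level of backslash escaping, which is why the innermost text in
	// the pod_workers entries appears with triple backslashes.
	for layer := 1; layer <= 3; layer++ {
		msg = strconv.Quote(msg)
		fmt.Printf("layer %d: %s\n", layer, msg)
	}
}
```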
Jan 14 13:23:13.821837 kubelet[2520]: E0114 13:23:13.821800 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:13.982591 kubelet[2520]: I0114 13:23:13.982557 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213" Jan 14 13:23:13.983499 containerd[1707]: time="2025-01-14T13:23:13.983454977Z" level=info msg="StopPodSandbox for \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\"" Jan 14 13:23:13.983934 containerd[1707]: time="2025-01-14T13:23:13.983701887Z" level=info msg="Ensure that sandbox ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213 in task-service has been cleanup successfully" Jan 14 13:23:13.987638 containerd[1707]: time="2025-01-14T13:23:13.986381601Z" level=info msg="TearDown network for sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\" successfully" Jan 14 13:23:13.987638 containerd[1707]: time="2025-01-14T13:23:13.986455405Z" level=info msg="StopPodSandbox for \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\" returns successfully" Jan 14 13:23:13.987224 systemd[1]: run-netns-cni\x2d9b45f6c0\x2d2ade\x2dc696\x2d38ee\x2d440ade80a2c8.mount: Deactivated successfully. Jan 14 13:23:13.989501 containerd[1707]: time="2025-01-14T13:23:13.988086974Z" level=info msg="StopPodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\"" Jan 14 13:23:13.989501 containerd[1707]: time="2025-01-14T13:23:13.988183078Z" level=info msg="TearDown network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" successfully" Jan 14 13:23:13.989501 containerd[1707]: time="2025-01-14T13:23:13.988197379Z" level=info msg="StopPodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" returns successfully" Jan 14 13:23:13.989501 containerd[1707]: time="2025-01-14T13:23:13.988514892Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\"" Jan 14 13:23:13.989501 containerd[1707]: time="2025-01-14T13:23:13.988598196Z" level=info msg="TearDown network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" successfully" Jan 14 13:23:13.989501 containerd[1707]: time="2025-01-14T13:23:13.988611896Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" returns successfully" Jan 14 13:23:13.990365 containerd[1707]: time="2025-01-14T13:23:13.990122361Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\"" Jan 14 13:23:13.990365 containerd[1707]: time="2025-01-14T13:23:13.990302268Z" level=info msg="TearDown network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" successfully" Jan 14 13:23:13.990365 containerd[1707]: time="2025-01-14T13:23:13.990317169Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" returns successfully" Jan 14 13:23:13.991298 containerd[1707]: time="2025-01-14T13:23:13.991269910Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" Jan 14 13:23:13.991379 containerd[1707]: time="2025-01-14T13:23:13.991350513Z" level=info msg="TearDown network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" successfully" Jan 14 13:23:13.991379 
containerd[1707]: time="2025-01-14T13:23:13.991365314Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" returns successfully" Jan 14 13:23:13.992049 containerd[1707]: time="2025-01-14T13:23:13.992018941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:5,}" Jan 14 13:23:13.993270 kubelet[2520]: I0114 13:23:13.993243 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb" Jan 14 13:23:13.994185 containerd[1707]: time="2025-01-14T13:23:13.994158933Z" level=info msg="StopPodSandbox for \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\"" Jan 14 13:23:13.994366 containerd[1707]: time="2025-01-14T13:23:13.994342940Z" level=info msg="Ensure that sandbox 07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb in task-service has been cleanup successfully" Jan 14 13:23:13.996380 containerd[1707]: time="2025-01-14T13:23:13.994937366Z" level=info msg="TearDown network for sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\" successfully" Jan 14 13:23:13.996380 containerd[1707]: time="2025-01-14T13:23:13.994962067Z" level=info msg="StopPodSandbox for \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\" returns successfully" Jan 14 13:23:13.996172 systemd[1]: run-netns-cni\x2d5e3d778b\x2defbf\x2d0bcc\x2d0672\x2db81e1f400c01.mount: Deactivated successfully. Jan 14 13:23:13.997782 containerd[1707]: time="2025-01-14T13:23:13.997741885Z" level=info msg="StopPodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\"" Jan 14 13:23:13.998500 containerd[1707]: time="2025-01-14T13:23:13.997873391Z" level=info msg="TearDown network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" successfully" Jan 14 13:23:13.998500 containerd[1707]: time="2025-01-14T13:23:13.997890491Z" level=info msg="StopPodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" returns successfully" Jan 14 13:23:13.999176 containerd[1707]: time="2025-01-14T13:23:13.998859533Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\"" Jan 14 13:23:13.999176 containerd[1707]: time="2025-01-14T13:23:13.998947436Z" level=info msg="TearDown network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" successfully" Jan 14 13:23:13.999176 containerd[1707]: time="2025-01-14T13:23:13.998996638Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" returns successfully" Jan 14 13:23:13.999460 containerd[1707]: time="2025-01-14T13:23:13.999431657Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\"" Jan 14 13:23:14.000139 containerd[1707]: time="2025-01-14T13:23:13.999770971Z" level=info msg="TearDown network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" successfully" Jan 14 13:23:14.000139 containerd[1707]: time="2025-01-14T13:23:13.999803173Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" returns successfully" Jan 14 13:23:14.000913 containerd[1707]: time="2025-01-14T13:23:14.000471001Z" level=info msg="StopPodSandbox for 
\"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" Jan 14 13:23:14.000913 containerd[1707]: time="2025-01-14T13:23:14.000556205Z" level=info msg="TearDown network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" successfully" Jan 14 13:23:14.000913 containerd[1707]: time="2025-01-14T13:23:14.000570805Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" returns successfully" Jan 14 13:23:14.001642 containerd[1707]: time="2025-01-14T13:23:14.001606650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:5,}" Jan 14 13:23:14.186803 containerd[1707]: time="2025-01-14T13:23:14.186656327Z" level=error msg="Failed to destroy network for sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.187439 containerd[1707]: time="2025-01-14T13:23:14.187243252Z" level=error msg="encountered an error cleaning up failed sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.187439 containerd[1707]: time="2025-01-14T13:23:14.187318955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.188220 kubelet[2520]: E0114 13:23:14.187786 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.188220 kubelet[2520]: E0114 13:23:14.187855 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:14.188220 kubelet[2520]: E0114 13:23:14.187881 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 
14 13:23:14.188413 kubelet[2520]: E0114 13:23:14.187927 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-mmtbd" podUID="f2f9ee65-c7a1-4d7b-b082-219bcf7b0367" Jan 14 13:23:14.192067 containerd[1707]: time="2025-01-14T13:23:14.191818347Z" level=error msg="Failed to destroy network for sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.192145 containerd[1707]: time="2025-01-14T13:23:14.192107859Z" level=error msg="encountered an error cleaning up failed sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.192203 containerd[1707]: time="2025-01-14T13:23:14.192177762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.193490 kubelet[2520]: E0114 13:23:14.193321 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.193490 kubelet[2520]: E0114 13:23:14.193370 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:14.193490 kubelet[2520]: E0114 13:23:14.193395 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:14.193812 kubelet[2520]: E0114 13:23:14.193444 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:14.812231 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29-shm.mount: Deactivated successfully. Jan 14 13:23:14.822958 kubelet[2520]: E0114 13:23:14.822065 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:14.998316 kubelet[2520]: I0114 13:23:14.998042 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29" Jan 14 13:23:14.999625 containerd[1707]: time="2025-01-14T13:23:14.999214718Z" level=info msg="StopPodSandbox for \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\"" Jan 14 13:23:15.000723 containerd[1707]: time="2025-01-14T13:23:15.000309865Z" level=info msg="Ensure that sandbox 550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29 in task-service has been cleanup successfully" Jan 14 13:23:15.000723 containerd[1707]: time="2025-01-14T13:23:15.000516674Z" level=info msg="TearDown network for sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\" successfully" Jan 14 13:23:15.000723 containerd[1707]: time="2025-01-14T13:23:15.000536275Z" level=info msg="StopPodSandbox for \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\" returns successfully" Jan 14 13:23:15.003179 systemd[1]: run-netns-cni\x2da273b2bf\x2d2854\x2d6707\x2dc8ac\x2d489e78d0fddd.mount: Deactivated successfully. 
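
Note: every RunPodSandbox failure in this stretch (Attempt:5 for both nginx-deployment-85f456d6dd-mmtbd and csi-node-driver-9zp74, plus the cleanup that follows) reports the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename because the calico/node container has not started yet, so the kubelet tears the half-built sandbox down (the run-netns-cni-*.mount and *-shm.mount units deactivated by systemd) and retries with an incremented Attempt counter. A minimal Go sketch of the check that is failing, standard library only; this is illustrative and not the plugin's actual code:

    package main

    import (
        "fmt"
        "os"
    )

    // nodenameFile is the file the error message above refers to: calico-node
    // writes it once it is running and has /var/lib/calico/ mounted.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
        if _, err := os.Stat(nodenameFile); err != nil {
            // This mirrors the condition behind the "stat /var/lib/calico/nodename:
            // no such file or directory" failures logged above.
            fmt.Printf("CNI would fail here: %v\n", err)
            os.Exit(1)
        }
        fmt.Println("nodename present; sandbox networking can proceed")
    }
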
Jan 14 13:23:15.005035 containerd[1707]: time="2025-01-14T13:23:15.004540345Z" level=info msg="StopPodSandbox for \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\"" Jan 14 13:23:15.005035 containerd[1707]: time="2025-01-14T13:23:15.004633749Z" level=info msg="TearDown network for sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\" successfully" Jan 14 13:23:15.005035 containerd[1707]: time="2025-01-14T13:23:15.004648350Z" level=info msg="StopPodSandbox for \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\" returns successfully" Jan 14 13:23:15.005549 containerd[1707]: time="2025-01-14T13:23:15.005518687Z" level=info msg="StopPodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\"" Jan 14 13:23:15.005623 containerd[1707]: time="2025-01-14T13:23:15.005608091Z" level=info msg="TearDown network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" successfully" Jan 14 13:23:15.005675 containerd[1707]: time="2025-01-14T13:23:15.005622991Z" level=info msg="StopPodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" returns successfully" Jan 14 13:23:15.006480 containerd[1707]: time="2025-01-14T13:23:15.006205316Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\"" Jan 14 13:23:15.006480 containerd[1707]: time="2025-01-14T13:23:15.006296020Z" level=info msg="TearDown network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" successfully" Jan 14 13:23:15.006480 containerd[1707]: time="2025-01-14T13:23:15.006312220Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" returns successfully" Jan 14 13:23:15.007048 containerd[1707]: time="2025-01-14T13:23:15.007019551Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\"" Jan 14 13:23:15.007117 containerd[1707]: time="2025-01-14T13:23:15.007101354Z" level=info msg="TearDown network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" successfully" Jan 14 13:23:15.007194 containerd[1707]: time="2025-01-14T13:23:15.007115855Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" returns successfully" Jan 14 13:23:15.009627 containerd[1707]: time="2025-01-14T13:23:15.008877530Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" Jan 14 13:23:15.009627 containerd[1707]: time="2025-01-14T13:23:15.008958733Z" level=info msg="TearDown network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" successfully" Jan 14 13:23:15.009627 containerd[1707]: time="2025-01-14T13:23:15.008971334Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" returns successfully" Jan 14 13:23:15.010091 containerd[1707]: time="2025-01-14T13:23:15.010063580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:6,}" Jan 14 13:23:15.016248 kubelet[2520]: I0114 13:23:15.016217 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8" Jan 14 13:23:15.017205 containerd[1707]: time="2025-01-14T13:23:15.016850669Z" level=info msg="StopPodSandbox for 
\"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\"" Jan 14 13:23:15.017205 containerd[1707]: time="2025-01-14T13:23:15.017077979Z" level=info msg="Ensure that sandbox 591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8 in task-service has been cleanup successfully" Jan 14 13:23:15.017368 containerd[1707]: time="2025-01-14T13:23:15.017348790Z" level=info msg="TearDown network for sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\" successfully" Jan 14 13:23:15.017436 containerd[1707]: time="2025-01-14T13:23:15.017421093Z" level=info msg="StopPodSandbox for \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\" returns successfully" Jan 14 13:23:15.020842 systemd[1]: run-netns-cni\x2d61f59c4e\x2d1cb1\x2d13b1\x2d3263\x2d8a01463161d0.mount: Deactivated successfully. Jan 14 13:23:15.021824 containerd[1707]: time="2025-01-14T13:23:15.021147752Z" level=info msg="StopPodSandbox for \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\"" Jan 14 13:23:15.021824 containerd[1707]: time="2025-01-14T13:23:15.021234556Z" level=info msg="TearDown network for sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\" successfully" Jan 14 13:23:15.021824 containerd[1707]: time="2025-01-14T13:23:15.021253257Z" level=info msg="StopPodSandbox for \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\" returns successfully" Jan 14 13:23:15.028250 containerd[1707]: time="2025-01-14T13:23:15.028222053Z" level=info msg="StopPodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\"" Jan 14 13:23:15.028345 containerd[1707]: time="2025-01-14T13:23:15.028308957Z" level=info msg="TearDown network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" successfully" Jan 14 13:23:15.028345 containerd[1707]: time="2025-01-14T13:23:15.028323958Z" level=info msg="StopPodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" returns successfully" Jan 14 13:23:15.028748 containerd[1707]: time="2025-01-14T13:23:15.028656272Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\"" Jan 14 13:23:15.029168 containerd[1707]: time="2025-01-14T13:23:15.029041688Z" level=info msg="TearDown network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" successfully" Jan 14 13:23:15.029168 containerd[1707]: time="2025-01-14T13:23:15.029064689Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" returns successfully" Jan 14 13:23:15.030579 containerd[1707]: time="2025-01-14T13:23:15.029390703Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\"" Jan 14 13:23:15.030579 containerd[1707]: time="2025-01-14T13:23:15.029473907Z" level=info msg="TearDown network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" successfully" Jan 14 13:23:15.030579 containerd[1707]: time="2025-01-14T13:23:15.029487607Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" returns successfully" Jan 14 13:23:15.030579 containerd[1707]: time="2025-01-14T13:23:15.029853323Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" Jan 14 13:23:15.030579 containerd[1707]: time="2025-01-14T13:23:15.029957427Z" level=info msg="TearDown network for sandbox 
\"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" successfully" Jan 14 13:23:15.030579 containerd[1707]: time="2025-01-14T13:23:15.029972928Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" returns successfully" Jan 14 13:23:15.030579 containerd[1707]: time="2025-01-14T13:23:15.030363644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:6,}" Jan 14 13:23:15.286379 containerd[1707]: time="2025-01-14T13:23:15.286319141Z" level=error msg="Failed to destroy network for sandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.288353 containerd[1707]: time="2025-01-14T13:23:15.288306625Z" level=error msg="encountered an error cleaning up failed sandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.288543 containerd[1707]: time="2025-01-14T13:23:15.288516434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.288926 kubelet[2520]: E0114 13:23:15.288876 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.289028 kubelet[2520]: E0114 13:23:15.288945 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:15.289028 kubelet[2520]: E0114 13:23:15.288972 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zp74" Jan 14 13:23:15.289115 kubelet[2520]: E0114 13:23:15.289044 2520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zp74_calico-system(eeb30db5-16f4-4252-98a5-62dc3d0af113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zp74" podUID="eeb30db5-16f4-4252-98a5-62dc3d0af113" Jan 14 13:23:15.298066 containerd[1707]: time="2025-01-14T13:23:15.298020539Z" level=error msg="Failed to destroy network for sandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.298439 containerd[1707]: time="2025-01-14T13:23:15.298403055Z" level=error msg="encountered an error cleaning up failed sandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.298517 containerd[1707]: time="2025-01-14T13:23:15.298493059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.298790 kubelet[2520]: E0114 13:23:15.298739 2520 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.299247 kubelet[2520]: E0114 13:23:15.298874 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:15.299247 kubelet[2520]: E0114 13:23:15.298903 2520 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-mmtbd" Jan 14 13:23:15.299416 kubelet[2520]: E0114 13:23:15.298981 2520 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-mmtbd_default(f2f9ee65-c7a1-4d7b-b082-219bcf7b0367)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-mmtbd" podUID="f2f9ee65-c7a1-4d7b-b082-219bcf7b0367" Jan 14 13:23:15.460884 containerd[1707]: time="2025-01-14T13:23:15.460827169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:15.463436 containerd[1707]: time="2025-01-14T13:23:15.463372678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 14 13:23:15.466179 containerd[1707]: time="2025-01-14T13:23:15.466126695Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:15.469780 containerd[1707]: time="2025-01-14T13:23:15.469712148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:15.470668 containerd[1707]: time="2025-01-14T13:23:15.470238070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.531281113s" Jan 14 13:23:15.470668 containerd[1707]: time="2025-01-14T13:23:15.470274772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 14 13:23:15.477792 containerd[1707]: time="2025-01-14T13:23:15.477761290Z" level=info msg="CreateContainer within sandbox \"8da8e988bb71c6a45a27f847bc4a47bd4d0360da1647358963152c770ecd53c6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 13:23:15.516017 containerd[1707]: time="2025-01-14T13:23:15.515964017Z" level=info msg="CreateContainer within sandbox \"8da8e988bb71c6a45a27f847bc4a47bd4d0360da1647358963152c770ecd53c6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"47d2becbd1e74206bc511ee4323df8acca8b17c556746c2dc5d209ff50a96a3c\"" Jan 14 13:23:15.516711 containerd[1707]: time="2025-01-14T13:23:15.516593343Z" level=info msg="StartContainer for \"47d2becbd1e74206bc511ee4323df8acca8b17c556746c2dc5d209ff50a96a3c\"" Jan 14 13:23:15.540928 systemd[1]: Started cri-containerd-47d2becbd1e74206bc511ee4323df8acca8b17c556746c2dc5d209ff50a96a3c.scope - libcontainer container 47d2becbd1e74206bc511ee4323df8acca8b17c556746c2dc5d209ff50a96a3c. 
Jan 14 13:23:15.572262 containerd[1707]: time="2025-01-14T13:23:15.572136008Z" level=info msg="StartContainer for \"47d2becbd1e74206bc511ee4323df8acca8b17c556746c2dc5d209ff50a96a3c\" returns successfully" Jan 14 13:23:15.815920 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38-shm.mount: Deactivated successfully. Jan 14 13:23:15.816489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409705748.mount: Deactivated successfully. Jan 14 13:23:15.823099 kubelet[2520]: E0114 13:23:15.823024 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:15.860218 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 13:23:15.860355 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 13:23:16.022824 kubelet[2520]: I0114 13:23:16.022583 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424" Jan 14 13:23:16.023473 containerd[1707]: time="2025-01-14T13:23:16.023434720Z" level=info msg="StopPodSandbox for \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\"" Jan 14 13:23:16.023916 containerd[1707]: time="2025-01-14T13:23:16.023663230Z" level=info msg="Ensure that sandbox 54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424 in task-service has been cleanup successfully" Jan 14 13:23:16.023975 containerd[1707]: time="2025-01-14T13:23:16.023907740Z" level=info msg="TearDown network for sandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\" successfully" Jan 14 13:23:16.023975 containerd[1707]: time="2025-01-14T13:23:16.023927241Z" level=info msg="StopPodSandbox for \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\" returns successfully" Jan 14 13:23:16.026181 containerd[1707]: time="2025-01-14T13:23:16.026132435Z" level=info msg="StopPodSandbox for \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\"" Jan 14 13:23:16.026293 containerd[1707]: time="2025-01-14T13:23:16.026225439Z" level=info msg="TearDown network for sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\" successfully" Jan 14 13:23:16.026293 containerd[1707]: time="2025-01-14T13:23:16.026241139Z" level=info msg="StopPodSandbox for \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\" returns successfully" Jan 14 13:23:16.027036 systemd[1]: run-netns-cni\x2d8d710d29\x2de6d3\x2de97b\x2dfcf3\x2dda74cb5e1968.mount: Deactivated successfully. 
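
Note: the calico-node container has now started (StartContainer for 47d2becb... returned successfully after the ~6.53s image pull), and the WireGuard module load that follows is consistent with calico-node probing the kernel for optional WireGuard support. The Attempt:6 sandboxes created before it was ready are still torn down with the same nodename error, but Attempt:7 later in this log succeeds once /var/lib/calico/nodename exists. A hedged sketch of how one might wait for that readiness signal on the node; the poll interval and timeout are assumptions, not values from the log:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForNodename polls for the file calico-node creates after it starts.
    // The interval and timeout are arbitrary choices for this sketch.
    func waitForNodename(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForNodename("/var/lib/calico/nodename", 2*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("calico-node is ready; CNI ADD calls should now succeed")
    }
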
Jan 14 13:23:16.027338 containerd[1707]: time="2025-01-14T13:23:16.027309385Z" level=info msg="StopPodSandbox for \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\"" Jan 14 13:23:16.027425 containerd[1707]: time="2025-01-14T13:23:16.027398689Z" level=info msg="TearDown network for sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\" successfully" Jan 14 13:23:16.027425 containerd[1707]: time="2025-01-14T13:23:16.027413789Z" level=info msg="StopPodSandbox for \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\" returns successfully" Jan 14 13:23:16.029950 containerd[1707]: time="2025-01-14T13:23:16.029836793Z" level=info msg="StopPodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\"" Jan 14 13:23:16.030163 containerd[1707]: time="2025-01-14T13:23:16.029932797Z" level=info msg="TearDown network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" successfully" Jan 14 13:23:16.030163 containerd[1707]: time="2025-01-14T13:23:16.030090903Z" level=info msg="StopPodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" returns successfully" Jan 14 13:23:16.030822 containerd[1707]: time="2025-01-14T13:23:16.030385416Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\"" Jan 14 13:23:16.030822 containerd[1707]: time="2025-01-14T13:23:16.030474220Z" level=info msg="TearDown network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" successfully" Jan 14 13:23:16.030822 containerd[1707]: time="2025-01-14T13:23:16.030489420Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" returns successfully" Jan 14 13:23:16.031100 kubelet[2520]: I0114 13:23:16.031078 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38" Jan 14 13:23:16.032427 containerd[1707]: time="2025-01-14T13:23:16.031911681Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\"" Jan 14 13:23:16.032427 containerd[1707]: time="2025-01-14T13:23:16.032044487Z" level=info msg="TearDown network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" successfully" Jan 14 13:23:16.032427 containerd[1707]: time="2025-01-14T13:23:16.032062387Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" returns successfully" Jan 14 13:23:16.032427 containerd[1707]: time="2025-01-14T13:23:16.032054787Z" level=info msg="StopPodSandbox for \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\"" Jan 14 13:23:16.032427 containerd[1707]: time="2025-01-14T13:23:16.032313298Z" level=info msg="Ensure that sandbox c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38 in task-service has been cleanup successfully" Jan 14 13:23:16.032719 containerd[1707]: time="2025-01-14T13:23:16.032695914Z" level=info msg="TearDown network for sandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\" successfully" Jan 14 13:23:16.032819 containerd[1707]: time="2025-01-14T13:23:16.032801419Z" level=info msg="StopPodSandbox for \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\" returns successfully" Jan 14 13:23:16.035254 containerd[1707]: time="2025-01-14T13:23:16.033917766Z" level=info msg="StopPodSandbox for 
\"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\"" Jan 14 13:23:16.035254 containerd[1707]: time="2025-01-14T13:23:16.034008270Z" level=info msg="TearDown network for sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\" successfully" Jan 14 13:23:16.035254 containerd[1707]: time="2025-01-14T13:23:16.034029971Z" level=info msg="StopPodSandbox for \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\" returns successfully" Jan 14 13:23:16.035254 containerd[1707]: time="2025-01-14T13:23:16.034010070Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" Jan 14 13:23:16.035254 containerd[1707]: time="2025-01-14T13:23:16.034129275Z" level=info msg="TearDown network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" successfully" Jan 14 13:23:16.035254 containerd[1707]: time="2025-01-14T13:23:16.034140676Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" returns successfully" Jan 14 13:23:16.035724 systemd[1]: run-netns-cni\x2d63daaf5b\x2dcb47\x2dee17\x2d0dc5\x2d4bdc129736c8.mount: Deactivated successfully. Jan 14 13:23:16.036118 containerd[1707]: time="2025-01-14T13:23:16.036083558Z" level=info msg="StopPodSandbox for \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\"" Jan 14 13:23:16.036194 containerd[1707]: time="2025-01-14T13:23:16.036175162Z" level=info msg="TearDown network for sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\" successfully" Jan 14 13:23:16.036242 containerd[1707]: time="2025-01-14T13:23:16.036190463Z" level=info msg="StopPodSandbox for \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\" returns successfully" Jan 14 13:23:16.036955 containerd[1707]: time="2025-01-14T13:23:16.036321869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:7,}" Jan 14 13:23:16.038028 containerd[1707]: time="2025-01-14T13:23:16.037897136Z" level=info msg="StopPodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\"" Jan 14 13:23:16.038243 containerd[1707]: time="2025-01-14T13:23:16.038142446Z" level=info msg="TearDown network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" successfully" Jan 14 13:23:16.038243 containerd[1707]: time="2025-01-14T13:23:16.038162047Z" level=info msg="StopPodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" returns successfully" Jan 14 13:23:16.039496 containerd[1707]: time="2025-01-14T13:23:16.039340097Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\"" Jan 14 13:23:16.039496 containerd[1707]: time="2025-01-14T13:23:16.039426401Z" level=info msg="TearDown network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" successfully" Jan 14 13:23:16.039496 containerd[1707]: time="2025-01-14T13:23:16.039440501Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" returns successfully" Jan 14 13:23:16.039825 containerd[1707]: time="2025-01-14T13:23:16.039779816Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\"" Jan 14 13:23:16.039825 containerd[1707]: time="2025-01-14T13:23:16.039869020Z" level=info msg="TearDown network for sandbox 
\"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" successfully" Jan 14 13:23:16.039825 containerd[1707]: time="2025-01-14T13:23:16.039883820Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" returns successfully" Jan 14 13:23:16.040809 containerd[1707]: time="2025-01-14T13:23:16.040145131Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" Jan 14 13:23:16.040809 containerd[1707]: time="2025-01-14T13:23:16.040233935Z" level=info msg="TearDown network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" successfully" Jan 14 13:23:16.040809 containerd[1707]: time="2025-01-14T13:23:16.040247836Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" returns successfully" Jan 14 13:23:16.043770 containerd[1707]: time="2025-01-14T13:23:16.042842246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:7,}" Jan 14 13:23:16.066625 kubelet[2520]: I0114 13:23:16.066442 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nszk7" podStartSLOduration=4.879665393 podStartE2EDuration="24.06642645s" podCreationTimestamp="2025-01-14 13:22:52 +0000 UTC" firstStartedPulling="2025-01-14 13:22:56.28434625 +0000 UTC m=+4.483862908" lastFinishedPulling="2025-01-14 13:23:15.471107307 +0000 UTC m=+23.670623965" observedRunningTime="2025-01-14 13:23:16.06618094 +0000 UTC m=+24.265697698" watchObservedRunningTime="2025-01-14 13:23:16.06642645 +0000 UTC m=+24.265943208" Jan 14 13:23:16.237667 systemd-networkd[1416]: cali19b2ad71a5f: Link UP Jan 14 13:23:16.238950 systemd-networkd[1416]: cali19b2ad71a5f: Gained carrier Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.142 [INFO][3518] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.153 [INFO][3518] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0 nginx-deployment-85f456d6dd- default f2f9ee65-c7a1-4d7b-b082-219bcf7b0367 1281 0 2025-01-14 13:23:08 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.36 nginx-deployment-85f456d6dd-mmtbd eth0 default [] [] [kns.default ksa.default.default] cali19b2ad71a5f [] []}} ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" Namespace="default" Pod="nginx-deployment-85f456d6dd-mmtbd" WorkloadEndpoint="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.153 [INFO][3518] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" Namespace="default" Pod="nginx-deployment-85f456d6dd-mmtbd" WorkloadEndpoint="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.185 [INFO][3547] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" HandleID="k8s-pod-network.62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" 
Workload="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.199 [INFO][3547] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" HandleID="k8s-pod-network.62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" Workload="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292ae0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.36", "pod":"nginx-deployment-85f456d6dd-mmtbd", "timestamp":"2025-01-14 13:23:16.18576193 +0000 UTC"}, Hostname:"10.200.4.36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.199 [INFO][3547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.199 [INFO][3547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.199 [INFO][3547] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.36' Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.201 [INFO][3547] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" host="10.200.4.36" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.205 [INFO][3547] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.36" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.209 [INFO][3547] ipam/ipam.go 489: Trying affinity for 192.168.36.0/26 host="10.200.4.36" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.211 [INFO][3547] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.0/26 host="10.200.4.36" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.212 [INFO][3547] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="10.200.4.36" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.212 [INFO][3547] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" host="10.200.4.36" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.214 [INFO][3547] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5 Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.218 [INFO][3547] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" host="10.200.4.36" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.227 [INFO][3547] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.1/26] block=192.168.36.0/26 handle="k8s-pod-network.62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" host="10.200.4.36" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.227 [INFO][3547] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.1/26] handle="k8s-pod-network.62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" host="10.200.4.36" Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.227 
[INFO][3547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:23:16.251995 containerd[1707]: 2025-01-14 13:23:16.227 [INFO][3547] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.1/26] IPv6=[] ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" HandleID="k8s-pod-network.62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" Workload="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0" Jan 14 13:23:16.253253 containerd[1707]: 2025-01-14 13:23:16.229 [INFO][3518] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" Namespace="default" Pod="nginx-deployment-85f456d6dd-mmtbd" WorkloadEndpoint="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"f2f9ee65-c7a1-4d7b-b082-219bcf7b0367", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.36", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-mmtbd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali19b2ad71a5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:16.253253 containerd[1707]: 2025-01-14 13:23:16.230 [INFO][3518] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.1/32] ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" Namespace="default" Pod="nginx-deployment-85f456d6dd-mmtbd" WorkloadEndpoint="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0" Jan 14 13:23:16.253253 containerd[1707]: 2025-01-14 13:23:16.230 [INFO][3518] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19b2ad71a5f ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" Namespace="default" Pod="nginx-deployment-85f456d6dd-mmtbd" WorkloadEndpoint="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0" Jan 14 13:23:16.253253 containerd[1707]: 2025-01-14 13:23:16.239 [INFO][3518] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" Namespace="default" Pod="nginx-deployment-85f456d6dd-mmtbd" WorkloadEndpoint="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0" Jan 14 13:23:16.253253 containerd[1707]: 2025-01-14 13:23:16.240 [INFO][3518] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" Namespace="default" Pod="nginx-deployment-85f456d6dd-mmtbd" 
WorkloadEndpoint="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"f2f9ee65-c7a1-4d7b-b082-219bcf7b0367", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.36", ContainerID:"62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5", Pod:"nginx-deployment-85f456d6dd-mmtbd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali19b2ad71a5f", MAC:"26:80:e4:93:74:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:16.253253 containerd[1707]: 2025-01-14 13:23:16.250 [INFO][3518] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5" Namespace="default" Pod="nginx-deployment-85f456d6dd-mmtbd" WorkloadEndpoint="10.200.4.36-k8s-nginx--deployment--85f456d6dd--mmtbd-eth0" Jan 14 13:23:16.267948 systemd-networkd[1416]: cali53c81de8379: Link UP Jan 14 13:23:16.268864 systemd-networkd[1416]: cali53c81de8379: Gained carrier Jan 14 13:23:16.285157 containerd[1707]: time="2025-01-14T13:23:16.285062258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:23:16.285286 containerd[1707]: time="2025-01-14T13:23:16.285142561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:23:16.285286 containerd[1707]: time="2025-01-14T13:23:16.285162862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:16.285286 containerd[1707]: time="2025-01-14T13:23:16.285258166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.149 [INFO][3528] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.161 [INFO][3528] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.36-k8s-csi--node--driver--9zp74-eth0 csi-node-driver- calico-system eeb30db5-16f4-4252-98a5-62dc3d0af113 1204 0 2025-01-14 13:22:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.4.36 csi-node-driver-9zp74 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali53c81de8379 [] []}} ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Namespace="calico-system" Pod="csi-node-driver-9zp74" WorkloadEndpoint="10.200.4.36-k8s-csi--node--driver--9zp74-" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.162 [INFO][3528] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Namespace="calico-system" Pod="csi-node-driver-9zp74" WorkloadEndpoint="10.200.4.36-k8s-csi--node--driver--9zp74-eth0" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.201 [INFO][3552] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" HandleID="k8s-pod-network.8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Workload="10.200.4.36-k8s-csi--node--driver--9zp74-eth0" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.210 [INFO][3552] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" HandleID="k8s-pod-network.8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Workload="10.200.4.36-k8s-csi--node--driver--9zp74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.4.36", "pod":"csi-node-driver-9zp74", "timestamp":"2025-01-14 13:23:16.201654807 +0000 UTC"}, Hostname:"10.200.4.36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.210 [INFO][3552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.227 [INFO][3552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.227 [INFO][3552] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.36' Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.229 [INFO][3552] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" host="10.200.4.36" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.232 [INFO][3552] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.36" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.237 [INFO][3552] ipam/ipam.go 489: Trying affinity for 192.168.36.0/26 host="10.200.4.36" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.240 [INFO][3552] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.0/26 host="10.200.4.36" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.242 [INFO][3552] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="10.200.4.36" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.242 [INFO][3552] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" host="10.200.4.36" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.244 [INFO][3552] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.253 [INFO][3552] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" host="10.200.4.36" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.262 [INFO][3552] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.2/26] block=192.168.36.0/26 handle="k8s-pod-network.8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" host="10.200.4.36" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.262 [INFO][3552] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.2/26] handle="k8s-pod-network.8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" host="10.200.4.36" Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.262 [INFO][3552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
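
Note: with calico-node running, the CNI IPAM plugin serializes the two concurrent pod-network requests behind a host-wide IPAM lock, confirms this node's (10.200.4.36) affinity for the block 192.168.36.0/26, and hands out 192.168.36.1 to the nginx pod and 192.168.36.2 to csi-node-driver-9zp74. A small illustrative check that both addresses fall inside that affine /26 block and how many addresses it holds (standard library only; nothing here is Calico code):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and addresses as reported by ipam.go in the log entries above.
        block := netip.MustParsePrefix("192.168.36.0/26")
        fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))
        for _, s := range []string{"192.168.36.1", "192.168.36.2"} {
            addr := netip.MustParseAddr(s)
            fmt.Printf("%s assigned from block: %v\n", addr, block.Contains(addr))
        }
    }
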
Jan 14 13:23:16.286556 containerd[1707]: 2025-01-14 13:23:16.262 [INFO][3552] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.2/26] IPv6=[] ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" HandleID="k8s-pod-network.8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Workload="10.200.4.36-k8s-csi--node--driver--9zp74-eth0" Jan 14 13:23:16.287488 containerd[1707]: 2025-01-14 13:23:16.263 [INFO][3528] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Namespace="calico-system" Pod="csi-node-driver-9zp74" WorkloadEndpoint="10.200.4.36-k8s-csi--node--driver--9zp74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.36-k8s-csi--node--driver--9zp74-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eeb30db5-16f4-4252-98a5-62dc3d0af113", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.36", ContainerID:"", Pod:"csi-node-driver-9zp74", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53c81de8379", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:16.287488 containerd[1707]: 2025-01-14 13:23:16.263 [INFO][3528] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.2/32] ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Namespace="calico-system" Pod="csi-node-driver-9zp74" WorkloadEndpoint="10.200.4.36-k8s-csi--node--driver--9zp74-eth0" Jan 14 13:23:16.287488 containerd[1707]: 2025-01-14 13:23:16.263 [INFO][3528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53c81de8379 ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Namespace="calico-system" Pod="csi-node-driver-9zp74" WorkloadEndpoint="10.200.4.36-k8s-csi--node--driver--9zp74-eth0" Jan 14 13:23:16.287488 containerd[1707]: 2025-01-14 13:23:16.269 [INFO][3528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Namespace="calico-system" Pod="csi-node-driver-9zp74" WorkloadEndpoint="10.200.4.36-k8s-csi--node--driver--9zp74-eth0" Jan 14 13:23:16.287488 containerd[1707]: 2025-01-14 13:23:16.269 [INFO][3528] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Namespace="calico-system" Pod="csi-node-driver-9zp74" 
WorkloadEndpoint="10.200.4.36-k8s-csi--node--driver--9zp74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.36-k8s-csi--node--driver--9zp74-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eeb30db5-16f4-4252-98a5-62dc3d0af113", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.36", ContainerID:"8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac", Pod:"csi-node-driver-9zp74", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53c81de8379", MAC:"d6:b8:4b:2e:4b:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:16.287488 containerd[1707]: 2025-01-14 13:23:16.284 [INFO][3528] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac" Namespace="calico-system" Pod="csi-node-driver-9zp74" WorkloadEndpoint="10.200.4.36-k8s-csi--node--driver--9zp74-eth0" Jan 14 13:23:16.309911 systemd[1]: Started cri-containerd-62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5.scope - libcontainer container 62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5. Jan 14 13:23:16.330204 containerd[1707]: time="2025-01-14T13:23:16.329969469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:23:16.330204 containerd[1707]: time="2025-01-14T13:23:16.330083874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:23:16.331769 containerd[1707]: time="2025-01-14T13:23:16.330717101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:16.331769 containerd[1707]: time="2025-01-14T13:23:16.331613639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:16.355941 systemd[1]: Started cri-containerd-8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac.scope - libcontainer container 8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac. 
Jan 14 13:23:16.373581 containerd[1707]: time="2025-01-14T13:23:16.373439820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-mmtbd,Uid:f2f9ee65-c7a1-4d7b-b082-219bcf7b0367,Namespace:default,Attempt:7,} returns sandbox id \"62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5\"" Jan 14 13:23:16.376618 containerd[1707]: time="2025-01-14T13:23:16.376561053Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 14 13:23:16.388372 containerd[1707]: time="2025-01-14T13:23:16.388348755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zp74,Uid:eeb30db5-16f4-4252-98a5-62dc3d0af113,Namespace:calico-system,Attempt:7,} returns sandbox id \"8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac\"" Jan 14 13:23:16.823309 kubelet[2520]: E0114 13:23:16.823233 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:17.424822 kernel: bpftool[3777]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 14 13:23:17.456967 systemd-networkd[1416]: cali53c81de8379: Gained IPv6LL Jan 14 13:23:17.780187 systemd-networkd[1416]: cali19b2ad71a5f: Gained IPv6LL Jan 14 13:23:17.788126 systemd-networkd[1416]: vxlan.calico: Link UP Jan 14 13:23:17.788135 systemd-networkd[1416]: vxlan.calico: Gained carrier Jan 14 13:23:17.823920 kubelet[2520]: E0114 13:23:17.823854 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:18.824230 kubelet[2520]: E0114 13:23:18.824181 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:19.485312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786873752.mount: Deactivated successfully. 
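The recurring kubelet error above ("Unable to read config path ... /etc/kubernetes/manifests") comes from the static-pod file source: the configured manifest directory does not exist on this node. It is harmless when no static pods are expected; if the noise is unwanted, creating the directory is usually enough to silence it. A small sketch, assuming /etc/kubernetes/manifests really is the kubelet's configured staticPodPath:

package main

import (
	"log"
	"os"
)

func main() {
	// Path taken from the kubelet messages above; assumed to be the
	// configured staticPodPath. An empty directory is enough for the
	// file source to stop logging "path does not exist".
	const staticPodDir = "/etc/kubernetes/manifests"
	if err := os.MkdirAll(staticPodDir, 0o755); err != nil {
		log.Fatal(err)
	}
	log.Printf("created %s", staticPodDir)
}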
Jan 14 13:23:19.824933 systemd-networkd[1416]: vxlan.calico: Gained IPv6LL Jan 14 13:23:19.826275 kubelet[2520]: E0114 13:23:19.825287 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:19.896964 kubelet[2520]: I0114 13:23:19.896926 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 13:23:20.825952 kubelet[2520]: E0114 13:23:20.825901 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:20.889903 containerd[1707]: time="2025-01-14T13:23:20.889067750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:20.892131 containerd[1707]: time="2025-01-14T13:23:20.892076878Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 14 13:23:20.896431 containerd[1707]: time="2025-01-14T13:23:20.896382062Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:20.903835 containerd[1707]: time="2025-01-14T13:23:20.903799977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:20.904831 containerd[1707]: time="2025-01-14T13:23:20.904690015Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.528086661s" Jan 14 13:23:20.904831 containerd[1707]: time="2025-01-14T13:23:20.904723817Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 14 13:23:20.906359 containerd[1707]: time="2025-01-14T13:23:20.906165678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 14 13:23:20.906871 containerd[1707]: time="2025-01-14T13:23:20.906843307Z" level=info msg="CreateContainer within sandbox \"62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 14 13:23:20.955868 containerd[1707]: time="2025-01-14T13:23:20.955823492Z" level=info msg="CreateContainer within sandbox \"62443b013d04451e731e9f9d86f42ab1adf264fb9cd6a281c0855a342097edd5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"66be243d7bc7f6e1277a8fa5101049e6bbb656370ab544c6687667a4cbdd9289\"" Jan 14 13:23:20.956480 containerd[1707]: time="2025-01-14T13:23:20.956375715Z" level=info msg="StartContainer for \"66be243d7bc7f6e1277a8fa5101049e6bbb656370ab544c6687667a4cbdd9289\"" Jan 14 13:23:20.985904 systemd[1]: Started cri-containerd-66be243d7bc7f6e1277a8fa5101049e6bbb656370ab544c6687667a4cbdd9289.scope - libcontainer container 66be243d7bc7f6e1277a8fa5101049e6bbb656370ab544c6687667a4cbdd9289. 
Jan 14 13:23:21.012603 containerd[1707]: time="2025-01-14T13:23:21.012475003Z" level=info msg="StartContainer for \"66be243d7bc7f6e1277a8fa5101049e6bbb656370ab544c6687667a4cbdd9289\" returns successfully" Jan 14 13:23:21.080371 kubelet[2520]: I0114 13:23:21.080215 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-mmtbd" podStartSLOduration=8.550317949 podStartE2EDuration="13.080196786s" podCreationTimestamp="2025-01-14 13:23:08 +0000 UTC" firstStartedPulling="2025-01-14 13:23:16.375830822 +0000 UTC m=+24.575347480" lastFinishedPulling="2025-01-14 13:23:20.905709559 +0000 UTC m=+29.105226317" observedRunningTime="2025-01-14 13:23:21.080090782 +0000 UTC m=+29.279607540" watchObservedRunningTime="2025-01-14 13:23:21.080196786 +0000 UTC m=+29.279713544" Jan 14 13:23:21.826549 kubelet[2520]: E0114 13:23:21.826443 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:22.335242 containerd[1707]: time="2025-01-14T13:23:22.335183910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:22.338259 containerd[1707]: time="2025-01-14T13:23:22.338196938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 14 13:23:22.341862 containerd[1707]: time="2025-01-14T13:23:22.341807792Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:22.346590 containerd[1707]: time="2025-01-14T13:23:22.346539393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:22.347314 containerd[1707]: time="2025-01-14T13:23:22.347153820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.44095364s" Jan 14 13:23:22.347314 containerd[1707]: time="2025-01-14T13:23:22.347190921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 14 13:23:22.349581 containerd[1707]: time="2025-01-14T13:23:22.349554022Z" level=info msg="CreateContainer within sandbox \"8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 14 13:23:22.397607 containerd[1707]: time="2025-01-14T13:23:22.397551565Z" level=info msg="CreateContainer within sandbox \"8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"72943a7e63bde9028480f27e22661355b85d82ee7c30ad72c88e6d68f2573b35\"" Jan 14 13:23:22.398224 containerd[1707]: time="2025-01-14T13:23:22.398135790Z" level=info msg="StartContainer for \"72943a7e63bde9028480f27e22661355b85d82ee7c30ad72c88e6d68f2573b35\"" Jan 14 13:23:22.433896 systemd[1]: Started cri-containerd-72943a7e63bde9028480f27e22661355b85d82ee7c30ad72c88e6d68f2573b35.scope - libcontainer 
container 72943a7e63bde9028480f27e22661355b85d82ee7c30ad72c88e6d68f2573b35. Jan 14 13:23:22.463351 containerd[1707]: time="2025-01-14T13:23:22.463307564Z" level=info msg="StartContainer for \"72943a7e63bde9028480f27e22661355b85d82ee7c30ad72c88e6d68f2573b35\" returns successfully" Jan 14 13:23:22.464574 containerd[1707]: time="2025-01-14T13:23:22.464527716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 14 13:23:22.826761 kubelet[2520]: E0114 13:23:22.826638 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:23.827111 kubelet[2520]: E0114 13:23:23.827060 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:24.576541 containerd[1707]: time="2025-01-14T13:23:24.576485220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:24.579493 containerd[1707]: time="2025-01-14T13:23:24.579427846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 14 13:23:24.582268 containerd[1707]: time="2025-01-14T13:23:24.582203464Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:24.587313 containerd[1707]: time="2025-01-14T13:23:24.587261779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:24.588329 containerd[1707]: time="2025-01-14T13:23:24.587858704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.123295187s" Jan 14 13:23:24.588329 containerd[1707]: time="2025-01-14T13:23:24.587895706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 14 13:23:24.590081 containerd[1707]: time="2025-01-14T13:23:24.590053798Z" level=info msg="CreateContainer within sandbox \"8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 14 13:23:24.641285 containerd[1707]: time="2025-01-14T13:23:24.641240677Z" level=info msg="CreateContainer within sandbox \"8c626278249b505015c48fe369743bd2f9ca24cb22c9f6793d0def956d112fac\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f709867ffaa2e4220fa8c2eddfcc9ac35ade32d2fda4064d54ee23c95ae837c3\"" Jan 14 13:23:24.641946 containerd[1707]: time="2025-01-14T13:23:24.641917206Z" level=info msg="StartContainer for \"f709867ffaa2e4220fa8c2eddfcc9ac35ade32d2fda4064d54ee23c95ae837c3\"" Jan 14 13:23:24.675891 systemd[1]: Started cri-containerd-f709867ffaa2e4220fa8c2eddfcc9ac35ade32d2fda4064d54ee23c95ae837c3.scope - libcontainer container 
f709867ffaa2e4220fa8c2eddfcc9ac35ade32d2fda4064d54ee23c95ae837c3. Jan 14 13:23:24.708461 containerd[1707]: time="2025-01-14T13:23:24.708409936Z" level=info msg="StartContainer for \"f709867ffaa2e4220fa8c2eddfcc9ac35ade32d2fda4064d54ee23c95ae837c3\" returns successfully" Jan 14 13:23:24.827424 kubelet[2520]: E0114 13:23:24.827264 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:24.908377 kubelet[2520]: I0114 13:23:24.908338 2520 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 14 13:23:24.908377 kubelet[2520]: I0114 13:23:24.908376 2520 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 14 13:23:25.100952 kubelet[2520]: I0114 13:23:25.100808 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9zp74" podStartSLOduration=24.901377594 podStartE2EDuration="33.10079114s" podCreationTimestamp="2025-01-14 13:22:52 +0000 UTC" firstStartedPulling="2025-01-14 13:23:16.389341497 +0000 UTC m=+24.588858255" lastFinishedPulling="2025-01-14 13:23:24.588755043 +0000 UTC m=+32.788271801" observedRunningTime="2025-01-14 13:23:25.100610032 +0000 UTC m=+33.300126690" watchObservedRunningTime="2025-01-14 13:23:25.10079114 +0000 UTC m=+33.300307798" Jan 14 13:23:25.828217 kubelet[2520]: E0114 13:23:25.828143 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:26.829184 kubelet[2520]: E0114 13:23:26.829124 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:27.291649 kubelet[2520]: I0114 13:23:27.291609 2520 topology_manager.go:215] "Topology Admit Handler" podUID="dc6adcc9-d364-44bf-bdd3-d717efc4f31f" podNamespace="default" podName="nfs-server-provisioner-0" Jan 14 13:23:27.298294 systemd[1]: Created slice kubepods-besteffort-poddc6adcc9_d364_44bf_bdd3_d717efc4f31f.slice - libcontainer container kubepods-besteffort-poddc6adcc9_d364_44bf_bdd3_d717efc4f31f.slice. 
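The pod startup duration entry just above is internally consistent: podStartSLOduration equals podStartE2EDuration minus the time spent pulling images (lastFinishedPulling − firstStartedPulling, taken from the monotonic m=+ offsets). A quick Go check with the values copied from the csi-node-driver-9zp74 line:

package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) copied from the kubelet line for
	// calico-system/csi-node-driver-9zp74 above.
	firstStartedPulling := 24.588858255
	lastFinishedPulling := 32.788271801
	podStartE2E := 33.10079114

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pulls:     %.9fs\n", pull)             // ~8.199413546s
	fmt.Printf("E2E minus pulls: %.9fs\n", podStartE2E-pull) // ~24.901377594s, the logged podStartSLOduration
}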
Jan 14 13:23:27.439834 kubelet[2520]: I0114 13:23:27.439770 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/dc6adcc9-d364-44bf-bdd3-d717efc4f31f-data\") pod \"nfs-server-provisioner-0\" (UID: \"dc6adcc9-d364-44bf-bdd3-d717efc4f31f\") " pod="default/nfs-server-provisioner-0" Jan 14 13:23:27.439834 kubelet[2520]: I0114 13:23:27.439833 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgsmv\" (UniqueName: \"kubernetes.io/projected/dc6adcc9-d364-44bf-bdd3-d717efc4f31f-kube-api-access-bgsmv\") pod \"nfs-server-provisioner-0\" (UID: \"dc6adcc9-d364-44bf-bdd3-d717efc4f31f\") " pod="default/nfs-server-provisioner-0" Jan 14 13:23:27.601982 containerd[1707]: time="2025-01-14T13:23:27.601860912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:dc6adcc9-d364-44bf-bdd3-d717efc4f31f,Namespace:default,Attempt:0,}" Jan 14 13:23:27.745156 systemd-networkd[1416]: cali60e51b789ff: Link UP Jan 14 13:23:27.746612 systemd-networkd[1416]: cali60e51b789ff: Gained carrier Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.675 [INFO][4070] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.36-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default dc6adcc9-d364-44bf-bdd3-d717efc4f31f 1403 0 2025-01-14 13:23:27 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.4.36 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.36-k8s-nfs--server--provisioner--0-" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.675 [INFO][4070] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.36-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.699 [INFO][4080] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" HandleID="k8s-pod-network.3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" Workload="10.200.4.36-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.709 [INFO][4080] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" HandleID="k8s-pod-network.3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" 
Workload="10.200.4.36-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.36", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-14 13:23:27.699668476 +0000 UTC"}, Hostname:"10.200.4.36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.709 [INFO][4080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.709 [INFO][4080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.709 [INFO][4080] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.36' Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.712 [INFO][4080] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" host="10.200.4.36" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.717 [INFO][4080] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.36" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.721 [INFO][4080] ipam/ipam.go 489: Trying affinity for 192.168.36.0/26 host="10.200.4.36" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.722 [INFO][4080] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.0/26 host="10.200.4.36" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.724 [INFO][4080] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="10.200.4.36" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.724 [INFO][4080] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" host="10.200.4.36" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.726 [INFO][4080] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9 Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.730 [INFO][4080] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" host="10.200.4.36" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.741 [INFO][4080] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.3/26] block=192.168.36.0/26 handle="k8s-pod-network.3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" host="10.200.4.36" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.741 [INFO][4080] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.3/26] handle="k8s-pod-network.3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" host="10.200.4.36" Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.741 [INFO][4080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 14 13:23:27.770677 containerd[1707]: 2025-01-14 13:23:27.741 [INFO][4080] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.3/26] IPv6=[] ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" HandleID="k8s-pod-network.3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" Workload="10.200.4.36-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:27.771692 containerd[1707]: 2025-01-14 13:23:27.742 [INFO][4070] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.36-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.36-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"dc6adcc9-d364-44bf-bdd3-d717efc4f31f", ResourceVersion:"1403", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.36", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.36.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:27.771692 containerd[1707]: 2025-01-14 13:23:27.742 [INFO][4070] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.3/32] ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.36-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:27.771692 containerd[1707]: 2025-01-14 13:23:27.742 [INFO][4070] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.36-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:27.771692 containerd[1707]: 2025-01-14 13:23:27.745 [INFO][4070] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.36-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:27.772095 containerd[1707]: 2025-01-14 13:23:27.747 [INFO][4070] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.36-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.36-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"dc6adcc9-d364-44bf-bdd3-d717efc4f31f", ResourceVersion:"1403", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.36", ContainerID:"3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.36.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"c2:83:9d:cd:d4:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:27.772095 containerd[1707]: 2025-01-14 13:23:27.769 [INFO][4070] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.36-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:27.798616 containerd[1707]: time="2025-01-14T13:23:27.798376078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:23:27.798616 containerd[1707]: time="2025-01-14T13:23:27.798437381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:23:27.798616 containerd[1707]: time="2025-01-14T13:23:27.798458582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:27.798616 containerd[1707]: time="2025-01-14T13:23:27.798536685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:27.824912 systemd[1]: Started cri-containerd-3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9.scope - libcontainer container 3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9. 
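The WorkloadEndpoint dump above prints the nfs-server-provisioner ports in hex (0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296); these are the same decimal ports listed in the earlier "found existing endpoint" entry (2049, 32803, 20048, 875, 111, 662). A short Go conversion, for reference:

package main

import "fmt"

func main() {
	// Hex values from the endpoint dump above; the comments give the
	// decimal ports from the earlier endpoint summary.
	ports := []struct {
		name string
		port uint16
	}{
		{"nfs", 0x801},       // 2049
		{"nlockmgr", 0x8023}, // 32803
		{"mountd", 0x4e50},   // 20048
		{"rquotad", 0x36b},   // 875
		{"rpcbind", 0x6f},    // 111
		{"statd", 0x296},     // 662
	}
	for _, p := range ports {
		fmt.Printf("%-8s 0x%04x = %d\n", p.name, p.port, p.port)
	}
}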
Jan 14 13:23:27.829990 kubelet[2520]: E0114 13:23:27.829936 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:27.864796 containerd[1707]: time="2025-01-14T13:23:27.863296142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:dc6adcc9-d364-44bf-bdd3-d717efc4f31f,Namespace:default,Attempt:0,} returns sandbox id \"3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9\"" Jan 14 13:23:27.866475 containerd[1707]: time="2025-01-14T13:23:27.866436376Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 14 13:23:28.830403 kubelet[2520]: E0114 13:23:28.830353 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:29.361075 systemd-networkd[1416]: cali60e51b789ff: Gained IPv6LL Jan 14 13:23:29.830725 kubelet[2520]: E0114 13:23:29.830676 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:30.831507 kubelet[2520]: E0114 13:23:30.831459 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:31.832352 kubelet[2520]: E0114 13:23:31.832301 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:32.806323 kubelet[2520]: E0114 13:23:32.806275 2520 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:32.832663 kubelet[2520]: E0114 13:23:32.832608 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:33.833557 kubelet[2520]: E0114 13:23:33.833491 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:34.127677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1401568224.mount: Deactivated successfully. 
Jan 14 13:23:34.834202 kubelet[2520]: E0114 13:23:34.833942 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:35.834470 kubelet[2520]: E0114 13:23:35.834397 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:36.128267 containerd[1707]: time="2025-01-14T13:23:36.128121179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:36.132217 containerd[1707]: time="2025-01-14T13:23:36.132152621Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 14 13:23:36.137339 containerd[1707]: time="2025-01-14T13:23:36.137279674Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:36.143537 containerd[1707]: time="2025-01-14T13:23:36.143492338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:36.144402 containerd[1707]: time="2025-01-14T13:23:36.144239046Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 8.277750968s" Jan 14 13:23:36.144402 containerd[1707]: time="2025-01-14T13:23:36.144275546Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 14 13:23:36.146963 containerd[1707]: time="2025-01-14T13:23:36.146934974Z" level=info msg="CreateContainer within sandbox \"3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 14 13:23:36.192409 containerd[1707]: time="2025-01-14T13:23:36.192354744Z" level=info msg="CreateContainer within sandbox \"3f9d924ef55340c110ce977c419efe785b1875465243bbc0b4aa253149ff0ab9\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"59e594052e2d3b1465b34d9205f47d3b932509198c47f72eaebd8a08305e8042\"" Jan 14 13:23:36.193101 containerd[1707]: time="2025-01-14T13:23:36.192949250Z" level=info msg="StartContainer for \"59e594052e2d3b1465b34d9205f47d3b932509198c47f72eaebd8a08305e8042\"" Jan 14 13:23:36.224404 systemd[1]: run-containerd-runc-k8s.io-59e594052e2d3b1465b34d9205f47d3b932509198c47f72eaebd8a08305e8042-runc.sM73OT.mount: Deactivated successfully. Jan 14 13:23:36.233907 systemd[1]: Started cri-containerd-59e594052e2d3b1465b34d9205f47d3b932509198c47f72eaebd8a08305e8042.scope - libcontainer container 59e594052e2d3b1465b34d9205f47d3b932509198c47f72eaebd8a08305e8042. 
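For a sense of scale, the nfs-provisioner pull above fetched roughly 91,039,414 bytes in 8.277750968s, on the order of 11 MB/s on this node (a rough figure; "bytes read" is what containerd fetched for the pull). Worked out in Go:

package main

import "fmt"

func main() {
	// Figures copied from the containerd lines above for
	// registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8.
	bytesRead := 91039414.0 // bytes fetched during the pull
	seconds := 8.277750968  // reported pull duration

	fmt.Printf("~%.1f MB/s effective pull rate\n", bytesRead/seconds/1e6) // ~11.0 MB/s
}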
Jan 14 13:23:36.260134 containerd[1707]: time="2025-01-14T13:23:36.260076844Z" level=info msg="StartContainer for \"59e594052e2d3b1465b34d9205f47d3b932509198c47f72eaebd8a08305e8042\" returns successfully" Jan 14 13:23:36.835349 kubelet[2520]: E0114 13:23:36.835300 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:37.128509 kubelet[2520]: I0114 13:23:37.128321 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.848629511 podStartE2EDuration="10.128305124s" podCreationTimestamp="2025-01-14 13:23:27 +0000 UTC" firstStartedPulling="2025-01-14 13:23:27.865710045 +0000 UTC m=+36.065226803" lastFinishedPulling="2025-01-14 13:23:36.145385758 +0000 UTC m=+44.344902416" observedRunningTime="2025-01-14 13:23:37.128157223 +0000 UTC m=+45.327673981" watchObservedRunningTime="2025-01-14 13:23:37.128305124 +0000 UTC m=+45.327821882" Jan 14 13:23:37.836005 kubelet[2520]: E0114 13:23:37.835949 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:38.837159 kubelet[2520]: E0114 13:23:38.837101 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:39.838285 kubelet[2520]: E0114 13:23:39.838226 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:40.838755 kubelet[2520]: E0114 13:23:40.838698 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:41.839653 kubelet[2520]: E0114 13:23:41.839599 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:42.840455 kubelet[2520]: E0114 13:23:42.840395 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:43.841574 kubelet[2520]: E0114 13:23:43.841527 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:44.842211 kubelet[2520]: E0114 13:23:44.842149 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:45.843129 kubelet[2520]: E0114 13:23:45.843070 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:46.843383 kubelet[2520]: E0114 13:23:46.843322 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:47.844335 kubelet[2520]: E0114 13:23:47.844280 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:48.844599 kubelet[2520]: E0114 13:23:48.844540 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:49.845768 kubelet[2520]: E0114 13:23:49.845601 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:50.846481 kubelet[2520]: E0114 13:23:50.846425 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:51.846960 
kubelet[2520]: E0114 13:23:51.846899 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:52.806963 kubelet[2520]: E0114 13:23:52.806881 2520 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:52.847275 containerd[1707]: time="2025-01-14T13:23:52.845527024Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" Jan 14 13:23:52.847275 containerd[1707]: time="2025-01-14T13:23:52.845777615Z" level=info msg="TearDown network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" successfully" Jan 14 13:23:52.847275 containerd[1707]: time="2025-01-14T13:23:52.845801014Z" level=info msg="StopPodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" returns successfully" Jan 14 13:23:52.847275 containerd[1707]: time="2025-01-14T13:23:52.846256997Z" level=info msg="RemovePodSandbox for \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" Jan 14 13:23:52.847275 containerd[1707]: time="2025-01-14T13:23:52.846293096Z" level=info msg="Forcibly stopping sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\"" Jan 14 13:23:52.847275 containerd[1707]: time="2025-01-14T13:23:52.846387992Z" level=info msg="TearDown network for sandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" successfully" Jan 14 13:23:52.851555 kubelet[2520]: E0114 13:23:52.851400 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:52.860116 containerd[1707]: time="2025-01-14T13:23:52.860064689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:52.860235 containerd[1707]: time="2025-01-14T13:23:52.860129387Z" level=info msg="RemovePodSandbox \"7459e85db6471d76acb616dd7832c4cc9c76a4a7986cd8f242ecf3734db74e0b\" returns successfully" Jan 14 13:23:52.860653 containerd[1707]: time="2025-01-14T13:23:52.860578970Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\"" Jan 14 13:23:52.860809 containerd[1707]: time="2025-01-14T13:23:52.860776963Z" level=info msg="TearDown network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" successfully" Jan 14 13:23:52.860809 containerd[1707]: time="2025-01-14T13:23:52.860802862Z" level=info msg="StopPodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" returns successfully" Jan 14 13:23:52.861226 containerd[1707]: time="2025-01-14T13:23:52.861170149Z" level=info msg="RemovePodSandbox for \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\"" Jan 14 13:23:52.861226 containerd[1707]: time="2025-01-14T13:23:52.861204047Z" level=info msg="Forcibly stopping sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\"" Jan 14 13:23:52.861360 containerd[1707]: time="2025-01-14T13:23:52.861300444Z" level=info msg="TearDown network for sandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" successfully" Jan 14 13:23:52.869674 containerd[1707]: time="2025-01-14T13:23:52.869644737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:23:52.869852 containerd[1707]: time="2025-01-14T13:23:52.869686736Z" level=info msg="RemovePodSandbox \"7407a7d1aa912a4d1343e8b51de7902b79f67639afc5966bff7969202e527be5\" returns successfully" Jan 14 13:23:52.869995 containerd[1707]: time="2025-01-14T13:23:52.869976125Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\"" Jan 14 13:23:52.870165 containerd[1707]: time="2025-01-14T13:23:52.870066022Z" level=info msg="TearDown network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" successfully" Jan 14 13:23:52.870165 containerd[1707]: time="2025-01-14T13:23:52.870083521Z" level=info msg="StopPodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" returns successfully" Jan 14 13:23:52.870372 containerd[1707]: time="2025-01-14T13:23:52.870343711Z" level=info msg="RemovePodSandbox for \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\"" Jan 14 13:23:52.870436 containerd[1707]: time="2025-01-14T13:23:52.870374810Z" level=info msg="Forcibly stopping sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\"" Jan 14 13:23:52.870551 containerd[1707]: time="2025-01-14T13:23:52.870447408Z" level=info msg="TearDown network for sandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" successfully" Jan 14 13:23:52.880070 containerd[1707]: time="2025-01-14T13:23:52.880037655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:52.880188 containerd[1707]: time="2025-01-14T13:23:52.880082053Z" level=info msg="RemovePodSandbox \"f34074ca57ce98b9b93371c6b5523f439b9ca2318a9a6afa9022ebc7c5a1e703\" returns successfully" Jan 14 13:23:52.880490 containerd[1707]: time="2025-01-14T13:23:52.880401742Z" level=info msg="StopPodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\"" Jan 14 13:23:52.880594 containerd[1707]: time="2025-01-14T13:23:52.880502338Z" level=info msg="TearDown network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" successfully" Jan 14 13:23:52.880594 containerd[1707]: time="2025-01-14T13:23:52.880517037Z" level=info msg="StopPodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" returns successfully" Jan 14 13:23:52.880939 containerd[1707]: time="2025-01-14T13:23:52.880913923Z" level=info msg="RemovePodSandbox for \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\"" Jan 14 13:23:52.881012 containerd[1707]: time="2025-01-14T13:23:52.880944122Z" level=info msg="Forcibly stopping sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\"" Jan 14 13:23:52.881085 containerd[1707]: time="2025-01-14T13:23:52.881034618Z" level=info msg="TearDown network for sandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" successfully" Jan 14 13:23:52.891982 containerd[1707]: time="2025-01-14T13:23:52.890632965Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:23:52.891982 containerd[1707]: time="2025-01-14T13:23:52.890672964Z" level=info msg="RemovePodSandbox \"fc1c5a0cbc7dd895dc0380e6a079f5723409dcd51b267e92af2c4eb7831be5c1\" returns successfully" Jan 14 13:23:52.892200 containerd[1707]: time="2025-01-14T13:23:52.892175309Z" level=info msg="StopPodSandbox for \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\"" Jan 14 13:23:52.892291 containerd[1707]: time="2025-01-14T13:23:52.892271505Z" level=info msg="TearDown network for sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\" successfully" Jan 14 13:23:52.892526 containerd[1707]: time="2025-01-14T13:23:52.892287205Z" level=info msg="StopPodSandbox for \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\" returns successfully" Jan 14 13:23:52.892677 containerd[1707]: time="2025-01-14T13:23:52.892651091Z" level=info msg="RemovePodSandbox for \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\"" Jan 14 13:23:52.892759 containerd[1707]: time="2025-01-14T13:23:52.892678790Z" level=info msg="Forcibly stopping sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\"" Jan 14 13:23:52.892813 containerd[1707]: time="2025-01-14T13:23:52.892774387Z" level=info msg="TearDown network for sandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\" successfully" Jan 14 13:23:52.900267 containerd[1707]: time="2025-01-14T13:23:52.900089478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:52.900267 containerd[1707]: time="2025-01-14T13:23:52.900136680Z" level=info msg="RemovePodSandbox \"07975f557ea67159bffd1637267b62f5aabbade90f95632ee192225b4d03b1eb\" returns successfully" Jan 14 13:23:52.901147 containerd[1707]: time="2025-01-14T13:23:52.901114117Z" level=info msg="StopPodSandbox for \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\"" Jan 14 13:23:52.901233 containerd[1707]: time="2025-01-14T13:23:52.901213121Z" level=info msg="TearDown network for sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\" successfully" Jan 14 13:23:52.901233 containerd[1707]: time="2025-01-14T13:23:52.901228322Z" level=info msg="StopPodSandbox for \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\" returns successfully" Jan 14 13:23:52.902303 containerd[1707]: time="2025-01-14T13:23:52.902275862Z" level=info msg="RemovePodSandbox for \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\"" Jan 14 13:23:52.902373 containerd[1707]: time="2025-01-14T13:23:52.902318264Z" level=info msg="Forcibly stopping sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\"" Jan 14 13:23:52.902440 containerd[1707]: time="2025-01-14T13:23:52.902395366Z" level=info msg="TearDown network for sandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\" successfully" Jan 14 13:23:52.927256 containerd[1707]: time="2025-01-14T13:23:52.927214115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:23:52.927434 containerd[1707]: time="2025-01-14T13:23:52.927265017Z" level=info msg="RemovePodSandbox \"591b9b29b6d9f4d5dd740e26e26f09f6e14f81834b87baca5938d884099f75f8\" returns successfully" Jan 14 13:23:52.927696 containerd[1707]: time="2025-01-14T13:23:52.927622831Z" level=info msg="StopPodSandbox for \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\"" Jan 14 13:23:52.927818 containerd[1707]: time="2025-01-14T13:23:52.927724735Z" level=info msg="TearDown network for sandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\" successfully" Jan 14 13:23:52.927818 containerd[1707]: time="2025-01-14T13:23:52.927761036Z" level=info msg="StopPodSandbox for \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\" returns successfully" Jan 14 13:23:52.928131 containerd[1707]: time="2025-01-14T13:23:52.928084349Z" level=info msg="RemovePodSandbox for \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\"" Jan 14 13:23:52.928131 containerd[1707]: time="2025-01-14T13:23:52.928114450Z" level=info msg="Forcibly stopping sandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\"" Jan 14 13:23:52.928255 containerd[1707]: time="2025-01-14T13:23:52.928189653Z" level=info msg="TearDown network for sandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\" successfully" Jan 14 13:23:52.939597 containerd[1707]: time="2025-01-14T13:23:52.939547387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:52.939709 containerd[1707]: time="2025-01-14T13:23:52.939602989Z" level=info msg="RemovePodSandbox \"c0c55975f538813ce2f93de83a281ee91e9e6b6e69811ea8416af5df4b4fbe38\" returns successfully" Jan 14 13:23:52.940088 containerd[1707]: time="2025-01-14T13:23:52.940007605Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" Jan 14 13:23:52.940190 containerd[1707]: time="2025-01-14T13:23:52.940131609Z" level=info msg="TearDown network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" successfully" Jan 14 13:23:52.940190 containerd[1707]: time="2025-01-14T13:23:52.940153610Z" level=info msg="StopPodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" returns successfully" Jan 14 13:23:52.940518 containerd[1707]: time="2025-01-14T13:23:52.940488223Z" level=info msg="RemovePodSandbox for \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" Jan 14 13:23:52.940611 containerd[1707]: time="2025-01-14T13:23:52.940528125Z" level=info msg="Forcibly stopping sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\"" Jan 14 13:23:52.940671 containerd[1707]: time="2025-01-14T13:23:52.940619528Z" level=info msg="TearDown network for sandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" successfully" Jan 14 13:23:52.949642 containerd[1707]: time="2025-01-14T13:23:52.949611272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:23:52.949773 containerd[1707]: time="2025-01-14T13:23:52.949653774Z" level=info msg="RemovePodSandbox \"8cf5e22da89cb708a885d605a07ef2070fd3b22a0be4ccd7f79b64626e7ce001\" returns successfully" Jan 14 13:23:52.950099 containerd[1707]: time="2025-01-14T13:23:52.950068689Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\"" Jan 14 13:23:52.950209 containerd[1707]: time="2025-01-14T13:23:52.950184294Z" level=info msg="TearDown network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" successfully" Jan 14 13:23:52.950209 containerd[1707]: time="2025-01-14T13:23:52.950204095Z" level=info msg="StopPodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" returns successfully" Jan 14 13:23:52.950586 containerd[1707]: time="2025-01-14T13:23:52.950544908Z" level=info msg="RemovePodSandbox for \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\"" Jan 14 13:23:52.950586 containerd[1707]: time="2025-01-14T13:23:52.950576709Z" level=info msg="Forcibly stopping sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\"" Jan 14 13:23:52.950705 containerd[1707]: time="2025-01-14T13:23:52.950651512Z" level=info msg="TearDown network for sandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" successfully" Jan 14 13:23:52.964262 containerd[1707]: time="2025-01-14T13:23:52.964181629Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:52.964262 containerd[1707]: time="2025-01-14T13:23:52.964244031Z" level=info msg="RemovePodSandbox \"7f58105d8b4a364211dd59166f050c72188f8628daa79103bd166bcbeeb29c80\" returns successfully" Jan 14 13:23:52.964841 containerd[1707]: time="2025-01-14T13:23:52.964775952Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\"" Jan 14 13:23:52.965060 containerd[1707]: time="2025-01-14T13:23:52.964937258Z" level=info msg="TearDown network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" successfully" Jan 14 13:23:52.965060 containerd[1707]: time="2025-01-14T13:23:52.964963959Z" level=info msg="StopPodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" returns successfully" Jan 14 13:23:52.965749 containerd[1707]: time="2025-01-14T13:23:52.965439977Z" level=info msg="RemovePodSandbox for \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\"" Jan 14 13:23:52.965749 containerd[1707]: time="2025-01-14T13:23:52.965476879Z" level=info msg="Forcibly stopping sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\"" Jan 14 13:23:52.965749 containerd[1707]: time="2025-01-14T13:23:52.965546281Z" level=info msg="TearDown network for sandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" successfully" Jan 14 13:23:52.976417 containerd[1707]: time="2025-01-14T13:23:52.976391896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:23:52.976564 containerd[1707]: time="2025-01-14T13:23:52.976433998Z" level=info msg="RemovePodSandbox \"d7edcd6d8d2d047465664b2b56221b98b0385bef6bafa8608941b5611d433691\" returns successfully" Jan 14 13:23:52.976752 containerd[1707]: time="2025-01-14T13:23:52.976710608Z" level=info msg="StopPodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\"" Jan 14 13:23:52.977077 containerd[1707]: time="2025-01-14T13:23:52.976823212Z" level=info msg="TearDown network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" successfully" Jan 14 13:23:52.977077 containerd[1707]: time="2025-01-14T13:23:52.976842713Z" level=info msg="StopPodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" returns successfully" Jan 14 13:23:52.977194 containerd[1707]: time="2025-01-14T13:23:52.977103623Z" level=info msg="RemovePodSandbox for \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\"" Jan 14 13:23:52.977194 containerd[1707]: time="2025-01-14T13:23:52.977126624Z" level=info msg="Forcibly stopping sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\"" Jan 14 13:23:52.977280 containerd[1707]: time="2025-01-14T13:23:52.977199427Z" level=info msg="TearDown network for sandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" successfully" Jan 14 13:23:52.990217 containerd[1707]: time="2025-01-14T13:23:52.990182423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:52.990349 containerd[1707]: time="2025-01-14T13:23:52.990232725Z" level=info msg="RemovePodSandbox \"c681ff4bd97c84805c2980c9c4e60ebf254ee45fe8fca737430c31e8d1cce3bd\" returns successfully" Jan 14 13:23:52.990581 containerd[1707]: time="2025-01-14T13:23:52.990554337Z" level=info msg="StopPodSandbox for \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\"" Jan 14 13:23:52.990718 containerd[1707]: time="2025-01-14T13:23:52.990643841Z" level=info msg="TearDown network for sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\" successfully" Jan 14 13:23:52.990718 containerd[1707]: time="2025-01-14T13:23:52.990666042Z" level=info msg="StopPodSandbox for \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\" returns successfully" Jan 14 13:23:52.991048 containerd[1707]: time="2025-01-14T13:23:52.990946552Z" level=info msg="RemovePodSandbox for \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\"" Jan 14 13:23:52.991048 containerd[1707]: time="2025-01-14T13:23:52.990972853Z" level=info msg="Forcibly stopping sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\"" Jan 14 13:23:52.991182 containerd[1707]: time="2025-01-14T13:23:52.991047056Z" level=info msg="TearDown network for sandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\" successfully" Jan 14 13:23:52.998908 containerd[1707]: time="2025-01-14T13:23:52.998878756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:23:52.999003 containerd[1707]: time="2025-01-14T13:23:52.998917857Z" level=info msg="RemovePodSandbox \"ef87c08bb4d1c006e63ce6d19cc072461c2200117f7140a3dfccaca9f565d213\" returns successfully" Jan 14 13:23:52.999338 containerd[1707]: time="2025-01-14T13:23:52.999283571Z" level=info msg="StopPodSandbox for \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\"" Jan 14 13:23:52.999429 containerd[1707]: time="2025-01-14T13:23:52.999375375Z" level=info msg="TearDown network for sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\" successfully" Jan 14 13:23:52.999429 containerd[1707]: time="2025-01-14T13:23:52.999390675Z" level=info msg="StopPodSandbox for \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\" returns successfully" Jan 14 13:23:52.999725 containerd[1707]: time="2025-01-14T13:23:52.999696987Z" level=info msg="RemovePodSandbox for \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\"" Jan 14 13:23:52.999818 containerd[1707]: time="2025-01-14T13:23:52.999726688Z" level=info msg="Forcibly stopping sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\"" Jan 14 13:23:52.999862 containerd[1707]: time="2025-01-14T13:23:52.999818592Z" level=info msg="TearDown network for sandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\" successfully" Jan 14 13:23:53.010626 containerd[1707]: time="2025-01-14T13:23:53.010592604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
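The surrounding entries record containerd's cleanup pass over stale pod sandboxes: for each ID it tears down the network, logs a warning because the sandbox can no longer be found when the container event is emitted, and still reports RemovePodSandbox as successful. A minimal sketch of triaging such a run offline is below; it assumes the journal excerpt has been saved to a local text file, and the file path and regular expressions are illustrative rather than part of any tool that appears in the log.

```python
import re
from collections import defaultdict

# Hypothetical helper: summarize the sandbox-cleanup entries in a saved journal
# excerpt. The file path below is an assumption for illustration only.
LOG_PATH = "journal-excerpt.log"

# Sandbox IDs appear as 64-character hex strings inside escaped quotes.
REMOVED = re.compile(r'RemovePodSandbox \\"([0-9a-f]{64})\\" returns successfully')
NOT_FOUND = re.compile(r'sandboxID \\"([0-9a-f]{64})\\": an error occurred when try to find sandbox: not found')

def summarize(path: str) -> None:
    counts = defaultdict(lambda: {"removed": 0, "not_found": 0})
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            for sandbox_id in REMOVED.findall(line):
                counts[sandbox_id]["removed"] += 1
            for sandbox_id in NOT_FOUND.findall(line):
                counts[sandbox_id]["not_found"] += 1
    for sandbox_id, c in sorted(counts.items()):
        print(sandbox_id[:12], "removed:", c["removed"], "not-found warnings:", c["not_found"])

if __name__ == "__main__":
    summarize(LOG_PATH)
```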
Jan 14 13:23:53.010786 containerd[1707]: time="2025-01-14T13:23:53.010638705Z" level=info msg="RemovePodSandbox \"550f0dd44f5da73eb9905e050dc203ff796dae33859b9fd4fbe0d4b55cdc3f29\" returns successfully" Jan 14 13:23:53.011103 containerd[1707]: time="2025-01-14T13:23:53.010997419Z" level=info msg="StopPodSandbox for \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\"" Jan 14 13:23:53.011196 containerd[1707]: time="2025-01-14T13:23:53.011103723Z" level=info msg="TearDown network for sandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\" successfully" Jan 14 13:23:53.011196 containerd[1707]: time="2025-01-14T13:23:53.011118324Z" level=info msg="StopPodSandbox for \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\" returns successfully" Jan 14 13:23:53.011464 containerd[1707]: time="2025-01-14T13:23:53.011428936Z" level=info msg="RemovePodSandbox for \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\"" Jan 14 13:23:53.011531 containerd[1707]: time="2025-01-14T13:23:53.011455637Z" level=info msg="Forcibly stopping sandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\"" Jan 14 13:23:53.011670 containerd[1707]: time="2025-01-14T13:23:53.011603342Z" level=info msg="TearDown network for sandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\" successfully" Jan 14 13:23:53.018466 containerd[1707]: time="2025-01-14T13:23:53.018439404Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 13:23:53.018557 containerd[1707]: time="2025-01-14T13:23:53.018479605Z" level=info msg="RemovePodSandbox \"54bdbd2942d6a412a276dc5c24d1ad589111fac9092a7a6e1d17376f00a59424\" returns successfully" Jan 14 13:23:53.851799 kubelet[2520]: E0114 13:23:53.851728 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:54.852862 kubelet[2520]: E0114 13:23:54.852804 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:55.853296 kubelet[2520]: E0114 13:23:55.853238 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:56.853765 kubelet[2520]: E0114 13:23:56.853705 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:57.854882 kubelet[2520]: E0114 13:23:57.854821 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:58.855410 kubelet[2520]: E0114 13:23:58.855344 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:59.855749 kubelet[2520]: E0114 13:23:59.855683 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:24:00.856286 kubelet[2520]: E0114 13:24:00.856229 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:24:01.284625 kubelet[2520]: I0114 13:24:01.284574 2520 topology_manager.go:215] "Topology Admit Handler" podUID="28a4fc84-115c-4b9e-bc58-3aa5cb338b54" podNamespace="default" 
podName="test-pod-1" Jan 14 13:24:01.290425 systemd[1]: Created slice kubepods-besteffort-pod28a4fc84_115c_4b9e_bc58_3aa5cb338b54.slice - libcontainer container kubepods-besteffort-pod28a4fc84_115c_4b9e_bc58_3aa5cb338b54.slice. Jan 14 13:24:01.427262 kubelet[2520]: I0114 13:24:01.427162 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6tq\" (UniqueName: \"kubernetes.io/projected/28a4fc84-115c-4b9e-bc58-3aa5cb338b54-kube-api-access-sx6tq\") pod \"test-pod-1\" (UID: \"28a4fc84-115c-4b9e-bc58-3aa5cb338b54\") " pod="default/test-pod-1" Jan 14 13:24:01.427262 kubelet[2520]: I0114 13:24:01.427225 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bb8561ee-f430-42ef-b9c7-c4372f78e973\" (UniqueName: \"kubernetes.io/nfs/28a4fc84-115c-4b9e-bc58-3aa5cb338b54-pvc-bb8561ee-f430-42ef-b9c7-c4372f78e973\") pod \"test-pod-1\" (UID: \"28a4fc84-115c-4b9e-bc58-3aa5cb338b54\") " pod="default/test-pod-1" Jan 14 13:24:01.636768 kernel: FS-Cache: Loaded Jan 14 13:24:01.744684 kernel: RPC: Registered named UNIX socket transport module. Jan 14 13:24:01.744820 kernel: RPC: Registered udp transport module. Jan 14 13:24:01.744843 kernel: RPC: Registered tcp transport module. Jan 14 13:24:01.747722 kernel: RPC: Registered tcp-with-tls transport module. Jan 14 13:24:01.747793 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 14 13:24:01.856998 kubelet[2520]: E0114 13:24:01.856961 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:24:02.163999 kernel: NFS: Registering the id_resolver key type Jan 14 13:24:02.164140 kernel: Key type id_resolver registered Jan 14 13:24:02.164164 kernel: Key type id_legacy registered Jan 14 13:24:02.280335 nfsidmap[4307]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.0-a-d0a677fe50' Jan 14 13:24:02.284970 nfsidmap[4308]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.0-a-d0a677fe50' Jan 14 13:24:02.494269 containerd[1707]: time="2025-01-14T13:24:02.493916319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:28a4fc84-115c-4b9e-bc58-3aa5cb338b54,Namespace:default,Attempt:0,}" Jan 14 13:24:02.631647 systemd-networkd[1416]: cali5ec59c6bf6e: Link UP Jan 14 13:24:02.632682 systemd-networkd[1416]: cali5ec59c6bf6e: Gained carrier Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.564 [INFO][4309] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.36-k8s-test--pod--1-eth0 default 28a4fc84-115c-4b9e-bc58-3aa5cb338b54 1511 0 2025-01-14 13:23:28 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.36 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.36-k8s-test--pod--1-" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.564 [INFO][4309] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.36-k8s-test--pod--1-eth0" Jan 14 
13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.588 [INFO][4320] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" HandleID="k8s-pod-network.400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Workload="10.200.4.36-k8s-test--pod--1-eth0" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.597 [INFO][4320] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" HandleID="k8s-pod-network.400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Workload="10.200.4.36-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b70), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.36", "pod":"test-pod-1", "timestamp":"2025-01-14 13:24:02.588151412 +0000 UTC"}, Hostname:"10.200.4.36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.597 [INFO][4320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.598 [INFO][4320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.598 [INFO][4320] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.36' Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.599 [INFO][4320] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" host="10.200.4.36" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.602 [INFO][4320] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.36" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.605 [INFO][4320] ipam/ipam.go 489: Trying affinity for 192.168.36.0/26 host="10.200.4.36" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.607 [INFO][4320] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.0/26 host="10.200.4.36" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.609 [INFO][4320] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="10.200.4.36" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.609 [INFO][4320] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" host="10.200.4.36" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.610 [INFO][4320] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.614 [INFO][4320] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" host="10.200.4.36" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.626 [INFO][4320] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.4/26] block=192.168.36.0/26 handle="k8s-pod-network.400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" host="10.200.4.36" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.626 [INFO][4320] 
ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.4/26] handle="k8s-pod-network.400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" host="10.200.4.36" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.626 [INFO][4320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.626 [INFO][4320] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.4/26] IPv6=[] ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" HandleID="k8s-pod-network.400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Workload="10.200.4.36-k8s-test--pod--1-eth0" Jan 14 13:24:02.645673 containerd[1707]: 2025-01-14 13:24:02.627 [INFO][4309] cni-plugin/k8s.go 386: Populated endpoint ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.36-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.36-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"28a4fc84-115c-4b9e-bc58-3aa5cb338b54", ResourceVersion:"1511", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.36", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:24:02.646896 containerd[1707]: 2025-01-14 13:24:02.627 [INFO][4309] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.4/32] ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.36-k8s-test--pod--1-eth0" Jan 14 13:24:02.646896 containerd[1707]: 2025-01-14 13:24:02.627 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.36-k8s-test--pod--1-eth0" Jan 14 13:24:02.646896 containerd[1707]: 2025-01-14 13:24:02.633 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.36-k8s-test--pod--1-eth0" Jan 14 13:24:02.646896 containerd[1707]: 2025-01-14 13:24:02.634 [INFO][4309] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.36-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.36-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"28a4fc84-115c-4b9e-bc58-3aa5cb338b54", ResourceVersion:"1511", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.36", ContainerID:"400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"3e:40:0d:0e:5e:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:24:02.646896 containerd[1707]: 2025-01-14 13:24:02.644 [INFO][4309] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.36-k8s-test--pod--1-eth0" Jan 14 13:24:02.679399 containerd[1707]: time="2025-01-14T13:24:02.678863989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:24:02.679399 containerd[1707]: time="2025-01-14T13:24:02.678934592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:24:02.679399 containerd[1707]: time="2025-01-14T13:24:02.678955892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:02.679399 containerd[1707]: time="2025-01-14T13:24:02.679059796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:02.706987 systemd[1]: Started cri-containerd-400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af.scope - libcontainer container 400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af. 
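In the IPAM entries above, Calico confirms this node's affinity for the block 192.168.36.0/26, assigns 192.168.36.4 from it, and records the address on the WorkloadEndpoint as a /32 host route behind the cali5ec59c6bf6e interface. The arithmetic behind a /26 affinity block can be sketched with the Python standard library alone; the values are taken from the log, and nothing here talks to Calico itself.

```python
import ipaddress

# Values taken from the Calico IPAM entries above.
block = ipaddress.ip_network("192.168.36.0/26")
assigned = ipaddress.ip_address("192.168.36.4")

# A /26 block covers 64 addresses; Calico hands them out to pods on the node
# that holds the block's affinity (10.200.4.36 in this log).
print(block.num_addresses)        # 64
print(assigned in block)          # True
print(list(block.hosts())[:4])    # first few host addresses in the block

# The WorkloadEndpoint stores the pod address as a host route.
print(ipaddress.ip_network(f"{assigned}/32"))   # 192.168.36.4/32
```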
Jan 14 13:24:02.748551 containerd[1707]: time="2025-01-14T13:24:02.748438073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:28a4fc84-115c-4b9e-bc58-3aa5cb338b54,Namespace:default,Attempt:0,} returns sandbox id \"400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af\"" Jan 14 13:24:02.751133 containerd[1707]: time="2025-01-14T13:24:02.751089460Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 14 13:24:02.858016 kubelet[2520]: E0114 13:24:02.857778 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:24:03.149125 containerd[1707]: time="2025-01-14T13:24:03.149070821Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:24:03.152899 containerd[1707]: time="2025-01-14T13:24:03.152834545Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 14 13:24:03.155463 containerd[1707]: time="2025-01-14T13:24:03.155418430Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 404.288668ms" Jan 14 13:24:03.155463 containerd[1707]: time="2025-01-14T13:24:03.155452031Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 14 13:24:03.157361 containerd[1707]: time="2025-01-14T13:24:03.157330892Z" level=info msg="CreateContainer within sandbox \"400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 14 13:24:03.201958 containerd[1707]: time="2025-01-14T13:24:03.201912356Z" level=info msg="CreateContainer within sandbox \"400fe5973ababaa4291851249211429cc8af55174db169b81874ac7ef56363af\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a8543b0357432330c5e6b329bad2977552c059ac46062a553c33274ffcb90bc7\"" Jan 14 13:24:03.202561 containerd[1707]: time="2025-01-14T13:24:03.202500275Z" level=info msg="StartContainer for \"a8543b0357432330c5e6b329bad2977552c059ac46062a553c33274ffcb90bc7\"" Jan 14 13:24:03.231899 systemd[1]: Started cri-containerd-a8543b0357432330c5e6b329bad2977552c059ac46062a553c33274ffcb90bc7.scope - libcontainer container a8543b0357432330c5e6b329bad2977552c059ac46062a553c33274ffcb90bc7. 
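By this point the sandbox exists, ghcr.io/flatcar/nginx:latest has been pulled, and the start of the test container has been issued. A minimal sketch of how a harness might wait for default/test-pod-1 (names taken from the log) to reach the Running phase with the official Kubernetes Python client follows; the kubeconfig location, timeout, and poll interval are assumptions.

```python
import time

from kubernetes import client, config

# Pod coordinates taken from the log; everything else is assumed.
NAMESPACE = "default"
POD_NAME = "test-pod-1"

def wait_for_running(timeout_s: int = 120, poll_s: float = 2.0) -> bool:
    # Assumes a local kubeconfig; inside a cluster use config.load_incluster_config().
    config.load_kube_config()
    core = client.CoreV1Api()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        pod = core.read_namespaced_pod(name=POD_NAME, namespace=NAMESPACE)
        print(POD_NAME, "phase:", pod.status.phase)
        if pod.status.phase == "Running":
            return True
        time.sleep(poll_s)
    return False

if __name__ == "__main__":
    raise SystemExit(0 if wait_for_running() else 1)
```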
Jan 14 13:24:03.265972 containerd[1707]: time="2025-01-14T13:24:03.265933257Z" level=info msg="StartContainer for \"a8543b0357432330c5e6b329bad2977552c059ac46062a553c33274ffcb90bc7\" returns successfully"
Jan 14 13:24:03.856899 systemd-networkd[1416]: cali5ec59c6bf6e: Gained IPv6LL
Jan 14 13:24:03.858249 kubelet[2520]: E0114 13:24:03.858190 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:04.858823 kubelet[2520]: E0114 13:24:04.858763 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:05.859978 kubelet[2520]: E0114 13:24:05.859919 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:06.861002 kubelet[2520]: E0114 13:24:06.860947 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:07.861665 kubelet[2520]: E0114 13:24:07.861609 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:08.862300 kubelet[2520]: E0114 13:24:08.862237 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:09.863263 kubelet[2520]: E0114 13:24:09.863207 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:10.863663 kubelet[2520]: E0114 13:24:10.863601 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:11.864233 kubelet[2520]: E0114 13:24:11.864173 2520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
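The kubelet error repeated roughly once per second through this excerpt only indicates that the static pod manifest directory configured for the file source, /etc/kubernetes/manifests, does not exist; on a node that runs no static pods it is harmless. A minimal sketch of quieting it by creating the directory, and optionally dropping a static pod manifest into it, is below; the manifest itself is an illustrative assumption and does not come from this log (only the image name is the one pulled above), and writing to that path needs root on a real node.

```python
import os
import textwrap

# Path taken from the kubelet messages above. Creating it is normally enough to
# stop the "Unable to read config path" message (assumption: the kubelet's
# staticPodPath is this directory, which is what the log suggests).
MANIFEST_DIR = "/etc/kubernetes/manifests"
os.makedirs(MANIFEST_DIR, exist_ok=True)

# Optional: drop a static pod manifest into the directory; the kubelet should
# then run it as a static pod. The pod spec below is purely illustrative.
STATIC_POD = textwrap.dedent("""\
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web
    spec:
      containers:
      - name: web
        image: ghcr.io/flatcar/nginx:latest
        ports:
        - containerPort: 80
    """)

with open(os.path.join(MANIFEST_DIR, "static-web.yaml"), "w", encoding="utf-8") as fh:
    fh.write(STATIC_POD)
```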