Jan 30 13:06:43.162786 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 13:06:43.162824 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:06:43.162839 kernel: BIOS-provided physical RAM map:
Jan 30 13:06:43.162850 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:06:43.162860 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 30 13:06:43.162870 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 30 13:06:43.162882 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 30 13:06:43.162893 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 30 13:06:43.162907 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 30 13:06:43.162919 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 30 13:06:43.162929 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 30 13:06:43.162939 kernel: printk: bootconsole [earlyser0] enabled
Jan 30 13:06:43.162950 kernel: NX (Execute Disable) protection: active
Jan 30 13:06:43.162960 kernel: APIC: Static calls initialized
Jan 30 13:06:43.162977 kernel: efi: EFI v2.7 by Microsoft
Jan 30 13:06:43.162990 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018
Jan 30 13:06:43.163001 kernel: random: crng init done
Jan 30 13:06:43.163013 kernel: secureboot: Secure boot disabled
Jan 30 13:06:43.163026 kernel: SMBIOS 3.1.0 present.
Jan 30 13:06:43.163038 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 30 13:06:43.163051 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 30 13:06:43.163071 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 30 13:06:43.163082 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0
Jan 30 13:06:43.163093 kernel: Hyper-V: Nested features: 0x1e0101
Jan 30 13:06:43.163108 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 30 13:06:43.163120 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 30 13:06:43.163132 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 30 13:06:43.163145 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 30 13:06:43.163159 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 30 13:06:43.163172 kernel: tsc: Detected 2593.906 MHz processor
Jan 30 13:06:43.163184 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:06:43.163196 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:06:43.163208 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 30 13:06:43.163223 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:06:43.163235 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:06:43.163247 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 30 13:06:43.163260 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 30 13:06:43.163272 kernel: Using GB pages for direct mapping
Jan 30 13:06:43.163290 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:06:43.163302 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 30 13:06:43.163319 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:06:43.163335 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:06:43.163349 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 30 13:06:43.163363 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 30 13:06:43.163378 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:06:43.163395 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:06:43.163408 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:06:43.163424 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:06:43.163437 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:06:43.163451 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:06:43.163473 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:06:43.163485 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 30 13:06:43.163497 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 30 13:06:43.163510 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 30 13:06:43.163523 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 30 13:06:43.163537 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 30 13:06:43.163553 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 30 13:06:43.163567 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 30 13:06:43.163581 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 30 13:06:43.163594 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 30 13:06:43.163608 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 30 13:06:43.163622 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:06:43.163635 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:06:43.163648 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 30 13:06:43.163676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 30 13:06:43.163692 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 30 13:06:43.163705 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 30 13:06:43.163718 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 30 13:06:43.163731 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 30 13:06:43.163745 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 30 13:06:43.163758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 30 13:06:43.163771 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 30 13:06:43.163785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 30 13:06:43.163801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 30 13:06:43.163812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 30 13:06:43.163825 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 30 13:06:43.163838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 30 13:06:43.163850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 30 13:06:43.163863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 30 13:06:43.163877 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 30 13:06:43.163890 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 30 13:06:43.163903 kernel: Zone ranges:
Jan 30 13:06:43.163919 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:06:43.163932 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 13:06:43.163945 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 30 13:06:43.163957 kernel: Movable zone start for each node
Jan 30 13:06:43.163969 kernel: Early memory node ranges
Jan 30 13:06:43.163983 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:06:43.163997 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 30 13:06:43.164010 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 30 13:06:43.164024 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 30 13:06:43.164041 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 30 13:06:43.164055 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:06:43.164069 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:06:43.164083 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 30 13:06:43.164097 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 30 13:06:43.164110 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 30 13:06:43.164124 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:06:43.164137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:06:43.164152 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:06:43.164169 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 30 13:06:43.164183 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:06:43.164197 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 30 13:06:43.164211 kernel: Booting paravirtualized kernel on Hyper-V
Jan 30 13:06:43.164224 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:06:43.164238 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:06:43.164253 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:06:43.164266 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:06:43.164280 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:06:43.164297 kernel: Hyper-V: PV spinlocks enabled
Jan 30 13:06:43.164311 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:06:43.164326 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:06:43.164341 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:06:43.164355 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 30 13:06:43.164369 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:06:43.164383 kernel: Fallback order for Node 0: 0
Jan 30 13:06:43.164396 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 30 13:06:43.164414 kernel: Policy zone: Normal
Jan 30 13:06:43.164439 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:06:43.164454 kernel: software IO TLB: area num 2.
Jan 30 13:06:43.164472 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 312164K reserved, 0K cma-reserved)
Jan 30 13:06:43.164487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:06:43.164502 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 13:06:43.164516 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:06:43.164530 kernel: Dynamic Preempt: voluntary
Jan 30 13:06:43.164545 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:06:43.164561 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:06:43.164576 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:06:43.164595 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:06:43.164609 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:06:43.164624 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:06:43.164639 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:06:43.164680 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:06:43.164696 kernel: Using NULL legacy PIC
Jan 30 13:06:43.164714 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 30 13:06:43.164728 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:06:43.164744 kernel: Console: colour dummy device 80x25
Jan 30 13:06:43.164758 kernel: printk: console [tty1] enabled
Jan 30 13:06:43.164773 kernel: printk: console [ttyS0] enabled
Jan 30 13:06:43.164787 kernel: printk: bootconsole [earlyser0] disabled
Jan 30 13:06:43.164801 kernel: ACPI: Core revision 20230628
Jan 30 13:06:43.164816 kernel: Failed to register legacy timer interrupt
Jan 30 13:06:43.164830 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:06:43.164849 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 30 13:06:43.164864 kernel: Hyper-V: Using IPI hypercalls
Jan 30 13:06:43.164878 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 30 13:06:43.164893 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 30 13:06:43.164908 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 30 13:06:43.164922 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 30 13:06:43.164937 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 30 13:06:43.164952 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 30 13:06:43.164967 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jan 30 13:06:43.164986 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:06:43.165000 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:06:43.165015 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:06:43.165030 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:06:43.165043 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:06:43.165058 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:06:43.165073 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 30 13:06:43.165087 kernel: RETBleed: Vulnerable
Jan 30 13:06:43.165101 kernel: Speculative Store Bypass: Vulnerable
Jan 30 13:06:43.165115 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:06:43.165131 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:06:43.165146 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:06:43.165161 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:06:43.165174 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:06:43.165188 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 30 13:06:43.165201 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 30 13:06:43.165215 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 30 13:06:43.165228 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:06:43.165242 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 30 13:06:43.165256 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 30 13:06:43.165270 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 30 13:06:43.165287 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 30 13:06:43.165302 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:06:43.165315 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:06:43.165328 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:06:43.165341 kernel: landlock: Up and running.
Jan 30 13:06:43.165356 kernel: SELinux: Initializing.
Jan 30 13:06:43.165371 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:06:43.165385 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:06:43.165400 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 30 13:06:43.165416 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:06:43.165431 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:06:43.165449 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:06:43.165465 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 30 13:06:43.165479 kernel: signal: max sigframe size: 3632
Jan 30 13:06:43.165495 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:06:43.165510 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:06:43.165525 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:06:43.165540 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:06:43.165555 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:06:43.165569 kernel: .... node #0, CPUs: #1
Jan 30 13:06:43.165588 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 30 13:06:43.165605 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:06:43.165620 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:06:43.165635 kernel: smpboot: Max logical packages: 1
Jan 30 13:06:43.165650 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 30 13:06:43.165690 kernel: devtmpfs: initialized
Jan 30 13:06:43.165717 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:06:43.165745 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 30 13:06:43.165765 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:06:43.165778 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:06:43.165792 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:06:43.165806 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:06:43.165820 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:06:43.165833 kernel: audit: type=2000 audit(1738242401.028:1): state=initialized audit_enabled=0 res=1
Jan 30 13:06:43.165844 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:06:43.165858 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:06:43.165871 kernel: cpuidle: using governor menu
Jan 30 13:06:43.165888 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:06:43.165900 kernel: dca service started, version 1.12.1
Jan 30 13:06:43.165912 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 30 13:06:43.165927 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:06:43.165939 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:06:43.165955 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:06:43.165974 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:06:43.165986 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:06:43.165999 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:06:43.166017 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:06:43.166029 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:06:43.166041 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:06:43.166055 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:06:43.166067 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:06:43.166081 kernel: ACPI: Interpreter enabled
Jan 30 13:06:43.166095 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:06:43.166109 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:06:43.166123 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:06:43.166141 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 30 13:06:43.166155 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 30 13:06:43.166169 kernel: iommu: Default domain type: Translated
Jan 30 13:06:43.166183 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:06:43.166198 kernel: efivars: Registered efivars operations
Jan 30 13:06:43.166212 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:06:43.166226 kernel: PCI: System does not support PCI
Jan 30 13:06:43.166239 kernel: vgaarb: loaded
Jan 30 13:06:43.166253 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 30 13:06:43.166270 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:06:43.166284 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:06:43.166298 kernel: pnp: PnP ACPI init
Jan 30 13:06:43.166313 kernel: pnp: PnP ACPI: found 3 devices
Jan 30 13:06:43.166327 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:06:43.166341 kernel: NET: Registered PF_INET protocol family
Jan 30 13:06:43.166355 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:06:43.166370 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 30 13:06:43.166384 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:06:43.166401 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:06:43.166415 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 13:06:43.166430 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 30 13:06:43.166444 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:06:43.166458 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:06:43.166471 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:06:43.166484 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:06:43.166498 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:06:43.166511 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 13:06:43.166529 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Jan 30 13:06:43.166543 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:06:43.166556 kernel: Initialise system trusted keyrings
Jan 30 13:06:43.166570 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 30 13:06:43.166583 kernel: Key type asymmetric registered
Jan 30 13:06:43.166596 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:06:43.166610 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:06:43.166623 kernel: io scheduler mq-deadline registered
Jan 30 13:06:43.166637 kernel: io scheduler kyber registered
Jan 30 13:06:43.166671 kernel: io scheduler bfq registered
Jan 30 13:06:43.166685 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:06:43.166698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:06:43.166712 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:06:43.166729 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 13:06:43.166742 kernel: i8042: PNP: No PS/2 controller found.
Jan 30 13:06:43.166909 kernel: rtc_cmos 00:02: registered as rtc0
Jan 30 13:06:43.167023 kernel: rtc_cmos 00:02: setting system clock to 2025-01-30T13:06:42 UTC (1738242402)
Jan 30 13:06:43.167133 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 30 13:06:43.167150 kernel: intel_pstate: CPU model not supported
Jan 30 13:06:43.167164 kernel: efifb: probing for efifb
Jan 30 13:06:43.167177 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 30 13:06:43.167191 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 30 13:06:43.167204 kernel: efifb: scrolling: redraw
Jan 30 13:06:43.167218 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 13:06:43.167231 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:06:43.167244 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:06:43.167261 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:06:43.167274 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:06:43.167288 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:06:43.167301 kernel: Segment Routing with IPv6
Jan 30 13:06:43.167314 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:06:43.167327 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:06:43.167340 kernel: Key type dns_resolver registered
Jan 30 13:06:43.167353 kernel: IPI shorthand broadcast: enabled
Jan 30 13:06:43.167366 kernel: sched_clock: Marking stable (912003800, 59411700)->(1224542200, -253126700)
Jan 30 13:06:43.167383 kernel: registered taskstats version 1
Jan 30 13:06:43.167395 kernel: Loading compiled-in X.509 certificates
Jan 30 13:06:43.167408 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 30 13:06:43.167421 kernel: Key type .fscrypt registered
Jan 30 13:06:43.167434 kernel: Key type fscrypt-provisioning registered
Jan 30 13:06:43.167447 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:06:43.167460 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:06:43.167487 kernel: ima: No architecture policies found
Jan 30 13:06:43.167505 kernel: clk: Disabling unused clocks
Jan 30 13:06:43.167517 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 30 13:06:43.167529 kernel: Write protecting the kernel read-only data: 38912k
Jan 30 13:06:43.167543 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 30 13:06:43.167557 kernel: Run /init as init process
Jan 30 13:06:43.167569 kernel: with arguments:
Jan 30 13:06:43.167587 kernel: /init
Jan 30 13:06:43.167599 kernel: with environment:
Jan 30 13:06:43.167612 kernel: HOME=/
Jan 30 13:06:43.167626 kernel: TERM=linux
Jan 30 13:06:43.167644 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:06:43.174248 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:06:43.174272 systemd[1]: Detected virtualization microsoft.
Jan 30 13:06:43.174291 systemd[1]: Detected architecture x86-64.
Jan 30 13:06:43.174305 systemd[1]: Running in initrd.
Jan 30 13:06:43.174320 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:06:43.174334 systemd[1]: Hostname set to .
Jan 30 13:06:43.174357 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:06:43.174372 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:06:43.174388 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:06:43.174403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:06:43.174419 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:06:43.174435 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:06:43.174450 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:06:43.174465 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:06:43.174486 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:06:43.174501 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:06:43.174517 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:06:43.174532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:06:43.174547 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:06:43.174562 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:06:43.174578 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:06:43.174596 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:06:43.174611 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:06:43.174626 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:06:43.174642 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:06:43.174667 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:06:43.174681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:06:43.174697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:06:43.174712 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:06:43.174727 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:06:43.174746 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:06:43.174762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:06:43.174777 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:06:43.174792 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:06:43.174807 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:06:43.174823 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:06:43.174862 systemd-journald[177]: Collecting audit messages is disabled.
Jan 30 13:06:43.174900 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:06:43.174916 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:06:43.174931 systemd-journald[177]: Journal started
Jan 30 13:06:43.174966 systemd-journald[177]: Runtime Journal (/run/log/journal/422c41ca489c4ed586102bce5a489314) is 8.0M, max 158.8M, 150.8M free.
Jan 30 13:06:43.165985 systemd-modules-load[178]: Inserted module 'overlay'
Jan 30 13:06:43.189276 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:06:43.193720 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:06:43.201389 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:06:43.213620 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:06:43.205663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:06:43.218776 kernel: Bridge firewalling registered
Jan 30 13:06:43.218493 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 30 13:06:43.219554 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:06:43.230942 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:06:43.243846 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:06:43.247789 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:06:43.249133 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:06:43.268040 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:06:43.278236 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:06:43.285080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:06:43.292336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:06:43.301897 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:06:43.309807 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:06:43.316321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:06:43.325183 dracut-cmdline[208]: dracut-dracut-053
Jan 30 13:06:43.327725 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:06:43.354602 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:06:43.387627 systemd-resolved[213]: Positive Trust Anchors:
Jan 30 13:06:43.387646 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:06:43.387715 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:06:43.414301 systemd-resolved[213]: Defaulting to hostname 'linux'.
Jan 30 13:06:43.415512 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:06:43.418565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:06:43.434671 kernel: SCSI subsystem initialized
Jan 30 13:06:43.444669 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:06:43.455672 kernel: iscsi: registered transport (tcp)
Jan 30 13:06:43.477235 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:06:43.477291 kernel: QLogic iSCSI HBA Driver
Jan 30 13:06:43.512690 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:06:43.520797 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:06:43.548692 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:06:43.549612 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:06:43.554940 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:06:43.595676 kernel: raid6: avx512x4 gen() 16703 MB/s
Jan 30 13:06:43.614674 kernel: raid6: avx512x2 gen() 18224 MB/s
Jan 30 13:06:43.633667 kernel: raid6: avx512x1 gen() 17497 MB/s
Jan 30 13:06:43.652670 kernel: raid6: avx2x4 gen() 16066 MB/s
Jan 30 13:06:43.671670 kernel: raid6: avx2x2 gen() 17970 MB/s
Jan 30 13:06:43.692038 kernel: raid6: avx2x1 gen() 13417 MB/s
Jan 30 13:06:43.692110 kernel: raid6: using algorithm avx512x2 gen() 18224 MB/s
Jan 30 13:06:43.712598 kernel: raid6: .... xor() 25993 MB/s, rmw enabled
Jan 30 13:06:43.712692 kernel: raid6: using avx512x2 recovery algorithm
Jan 30 13:06:43.735692 kernel: xor: automatically using best checksumming function   avx
Jan 30 13:06:43.885685 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:06:43.895185 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:06:43.907883 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:06:43.928561 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 30 13:06:43.937077 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:06:43.951375 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:06:43.966907 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation
Jan 30 13:06:43.994455 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:06:44.003766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:06:44.045702 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:06:44.056850 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:06:44.078321 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:06:44.086498 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:06:44.095994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:06:44.106708 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:06:44.119876 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:06:44.136369 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:06:44.153159 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:06:44.170239 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:06:44.170370 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:06:44.188441 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:06:44.188473 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:06:44.188491 kernel: hv_vmbus: Vmbus version:5.2
Jan 30 13:06:44.188739 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:06:44.195001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:06:44.195229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:06:44.200254 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:06:44.212969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:06:44.236715 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 30 13:06:44.250538 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 13:06:44.250595 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 30 13:06:44.250615 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 13:06:44.250964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:06:44.251096 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:06:44.265086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:06:44.274019 kernel: hv_vmbus: registering driver hv_netvsc
Jan 30 13:06:44.282106 kernel: PTP clock support registered
Jan 30 13:06:44.282156 kernel: hv_vmbus: registering driver hv_storvsc
Jan 30 13:06:44.286673 kernel: scsi host0: storvsc_host_t
Jan 30 13:06:44.286923 kernel: scsi host1: storvsc_host_t
Jan 30 13:06:44.291934 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Jan 30 13:06:44.303678 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
Jan 30 13:06:44.307867 kernel: hv_utils: Registering HyperV Utility Driver
Jan 30 13:06:44.307927 kernel: hv_vmbus: registering driver hv_utils
Jan 30 13:06:44.311482 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:06:44.319303 kernel: hv_utils: Heartbeat IC version 3.0
Jan 30 13:06:44.319373 kernel: hv_utils: Shutdown IC version 3.2
Jan 30 13:06:44.320688 kernel: hv_utils: TimeSync IC version 4.0
Jan 30 13:06:44.751795 systemd-resolved[213]: Clock change detected. Flushing caches.
Jan 30 13:06:44.760042 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:06:44.765577 kernel: hv_vmbus: registering driver hid_hyperv
Jan 30 13:06:44.771295 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 30 13:06:44.770169 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:06:44.779106 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 30 13:06:44.787460 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 30 13:06:44.790055 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:06:44.790079 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 30 13:06:44.806354 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:06:44.825660 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 30 13:06:44.846078 kernel: hv_netvsc 00224840-a3de-0022-4840-a3de00224840 eth0: VF slot 1 added
Jan 30 13:06:44.846271 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 30 13:06:44.846437 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 13:06:44.846593 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 30 13:06:44.846747 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 30 13:06:44.846914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:06:44.846933 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 13:06:44.856346 kernel: hv_vmbus: registering driver hv_pci
Jan 30 13:06:44.862634 kernel: hv_pci d3aff04e-b85a-4cd8-8bda-603eff4db421: PCI VMBus probing: Using version 0x10004
Jan 30 13:06:44.906234 kernel: hv_pci d3aff04e-b85a-4cd8-8bda-603eff4db421: PCI host bridge to bus b85a:00
Jan 30 13:06:44.906432 kernel: pci_bus b85a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 30 13:06:44.906609 kernel: pci_bus b85a:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 30 13:06:44.907556 kernel: pci b85a:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 30 13:06:44.907747 kernel: pci b85a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 30 13:06:44.907924 kernel: pci b85a:00:02.0: enabling Extended Tags
Jan 30 13:06:44.908121 kernel: pci b85a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b85a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 30 13:06:44.908362 kernel: pci_bus b85a:00: busn_res: [bus 00-ff] end is updated to 00
Jan 30 13:06:44.908557 kernel: pci b85a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 30 13:06:45.067058 kernel: mlx5_core b85a:00:02.0: enabling device (0000 -> 0002)
Jan 30 13:06:45.300225 kernel: mlx5_core b85a:00:02.0: firmware version: 14.30.5000
Jan 30 13:06:45.300439 kernel: hv_netvsc 00224840-a3de-0022-4840-a3de00224840 eth0: VF registering: eth1
Jan 30 13:06:45.300604 kernel: mlx5_core b85a:00:02.0 eth1: joined to eth0
Jan 30 13:06:45.301063 kernel: mlx5_core b85a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 30 13:06:45.309213 kernel: mlx5_core b85a:00:02.0 enP47194s1: renamed from eth1
Jan 30 13:06:45.349501 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 30 13:06:45.431378 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (452)
Jan 30 13:06:45.451946 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 30 13:06:45.458063 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 30 13:06:45.468893 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (468)
Jan 30 13:06:45.478194 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 30 13:06:45.489547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 13:06:45.497259 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:06:45.511465 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:06:45.517016 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:06:46.524978 disk-uuid[605]: The operation has completed successfully.
Jan 30 13:06:46.532166 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:06:46.607401 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:06:46.607530 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:06:46.636224 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:06:46.644023 sh[691]: Success
Jan 30 13:06:46.675017 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:06:46.875766 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:06:46.889116 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:06:46.894051 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:06:46.913417 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58
Jan 30 13:06:46.913566 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:06:46.919417 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:06:46.922453 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:06:46.925225 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:06:47.340911 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:06:47.344415 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:06:47.356208 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:06:47.361148 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:06:47.384265 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:06:47.384309 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:06:47.384329 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:06:47.405019 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:06:47.414854 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:06:47.421181 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:06:47.426437 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:06:47.438182 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:06:47.459747 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:06:47.472126 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:06:47.491504 systemd-networkd[875]: lo: Link UP
Jan 30 13:06:47.491514 systemd-networkd[875]: lo: Gained carrier
Jan 30 13:06:47.494205 systemd-networkd[875]: Enumeration completed
Jan 30 13:06:47.494445 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:06:47.496620 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:06:47.496623 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:06:47.500469 systemd[1]: Reached target network.target - Network.
Jan 30 13:06:47.567013 kernel: mlx5_core b85a:00:02.0 enP47194s1: Link up
Jan 30 13:06:47.597015 kernel: hv_netvsc 00224840-a3de-0022-4840-a3de00224840 eth0: Data path switched to VF: enP47194s1
Jan 30 13:06:47.597337 systemd-networkd[875]: enP47194s1: Link UP
Jan 30 13:06:47.600156 systemd-networkd[875]: eth0: Link UP
Jan 30 13:06:47.601454 systemd-networkd[875]: eth0: Gained carrier
Jan 30 13:06:47.601465 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:06:47.605180 systemd-networkd[875]: enP47194s1: Gained carrier
Jan 30 13:06:47.647049 systemd-networkd[875]: eth0: DHCPv4 address 10.200.4.27/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 30 13:06:48.330921 ignition[838]: Ignition 2.20.0
Jan 30 13:06:48.330933 ignition[838]: Stage: fetch-offline
Jan 30 13:06:48.330974 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:06:48.330984 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:06:48.331121 ignition[838]: parsed url from cmdline: ""
Jan 30 13:06:48.331126 ignition[838]: no config URL provided
Jan 30 13:06:48.331133 ignition[838]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:06:48.331143 ignition[838]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:06:48.331154 ignition[838]: failed to fetch config: resource requires networking
Jan 30 13:06:48.333162 ignition[838]: Ignition finished successfully
Jan 30 13:06:48.353230 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:06:48.360281 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:06:48.376598 ignition[884]: Ignition 2.20.0
Jan 30 13:06:48.376610 ignition[884]: Stage: fetch
Jan 30 13:06:48.376811 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:06:48.376824 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:06:48.376945 ignition[884]: parsed url from cmdline: ""
Jan 30 13:06:48.376948 ignition[884]: no config URL provided
Jan 30 13:06:48.376953 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:06:48.376961 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:06:48.376984 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 30 13:06:48.452377 ignition[884]: GET result: OK
Jan 30 13:06:48.452496 ignition[884]: config has been read from IMDS userdata
Jan 30 13:06:48.452520 ignition[884]: parsing config with SHA512: 267094d5b35872679a63173082bd870208c0beb113eb346f95d97b4470425657d6786b4a3fc385f2b0c140e2fbe4aa7e8d4571eca91ca8a494d1084f39a2aae4
Jan 30 13:06:48.459838 unknown[884]: fetched base config from "system"
Jan 30 13:06:48.459852 unknown[884]: fetched base config from "system"
Jan 30 13:06:48.460214 ignition[884]: fetch: fetch complete
Jan 30 13:06:48.459862 unknown[884]: fetched user config from "azure"
Jan 30 13:06:48.460219 ignition[884]: fetch: fetch passed
Jan 30 13:06:48.461770 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:06:48.460266 ignition[884]: Ignition finished successfully
Jan 30 13:06:48.477330 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:06:48.492068 ignition[890]: Ignition 2.20.0
Jan 30 13:06:48.492080 ignition[890]: Stage: kargs
Jan 30 13:06:48.492286 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:06:48.492299 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:06:48.495706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:06:48.492944 ignition[890]: kargs: kargs passed
Jan 30 13:06:48.492988 ignition[890]: Ignition finished successfully
Jan 30 13:06:48.509766 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:06:48.524072 ignition[897]: Ignition 2.20.0
Jan 30 13:06:48.524083 ignition[897]: Stage: disks
Jan 30 13:06:48.525850 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:06:48.524300 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:06:48.524314 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:06:48.524959 ignition[897]: disks: disks passed
Jan 30 13:06:48.539015 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:06:48.525053 ignition[897]: Ignition finished successfully
Jan 30 13:06:48.544010 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:06:48.547017 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:06:48.551008 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:06:48.551884 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:06:48.560313 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:06:48.618937 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 30 13:06:48.627756 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:06:48.638074 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:06:48.730014 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none.
Jan 30 13:06:48.730554 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:06:48.733412 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:06:48.734700 systemd-networkd[875]: eth0: Gained IPv6LL
Jan 30 13:06:48.775136 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:06:48.781073 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:06:48.791009 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916)
Jan 30 13:06:48.791214 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:06:48.799519 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:06:48.812483 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:06:48.812517 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:06:48.812543 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:06:48.801527 systemd-networkd[875]: enP47194s1: Gained IPv6LL
Jan 30 13:06:48.816352 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:06:48.802008 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:06:48.823919 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:06:48.826245 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:06:48.839177 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:06:49.448407 coreos-metadata[918]: Jan 30 13:06:49.448 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:06:49.453861 coreos-metadata[918]: Jan 30 13:06:49.453 INFO Fetch successful
Jan 30 13:06:49.456865 coreos-metadata[918]: Jan 30 13:06:49.454 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:06:49.463177 coreos-metadata[918]: Jan 30 13:06:49.463 INFO Fetch successful
Jan 30 13:06:49.477160 coreos-metadata[918]: Jan 30 13:06:49.477 INFO wrote hostname ci-4186.1.0-a-551420da85 to /sysroot/etc/hostname
Jan 30 13:06:49.481855 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:06:49.488922 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:06:49.509797 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:06:49.517139 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:06:49.524418 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:06:50.310070 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:06:50.322140 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:06:50.329170 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:06:50.337016 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:06:50.337534 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:06:50.365095 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:06:50.369489 ignition[1037]: INFO : Ignition 2.20.0
Jan 30 13:06:50.369489 ignition[1037]: INFO : Stage: mount
Jan 30 13:06:50.369489 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:06:50.369489 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:06:50.369489 ignition[1037]: INFO : mount: mount passed
Jan 30 13:06:50.369489 ignition[1037]: INFO : Ignition finished successfully
Jan 30 13:06:50.380089 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:06:50.393148 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:06:50.399939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:06:50.427027 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1048)
Jan 30 13:06:50.431008 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:06:50.431045 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:06:50.435576 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:06:50.440007 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:06:50.441919 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:06:50.476123 ignition[1065]: INFO : Ignition 2.20.0
Jan 30 13:06:50.476123 ignition[1065]: INFO : Stage: files
Jan 30 13:06:50.480378 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:06:50.480378 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:06:50.480378 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:06:50.480378 ignition[1065]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 30 13:06:50.480378 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:06:50.585818 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:06:50.592361 ignition[1065]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 30 13:06:50.592361 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:06:50.589108 unknown[1065]: wrote ssh authorized keys file for user: core
Jan 30 13:06:50.618671 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Jan 30 13:06:50.625138 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:06:50.639547 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:06:50.644108 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:06:50.644108 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:06:50.644108 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:06:50.644108 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:06:50.644108 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 13:06:51.168576 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 30 13:06:51.379507 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:06:51.379507 ignition[1065]: INFO : files: createResultFile: createFiles: op(7): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:06:51.379507 ignition[1065]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:06:51.379507 ignition[1065]: INFO : files: files passed
Jan 30 13:06:51.379507 ignition[1065]: INFO : Ignition finished successfully
Jan 30 13:06:51.400079 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:06:51.407179 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:06:51.414007 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:06:51.420081 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:06:51.420196 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:06:51.433877 initrd-setup-root-after-ignition[1093]: grep:
Jan 30 13:06:51.433877 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:06:51.445775 initrd-setup-root-after-ignition[1093]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:06:51.445775 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:06:51.434929 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:06:51.437585 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:06:51.449303 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:06:51.478083 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:06:51.478203 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:06:51.488763 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:06:51.491379 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:06:51.498836 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:06:51.507216 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:06:51.521383 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:06:51.531175 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:06:51.543939 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:06:51.545078 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:06:51.545483 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:06:51.545885 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:06:51.545986 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:06:51.546692 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:06:51.547562 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:06:51.547977 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:06:51.548398 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:06:51.548803 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:06:51.549370 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:06:51.549765 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:06:51.550199 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:06:51.550592 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:06:51.551232 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:06:51.551608 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:06:51.551734 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:06:51.552448 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:06:51.552890 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:06:51.657257 ignition[1117]: INFO : Ignition 2.20.0 Jan 30 13:06:51.657257 ignition[1117]: INFO : Stage: umount Jan 30 13:06:51.657257 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:51.657257 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:51.657257 ignition[1117]: INFO : umount: umount passed Jan 30 13:06:51.657257 ignition[1117]: INFO : Ignition finished successfully Jan 30 13:06:51.553262 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:06:51.590223 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:06:51.596145 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:06:51.596309 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:06:51.601498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:06:51.601658 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:06:51.606225 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:06:51.606369 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:06:51.611386 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:06:51.611534 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:06:51.627249 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:06:51.635121 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:06:51.635463 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:06:51.644550 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:06:51.649614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 30 13:06:51.649798 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:06:51.662361 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:06:51.662501 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:06:51.670693 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:06:51.672082 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:06:51.679394 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:06:51.679495 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:06:51.684633 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:06:51.684679 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:06:51.694717 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:06:51.694770 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:06:51.759440 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:06:51.759537 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:06:51.764316 systemd[1]: Stopped target network.target - Network. Jan 30 13:06:51.769254 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:06:51.769335 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:06:51.778029 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:06:51.782408 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:06:51.788341 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:06:51.792261 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:06:51.801207 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:06:51.805702 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 30 13:06:51.805763 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:06:51.810059 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:06:51.810106 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:06:51.814791 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:06:51.814848 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:06:51.819221 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:06:51.819276 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:06:51.828385 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:06:51.837151 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:06:51.848032 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:06:51.849043 systemd-networkd[875]: eth0: DHCPv6 lease lost Jan 30 13:06:51.852272 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:06:51.854538 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:06:51.860250 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:06:51.860333 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:06:51.874168 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:06:51.876516 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:06:51.876610 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:06:51.879828 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:06:51.881288 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:06:51.881424 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:06:51.891938 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 30 13:06:51.892060 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:06:51.908571 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:06:51.908625 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:06:51.913292 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:06:51.913343 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:06:51.931691 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:06:51.931873 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:06:51.941283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:06:51.943787 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:06:51.944683 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:06:51.944715 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:06:51.945087 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:06:51.945128 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:06:51.946113 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:06:51.946149 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:06:51.946910 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:06:51.946946 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:52.002213 kernel: hv_netvsc 00224840-a3de-0022-4840-a3de00224840 eth0: Data path switched from VF: enP47194s1 Jan 30 13:06:51.965082 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:06:51.973144 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 30 13:06:51.973209 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:06:51.979867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:06:51.979914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:51.995478 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:06:51.995707 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:06:52.031364 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:06:52.031589 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:06:52.581580 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:06:52.581739 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:06:52.585128 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:06:52.592782 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:06:52.592854 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:06:52.609139 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:06:52.618197 systemd[1]: Switching root. 
Jan 30 13:06:52.683172 systemd-journald[177]: Journal stopped Jan 30 13:06:43.162786 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 30 13:06:43.162824 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:06:43.162839 kernel: BIOS-provided physical RAM map: Jan 30 13:06:43.162850 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 30 13:06:43.162860 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 30 13:06:43.162870 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 30 13:06:43.162882 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Jan 30 13:06:43.162893 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 30 13:06:43.162907 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 30 13:06:43.162919 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 30 13:06:43.162929 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 30 13:06:43.162939 kernel: printk: bootconsole [earlyser0] enabled Jan 30 13:06:43.162950 kernel: NX (Execute Disable) protection: active Jan 30 13:06:43.162960 kernel: APIC: Static calls initialized Jan 30 13:06:43.162977 kernel: efi: EFI v2.7 by Microsoft Jan 30 13:06:43.162990 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018 Jan 30 13:06:43.163001 
kernel: random: crng init done Jan 30 13:06:43.163013 kernel: secureboot: Secure boot disabled Jan 30 13:06:43.163026 kernel: SMBIOS 3.1.0 present. Jan 30 13:06:43.163038 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 30 13:06:43.163051 kernel: Hypervisor detected: Microsoft Hyper-V Jan 30 13:06:43.163071 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 30 13:06:43.163082 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Jan 30 13:06:43.163093 kernel: Hyper-V: Nested features: 0x1e0101 Jan 30 13:06:43.163108 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 30 13:06:43.163120 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 30 13:06:43.163132 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 30 13:06:43.163145 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 30 13:06:43.163159 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 30 13:06:43.163172 kernel: tsc: Detected 2593.906 MHz processor Jan 30 13:06:43.163184 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:06:43.163196 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:06:43.163208 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 30 13:06:43.163223 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 30 13:06:43.163235 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:06:43.163247 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 30 13:06:43.163260 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 30 13:06:43.163272 kernel: Using GB pages for direct mapping Jan 30 13:06:43.163290 kernel: ACPI: Early table checksum verification disabled Jan 30 13:06:43.163302 kernel: ACPI: 
RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 30 13:06:43.163319 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:43.163335 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:43.163349 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 30 13:06:43.163363 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 30 13:06:43.163378 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:43.163395 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:43.163408 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:43.163424 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:43.163437 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:43.163451 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:43.163473 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:43.163485 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 30 13:06:43.163497 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 30 13:06:43.163510 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 30 13:06:43.163523 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 30 13:06:43.163537 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 30 13:06:43.163553 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 30 13:06:43.163567 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 30 13:06:43.163581 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Jan 30 13:06:43.163594 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 30 13:06:43.163608 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 30 13:06:43.163622 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:06:43.163635 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:06:43.163648 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 30 13:06:43.163676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 30 13:06:43.163692 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 30 13:06:43.163705 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 30 13:06:43.163718 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 30 13:06:43.163731 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 30 13:06:43.163745 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 30 13:06:43.163758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 30 13:06:43.163771 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 30 13:06:43.163785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 30 13:06:43.163801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 30 13:06:43.163812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 30 13:06:43.163825 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 30 13:06:43.163838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 30 13:06:43.163850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 30 13:06:43.163863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 30 13:06:43.163877 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 30 13:06:43.163890 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 30 13:06:43.163903 kernel: Zone ranges: Jan 30 13:06:43.163919 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:06:43.163932 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:06:43.163945 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:06:43.163957 kernel: Movable zone start for each node Jan 30 13:06:43.163969 kernel: Early memory node ranges Jan 30 13:06:43.163983 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 30 13:06:43.163997 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 30 13:06:43.164010 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 30 13:06:43.164024 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:06:43.164041 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 30 13:06:43.164055 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:06:43.164069 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 30 13:06:43.164083 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 30 13:06:43.164097 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 30 13:06:43.164110 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 30 13:06:43.164124 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:06:43.164137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:06:43.164152 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:06:43.164169 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 30 13:06:43.164183 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:06:43.164197 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 30 13:06:43.164211 kernel: Booting paravirtualized kernel on Hyper-V Jan 30 13:06:43.164224 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:06:43.164238 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:06:43.164253 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:06:43.164266 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:06:43.164280 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:06:43.164297 kernel: Hyper-V: PV spinlocks enabled Jan 30 13:06:43.164311 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:06:43.164326 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:06:43.164341 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:06:43.164355 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:06:43.164369 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:06:43.164383 kernel: Fallback order for Node 0: 0 Jan 30 13:06:43.164396 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 30 13:06:43.164414 kernel: Policy zone: Normal Jan 30 13:06:43.164439 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:06:43.164454 kernel: software IO TLB: area num 2. 
Jan 30 13:06:43.164472 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 312164K reserved, 0K cma-reserved) Jan 30 13:06:43.164487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:06:43.164502 kernel: ftrace: allocating 37893 entries in 149 pages Jan 30 13:06:43.164516 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:06:43.164530 kernel: Dynamic Preempt: voluntary Jan 30 13:06:43.164545 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:06:43.164561 kernel: rcu: RCU event tracing is enabled. Jan 30 13:06:43.164576 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:06:43.164595 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:06:43.164609 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:06:43.164624 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:06:43.164639 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:06:43.164680 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:06:43.164696 kernel: Using NULL legacy PIC Jan 30 13:06:43.164714 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 30 13:06:43.164728 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 30 13:06:43.164744 kernel: Console: colour dummy device 80x25 Jan 30 13:06:43.164758 kernel: printk: console [tty1] enabled Jan 30 13:06:43.164773 kernel: printk: console [ttyS0] enabled Jan 30 13:06:43.164787 kernel: printk: bootconsole [earlyser0] disabled Jan 30 13:06:43.164801 kernel: ACPI: Core revision 20230628 Jan 30 13:06:43.164816 kernel: Failed to register legacy timer interrupt Jan 30 13:06:43.164830 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:06:43.164849 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 30 13:06:43.164864 kernel: Hyper-V: Using IPI hypercalls Jan 30 13:06:43.164878 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 30 13:06:43.164893 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 30 13:06:43.164908 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 30 13:06:43.164922 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 30 13:06:43.164937 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 30 13:06:43.164952 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 30 13:06:43.164967 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jan 30 13:06:43.164986 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:06:43.165000 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:06:43.165015 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:06:43.165030 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:06:43.165043 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:06:43.165058 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:06:43.165073 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 30 13:06:43.165087 kernel: RETBleed: Vulnerable Jan 30 13:06:43.165101 kernel: Speculative Store Bypass: Vulnerable Jan 30 13:06:43.165115 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:06:43.165131 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:06:43.165146 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:06:43.165161 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:06:43.165174 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:06:43.165188 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 30 13:06:43.165201 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 30 13:06:43.165215 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 30 13:06:43.165228 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:06:43.165242 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 30 13:06:43.165256 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 30 13:06:43.165270 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 30 13:06:43.165287 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 30 13:06:43.165302 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:06:43.165315 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:06:43.165328 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:06:43.165341 kernel: landlock: Up and running. Jan 30 13:06:43.165356 kernel: SELinux: Initializing. 
Jan 30 13:06:43.165371 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:06:43.165385 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:06:43.165400 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 30 13:06:43.165416 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:06:43.165431 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:06:43.165449 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:06:43.165465 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 30 13:06:43.165479 kernel: signal: max sigframe size: 3632 Jan 30 13:06:43.165495 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:06:43.165510 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:06:43.165525 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:06:43.165540 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:06:43.165555 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:06:43.165569 kernel: .... node #0, CPUs: #1 Jan 30 13:06:43.165588 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 30 13:06:43.165605 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:06:43.165620 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:06:43.165635 kernel: smpboot: Max logical packages: 1 Jan 30 13:06:43.165650 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 30 13:06:43.165690 kernel: devtmpfs: initialized Jan 30 13:06:43.165717 kernel: x86/mm: Memory block size: 128MB Jan 30 13:06:43.165745 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 30 13:06:43.165765 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:06:43.165778 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:06:43.165792 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:06:43.165806 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:06:43.165820 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:06:43.165833 kernel: audit: type=2000 audit(1738242401.028:1): state=initialized audit_enabled=0 res=1 Jan 30 13:06:43.165844 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:06:43.165858 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:06:43.165871 kernel: cpuidle: using governor menu Jan 30 13:06:43.165888 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:06:43.165900 kernel: dca service started, version 1.12.1 Jan 30 13:06:43.165912 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 30 13:06:43.165927 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:06:43.165939 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:06:43.165955 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:06:43.165974 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:06:43.165986 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:06:43.165999 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:06:43.166017 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:06:43.166029 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:06:43.166041 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:06:43.166055 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:06:43.166067 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:06:43.166081 kernel: ACPI: Interpreter enabled Jan 30 13:06:43.166095 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:06:43.166109 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:06:43.166123 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:06:43.166141 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:06:43.166155 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 30 13:06:43.166169 kernel: iommu: Default domain type: Translated Jan 30 13:06:43.166183 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:06:43.166198 kernel: efivars: Registered efivars operations Jan 30 13:06:43.166212 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:06:43.166226 kernel: PCI: System does not support PCI Jan 30 13:06:43.166239 kernel: vgaarb: loaded Jan 30 13:06:43.166253 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 30 13:06:43.166270 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:06:43.166284 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:06:43.166298 kernel: pnp: PnP ACPI init Jan 30 13:06:43.166313 
kernel: pnp: PnP ACPI: found 3 devices Jan 30 13:06:43.166327 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:06:43.166341 kernel: NET: Registered PF_INET protocol family Jan 30 13:06:43.166355 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:06:43.166370 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 30 13:06:43.166384 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:06:43.166401 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:06:43.166415 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:06:43.166430 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 30 13:06:43.166444 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:06:43.166458 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:06:43.166471 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:06:43.166484 kernel: NET: Registered PF_XDP protocol family Jan 30 13:06:43.166498 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:06:43.166511 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:06:43.166529 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Jan 30 13:06:43.166543 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:06:43.166556 kernel: Initialise system trusted keyrings Jan 30 13:06:43.166570 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 30 13:06:43.166583 kernel: Key type asymmetric registered Jan 30 13:06:43.166596 kernel: Asymmetric key parser 'x509' registered Jan 30 13:06:43.166610 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:06:43.166623 kernel: io scheduler mq-deadline 
registered Jan 30 13:06:43.166637 kernel: io scheduler kyber registered Jan 30 13:06:43.166671 kernel: io scheduler bfq registered Jan 30 13:06:43.166685 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:06:43.166698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:06:43.166712 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:06:43.166729 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:06:43.166742 kernel: i8042: PNP: No PS/2 controller found. Jan 30 13:06:43.166909 kernel: rtc_cmos 00:02: registered as rtc0 Jan 30 13:06:43.167023 kernel: rtc_cmos 00:02: setting system clock to 2025-01-30T13:06:42 UTC (1738242402) Jan 30 13:06:43.167133 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 30 13:06:43.167150 kernel: intel_pstate: CPU model not supported Jan 30 13:06:43.167164 kernel: efifb: probing for efifb Jan 30 13:06:43.167177 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 30 13:06:43.167191 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 30 13:06:43.167204 kernel: efifb: scrolling: redraw Jan 30 13:06:43.167218 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 30 13:06:43.167231 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:06:43.167244 kernel: fb0: EFI VGA frame buffer device Jan 30 13:06:43.167261 kernel: pstore: Using crash dump compression: deflate Jan 30 13:06:43.167274 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:06:43.167288 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:06:43.167301 kernel: Segment Routing with IPv6 Jan 30 13:06:43.167314 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:06:43.167327 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:06:43.167340 kernel: Key type dns_resolver registered Jan 30 13:06:43.167353 kernel: IPI shorthand broadcast: enabled Jan 30 13:06:43.167366 kernel: 
sched_clock: Marking stable (912003800, 59411700)->(1224542200, -253126700) Jan 30 13:06:43.167383 kernel: registered taskstats version 1 Jan 30 13:06:43.167395 kernel: Loading compiled-in X.509 certificates Jan 30 13:06:43.167408 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:06:43.167421 kernel: Key type .fscrypt registered Jan 30 13:06:43.167434 kernel: Key type fscrypt-provisioning registered Jan 30 13:06:43.167447 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:06:43.167460 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:06:43.167487 kernel: ima: No architecture policies found Jan 30 13:06:43.167505 kernel: clk: Disabling unused clocks Jan 30 13:06:43.167517 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:06:43.167529 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:06:43.167543 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 13:06:43.167557 kernel: Run /init as init process Jan 30 13:06:43.167569 kernel: with arguments: Jan 30 13:06:43.167587 kernel: /init Jan 30 13:06:43.167599 kernel: with environment: Jan 30 13:06:43.167612 kernel: HOME=/ Jan 30 13:06:43.167626 kernel: TERM=linux Jan 30 13:06:43.167644 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:06:43.174248 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:06:43.174272 systemd[1]: Detected virtualization microsoft. Jan 30 13:06:43.174291 systemd[1]: Detected architecture x86-64. Jan 30 13:06:43.174305 systemd[1]: Running in initrd. Jan 30 13:06:43.174320 systemd[1]: No hostname configured, using default hostname. 
Jan 30 13:06:43.174334 systemd[1]: Hostname set to . Jan 30 13:06:43.174357 systemd[1]: Initializing machine ID from random generator. Jan 30 13:06:43.174372 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:06:43.174388 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:06:43.174403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:06:43.174419 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:06:43.174435 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:06:43.174450 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:06:43.174465 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:06:43.174486 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:06:43.174501 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:06:43.174517 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:06:43.174532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:06:43.174547 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:06:43.174562 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:06:43.174578 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:06:43.174596 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:06:43.174611 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:06:43.174626 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 30 13:06:43.174642 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:06:43.174667 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:06:43.174681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:06:43.174697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:06:43.174712 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:06:43.174727 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:06:43.174746 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:06:43.174762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:06:43.174777 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:06:43.174792 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:06:43.174807 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:06:43.174823 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:06:43.174862 systemd-journald[177]: Collecting audit messages is disabled. Jan 30 13:06:43.174900 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:43.174916 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:06:43.174931 systemd-journald[177]: Journal started Jan 30 13:06:43.174966 systemd-journald[177]: Runtime Journal (/run/log/journal/422c41ca489c4ed586102bce5a489314) is 8.0M, max 158.8M, 150.8M free. Jan 30 13:06:43.165985 systemd-modules-load[178]: Inserted module 'overlay' Jan 30 13:06:43.189276 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:06:43.193720 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:06:43.201389 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 30 13:06:43.213620 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:06:43.205663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:43.218776 kernel: Bridge firewalling registered Jan 30 13:06:43.218493 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 30 13:06:43.219554 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:06:43.230942 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:06:43.243846 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:06:43.247789 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:06:43.249133 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:06:43.268040 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:06:43.278236 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:06:43.285080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:43.292336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:06:43.301897 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:06:43.309807 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:06:43.316321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 30 13:06:43.325183 dracut-cmdline[208]: dracut-dracut-053 Jan 30 13:06:43.327725 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:06:43.354602 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:06:43.387627 systemd-resolved[213]: Positive Trust Anchors: Jan 30 13:06:43.387646 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:06:43.387715 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:06:43.414301 systemd-resolved[213]: Defaulting to hostname 'linux'. Jan 30 13:06:43.415512 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:06:43.418565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:06:43.434671 kernel: SCSI subsystem initialized Jan 30 13:06:43.444669 kernel: Loading iSCSI transport class v2.0-870. 
Jan 30 13:06:43.455672 kernel: iscsi: registered transport (tcp) Jan 30 13:06:43.477235 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:06:43.477291 kernel: QLogic iSCSI HBA Driver Jan 30 13:06:43.512690 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:06:43.520797 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:06:43.548692 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:06:43.549612 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:06:43.554940 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:06:43.595676 kernel: raid6: avx512x4 gen() 16703 MB/s Jan 30 13:06:43.614674 kernel: raid6: avx512x2 gen() 18224 MB/s Jan 30 13:06:43.633667 kernel: raid6: avx512x1 gen() 17497 MB/s Jan 30 13:06:43.652670 kernel: raid6: avx2x4 gen() 16066 MB/s Jan 30 13:06:43.671670 kernel: raid6: avx2x2 gen() 17970 MB/s Jan 30 13:06:43.692038 kernel: raid6: avx2x1 gen() 13417 MB/s Jan 30 13:06:43.692110 kernel: raid6: using algorithm avx512x2 gen() 18224 MB/s Jan 30 13:06:43.712598 kernel: raid6: .... xor() 25993 MB/s, rmw enabled Jan 30 13:06:43.712692 kernel: raid6: using avx512x2 recovery algorithm Jan 30 13:06:43.735692 kernel: xor: automatically using best checksumming function avx Jan 30 13:06:43.885685 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:06:43.895185 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:06:43.907883 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:06:43.928561 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 30 13:06:43.937077 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:06:43.951375 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 30 13:06:43.966907 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Jan 30 13:06:43.994455 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:06:44.003766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:06:44.045702 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:06:44.056850 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:06:44.078321 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:06:44.086498 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:06:44.095994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:06:44.106708 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:06:44.119876 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:06:44.136369 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:06:44.153159 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:06:44.170239 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:06:44.170370 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:44.188441 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:06:44.188473 kernel: AES CTR mode by8 optimization enabled Jan 30 13:06:44.188491 kernel: hv_vmbus: Vmbus version:5.2 Jan 30 13:06:44.188739 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:06:44.195001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:06:44.195229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:44.200254 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 13:06:44.212969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:44.236715 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 13:06:44.250538 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:06:44.250595 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 30 13:06:44.250615 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:06:44.250964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:06:44.251096 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:44.265086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:44.274019 kernel: hv_vmbus: registering driver hv_netvsc Jan 30 13:06:44.282106 kernel: PTP clock support registered Jan 30 13:06:44.282156 kernel: hv_vmbus: registering driver hv_storvsc Jan 30 13:06:44.286673 kernel: scsi host0: storvsc_host_t Jan 30 13:06:44.286923 kernel: scsi host1: storvsc_host_t Jan 30 13:06:44.291934 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 30 13:06:44.303678 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 30 13:06:44.307867 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 13:06:44.307927 kernel: hv_vmbus: registering driver hv_utils Jan 30 13:06:44.311482 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:44.319303 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 13:06:44.319373 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 13:06:44.320688 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 13:06:44.751795 systemd-resolved[213]: Clock change detected. Flushing caches. 
Jan 30 13:06:44.760042 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:06:44.765577 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 13:06:44.771295 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 30 13:06:44.770169 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:06:44.779106 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 13:06:44.787460 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 30 13:06:44.790055 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:06:44.790079 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 30 13:06:44.806354 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:44.825660 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 13:06:44.846078 kernel: hv_netvsc 00224840-a3de-0022-4840-a3de00224840 eth0: VF slot 1 added Jan 30 13:06:44.846271 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:06:44.846437 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:06:44.846593 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 13:06:44.846747 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 13:06:44.846914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:44.846933 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:06:44.856346 kernel: hv_vmbus: registering driver hv_pci Jan 30 13:06:44.862634 kernel: hv_pci d3aff04e-b85a-4cd8-8bda-603eff4db421: PCI VMBus probing: Using version 0x10004 Jan 30 13:06:44.906234 kernel: hv_pci d3aff04e-b85a-4cd8-8bda-603eff4db421: PCI host bridge to bus b85a:00 Jan 30 13:06:44.906432 kernel: pci_bus b85a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 30 13:06:44.906609 kernel: pci_bus b85a:00: No busn resource found for root bus, 
will use [bus 00-ff] Jan 30 13:06:44.907556 kernel: pci b85a:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 30 13:06:44.907747 kernel: pci b85a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:06:44.907924 kernel: pci b85a:00:02.0: enabling Extended Tags Jan 30 13:06:44.908121 kernel: pci b85a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b85a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 30 13:06:44.908362 kernel: pci_bus b85a:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 13:06:44.908557 kernel: pci b85a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:06:45.067058 kernel: mlx5_core b85a:00:02.0: enabling device (0000 -> 0002) Jan 30 13:06:45.300225 kernel: mlx5_core b85a:00:02.0: firmware version: 14.30.5000 Jan 30 13:06:45.300439 kernel: hv_netvsc 00224840-a3de-0022-4840-a3de00224840 eth0: VF registering: eth1 Jan 30 13:06:45.300604 kernel: mlx5_core b85a:00:02.0 eth1: joined to eth0 Jan 30 13:06:45.301063 kernel: mlx5_core b85a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:06:45.309213 kernel: mlx5_core b85a:00:02.0 enP47194s1: renamed from eth1 Jan 30 13:06:45.349501 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 30 13:06:45.431378 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (452) Jan 30 13:06:45.451946 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 30 13:06:45.458063 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 13:06:45.468893 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (468) Jan 30 13:06:45.478194 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Jan 30 13:06:45.489547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:06:45.497259 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:06:45.511465 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:45.517016 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:46.524978 disk-uuid[605]: The operation has completed successfully. Jan 30 13:06:46.532166 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:46.607401 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:06:46.607530 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:06:46.636224 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:06:46.644023 sh[691]: Success Jan 30 13:06:46.675017 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:06:46.875766 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:06:46.889116 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:06:46.894051 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:06:46.913417 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:06:46.913566 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:46.919417 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:06:46.922453 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:06:46.925225 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:06:47.340911 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:06:47.344415 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jan 30 13:06:47.356208 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:06:47.361148 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:06:47.384265 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:47.384309 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:47.384329 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:06:47.405019 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:06:47.414854 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:06:47.421181 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:47.426437 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:06:47.438182 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:06:47.459747 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:06:47.472126 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:06:47.491504 systemd-networkd[875]: lo: Link UP Jan 30 13:06:47.491514 systemd-networkd[875]: lo: Gained carrier Jan 30 13:06:47.494205 systemd-networkd[875]: Enumeration completed Jan 30 13:06:47.494445 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:06:47.496620 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:06:47.496623 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:06:47.500469 systemd[1]: Reached target network.target - Network. 
Jan 30 13:06:47.567013 kernel: mlx5_core b85a:00:02.0 enP47194s1: Link up Jan 30 13:06:47.597015 kernel: hv_netvsc 00224840-a3de-0022-4840-a3de00224840 eth0: Data path switched to VF: enP47194s1 Jan 30 13:06:47.597337 systemd-networkd[875]: enP47194s1: Link UP Jan 30 13:06:47.600156 systemd-networkd[875]: eth0: Link UP Jan 30 13:06:47.601454 systemd-networkd[875]: eth0: Gained carrier Jan 30 13:06:47.601465 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:06:47.605180 systemd-networkd[875]: enP47194s1: Gained carrier Jan 30 13:06:47.647049 systemd-networkd[875]: eth0: DHCPv4 address 10.200.4.27/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 30 13:06:48.330921 ignition[838]: Ignition 2.20.0 Jan 30 13:06:48.330933 ignition[838]: Stage: fetch-offline Jan 30 13:06:48.330974 ignition[838]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:48.330984 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:48.331121 ignition[838]: parsed url from cmdline: "" Jan 30 13:06:48.331126 ignition[838]: no config URL provided Jan 30 13:06:48.331133 ignition[838]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:06:48.331143 ignition[838]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:06:48.331154 ignition[838]: failed to fetch config: resource requires networking Jan 30 13:06:48.333162 ignition[838]: Ignition finished successfully Jan 30 13:06:48.353230 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:06:48.360281 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 13:06:48.376598 ignition[884]: Ignition 2.20.0 Jan 30 13:06:48.376610 ignition[884]: Stage: fetch Jan 30 13:06:48.376811 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:48.376824 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:48.376945 ignition[884]: parsed url from cmdline: "" Jan 30 13:06:48.376948 ignition[884]: no config URL provided Jan 30 13:06:48.376953 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:06:48.376961 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:06:48.376984 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 13:06:48.452377 ignition[884]: GET result: OK Jan 30 13:06:48.452496 ignition[884]: config has been read from IMDS userdata Jan 30 13:06:48.452520 ignition[884]: parsing config with SHA512: 267094d5b35872679a63173082bd870208c0beb113eb346f95d97b4470425657d6786b4a3fc385f2b0c140e2fbe4aa7e8d4571eca91ca8a494d1084f39a2aae4 Jan 30 13:06:48.459838 unknown[884]: fetched base config from "system" Jan 30 13:06:48.459852 unknown[884]: fetched base config from "system" Jan 30 13:06:48.460214 ignition[884]: fetch: fetch complete Jan 30 13:06:48.459862 unknown[884]: fetched user config from "azure" Jan 30 13:06:48.460219 ignition[884]: fetch: fetch passed Jan 30 13:06:48.461770 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:06:48.460266 ignition[884]: Ignition finished successfully Jan 30 13:06:48.477330 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:06:48.492068 ignition[890]: Ignition 2.20.0 Jan 30 13:06:48.492080 ignition[890]: Stage: kargs Jan 30 13:06:48.492286 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:48.492299 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:48.495706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 30 13:06:48.492944 ignition[890]: kargs: kargs passed Jan 30 13:06:48.492988 ignition[890]: Ignition finished successfully Jan 30 13:06:48.509766 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:06:48.524072 ignition[897]: Ignition 2.20.0 Jan 30 13:06:48.524083 ignition[897]: Stage: disks Jan 30 13:06:48.525850 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:06:48.524300 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:48.524314 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:48.524959 ignition[897]: disks: disks passed Jan 30 13:06:48.539015 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:06:48.525053 ignition[897]: Ignition finished successfully Jan 30 13:06:48.544010 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:06:48.547017 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:06:48.551008 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:06:48.551884 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:06:48.560313 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:06:48.618937 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 30 13:06:48.627756 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:06:48.638074 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:06:48.730014 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:06:48.730554 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:06:48.733412 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Jan 30 13:06:48.734700 systemd-networkd[875]: eth0: Gained IPv6LL Jan 30 13:06:48.775136 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:06:48.781073 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:06:48.791009 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916) Jan 30 13:06:48.791214 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:06:48.799519 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:06:48.812483 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:48.812517 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:48.812543 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:06:48.801527 systemd-networkd[875]: enP47194s1: Gained IPv6LL Jan 30 13:06:48.816352 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:06:48.802008 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:06:48.823919 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:06:48.826245 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:06:48.839177 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 30 13:06:49.448407 coreos-metadata[918]: Jan 30 13:06:49.448 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:06:49.453861 coreos-metadata[918]: Jan 30 13:06:49.453 INFO Fetch successful
Jan 30 13:06:49.456865 coreos-metadata[918]: Jan 30 13:06:49.454 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:06:49.463177 coreos-metadata[918]: Jan 30 13:06:49.463 INFO Fetch successful
Jan 30 13:06:49.477160 coreos-metadata[918]: Jan 30 13:06:49.477 INFO wrote hostname ci-4186.1.0-a-551420da85 to /sysroot/etc/hostname
Jan 30 13:06:49.481855 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:06:49.488922 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:06:49.509797 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:06:49.517139 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:06:49.524418 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:06:50.310070 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:06:50.322140 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:06:50.329170 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:06:50.337016 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:06:50.337534 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:06:50.365095 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:06:50.369489 ignition[1037]: INFO : Ignition 2.20.0
Jan 30 13:06:50.369489 ignition[1037]: INFO : Stage: mount
Jan 30 13:06:50.369489 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:06:50.369489 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:06:50.369489 ignition[1037]: INFO : mount: mount passed
Jan 30 13:06:50.369489 ignition[1037]: INFO : Ignition finished successfully
Jan 30 13:06:50.380089 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:06:50.393148 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:06:50.399939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:06:50.427027 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1048)
Jan 30 13:06:50.431008 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:06:50.431045 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:06:50.435576 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:06:50.440007 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:06:50.441919 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:06:50.476123 ignition[1065]: INFO : Ignition 2.20.0
Jan 30 13:06:50.476123 ignition[1065]: INFO : Stage: files
Jan 30 13:06:50.480378 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:06:50.480378 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:06:50.480378 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:06:50.480378 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:06:50.480378 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:06:50.585818 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:06:50.592361 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:06:50.592361 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:06:50.589108 unknown[1065]: wrote ssh authorized keys file for user: core
Jan 30 13:06:50.618671 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:06:50.625138 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:06:50.639547 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:06:50.644108 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:06:50.644108 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:06:50.644108 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:06:50.644108 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:06:50.644108 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 13:06:51.168576 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 30 13:06:51.379507 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:06:51.379507 ignition[1065]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:06:51.379507 ignition[1065]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:06:51.379507 ignition[1065]: INFO : files: files passed
Jan 30 13:06:51.379507 ignition[1065]: INFO : Ignition finished successfully
Jan 30 13:06:51.400079 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:06:51.407179 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:06:51.414007 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:06:51.420081 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:06:51.420196 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:06:51.433877 initrd-setup-root-after-ignition[1093]: grep:
Jan 30 13:06:51.433877 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:06:51.445775 initrd-setup-root-after-ignition[1093]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:06:51.445775 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:06:51.434929 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:06:51.437585 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:06:51.449303 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:06:51.478083 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:06:51.478203 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:06:51.488763 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:06:51.491379 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:06:51.498836 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:06:51.507216 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:06:51.521383 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:06:51.531175 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:06:51.543939 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:06:51.545078 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:06:51.545483 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:06:51.545885 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:06:51.545986 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:06:51.546692 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:06:51.547562 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:06:51.547977 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:06:51.548398 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:06:51.548803 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:06:51.549370 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:06:51.549765 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:06:51.550199 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:06:51.550592 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:06:51.551232 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:06:51.551608 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:06:51.551734 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:06:51.552448 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:06:51.552890 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:06:51.657257 ignition[1117]: INFO : Ignition 2.20.0
Jan 30 13:06:51.657257 ignition[1117]: INFO : Stage: umount
Jan 30 13:06:51.657257 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:06:51.657257 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:06:51.657257 ignition[1117]: INFO : umount: umount passed
Jan 30 13:06:51.657257 ignition[1117]: INFO : Ignition finished successfully
Jan 30 13:06:51.553262 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:06:51.590223 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:06:51.596145 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:06:51.596309 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:06:51.601498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:06:51.601658 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:06:51.606225 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:06:51.606369 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:06:51.611386 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 13:06:51.611534 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:06:51.627249 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:06:51.635121 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:06:51.635463 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:06:51.644550 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:06:51.649614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:06:51.649798 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:06:51.662361 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:06:51.662501 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:06:51.670693 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:06:51.672082 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:06:51.679394 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:06:51.679495 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:06:51.684633 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:06:51.684679 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:06:51.694717 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:06:51.694770 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:06:51.759440 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:06:51.759537 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:06:51.764316 systemd[1]: Stopped target network.target - Network.
Jan 30 13:06:51.769254 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:06:51.769335 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:06:51.778029 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:06:51.782408 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:06:51.788341 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:06:51.792261 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:06:51.801207 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:06:51.805702 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:06:51.805763 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:06:51.810059 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:06:51.810106 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:06:51.814791 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:06:51.814848 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:06:51.819221 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:06:51.819276 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:06:51.828385 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:06:51.837151 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:06:51.848032 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:06:51.849043 systemd-networkd[875]: eth0: DHCPv6 lease lost
Jan 30 13:06:51.852272 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:06:51.854538 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:06:51.860250 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:06:51.860333 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:06:51.874168 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:06:51.876516 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:06:51.876610 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:06:51.879828 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:06:51.881288 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:06:51.881424 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:06:51.891938 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:06:51.892060 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:06:51.908571 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:06:51.908625 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:06:51.913292 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:06:51.913343 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:06:51.931691 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:06:51.931873 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:06:51.941283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:06:51.943787 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:06:51.944683 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:06:51.944715 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:06:51.945087 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:06:51.945128 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:06:51.946113 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:06:51.946149 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:06:51.946910 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:06:51.946946 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:06:52.002213 kernel: hv_netvsc 00224840-a3de-0022-4840-a3de00224840 eth0: Data path switched from VF: enP47194s1
Jan 30 13:06:51.965082 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:06:51.973144 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:06:51.973209 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:06:51.979867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:06:51.979914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:06:51.995478 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:06:51.995707 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:06:52.031364 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:06:52.031589 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:06:52.581580 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:06:52.581739 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:06:52.585128 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:06:52.592782 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:06:52.592854 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:06:52.609139 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:06:52.618197 systemd[1]: Switching root.
Jan 30 13:06:52.683172 systemd-journald[177]: Journal stopped
Jan 30 13:06:56.907442 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:06:56.907484 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:06:56.907500 kernel: SELinux: policy capability open_perms=1
Jan 30 13:06:56.907513 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:06:56.907525 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:06:56.907538 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:06:56.907552 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:06:56.907568 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:06:56.907581 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:06:56.907594 kernel: audit: type=1403 audit(1738242413.878:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:06:56.907608 systemd[1]: Successfully loaded SELinux policy in 165.322ms.
Jan 30 13:06:56.907624 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.476ms.
Jan 30 13:06:56.907639 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:06:56.907653 systemd[1]: Detected virtualization microsoft.
Jan 30 13:06:56.907672 systemd[1]: Detected architecture x86-64.
Jan 30 13:06:56.907687 systemd[1]: Detected first boot.
Jan 30 13:06:56.907702 systemd[1]: Hostname set to .
Jan 30 13:06:56.907717 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:06:56.907732 zram_generator::config[1162]: No configuration found.
Jan 30 13:06:56.907750 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:06:56.907765 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:06:56.907780 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:06:56.907794 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:06:56.907810 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:06:56.907825 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:06:56.907841 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:06:56.907859 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:06:56.907874 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:06:56.907890 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:06:56.907905 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:06:56.907922 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:06:56.907937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:06:56.907952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:06:56.907967 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:06:56.907984 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:06:56.908018 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:06:56.908035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:06:56.908050 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:06:56.908065 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:06:56.908080 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:06:56.908100 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:06:56.908115 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:06:56.908133 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:06:56.908149 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:06:56.908164 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:06:56.908179 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:06:56.908195 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:06:56.908210 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:06:56.908225 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:06:56.908241 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:06:56.908259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:06:56.908277 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:06:56.908293 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:06:56.908308 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:06:56.908326 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:06:56.908342 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:06:56.908358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:06:56.908374 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:06:56.908390 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:06:56.908405 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:06:56.908421 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:06:56.908437 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:06:56.908455 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:06:56.908471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:06:56.908487 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:06:56.908503 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:06:56.908518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:06:56.908534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:06:56.908550 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:06:56.908566 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:06:56.908581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:06:56.908600 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:06:56.908616 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:06:56.908632 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:06:56.908648 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:06:56.908663 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:06:56.908679 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:06:56.908694 kernel: loop: module loaded
Jan 30 13:06:56.908708 kernel: fuse: init (API version 7.39)
Jan 30 13:06:56.908725 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:06:56.908741 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:06:56.908777 systemd-journald[1254]: Collecting audit messages is disabled.
Jan 30 13:06:56.908809 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:06:56.908828 systemd-journald[1254]: Journal started
Jan 30 13:06:56.908860 systemd-journald[1254]: Runtime Journal (/run/log/journal/71b5ebe81ef948d9961a315c772473dd) is 8.0M, max 158.8M, 150.8M free.
Jan 30 13:06:56.181720 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:06:56.921482 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:06:56.320449 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 13:06:56.320841 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:06:56.931150 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:06:56.936731 kernel: ACPI: bus type drm_connector registered
Jan 30 13:06:56.936775 systemd[1]: Stopped verity-setup.service.
Jan 30 13:06:56.948047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:06:56.956013 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:06:56.956655 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:06:56.959445 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:06:56.962543 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:06:56.964947 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:06:56.969397 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:06:56.973085 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:06:56.976486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:06:56.981889 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:06:56.985135 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:06:56.985295 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:06:56.988346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:06:56.988502 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:06:56.993742 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:06:56.993900 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:06:56.997315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:06:56.997475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:06:57.001030 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:06:57.001207 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:06:57.004197 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:06:57.004457 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:06:57.007628 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:06:57.011006 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:06:57.014534 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:06:57.034427 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:06:57.044083 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:06:57.056071 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:06:57.059203 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:06:57.059333 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:06:57.067803 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:06:57.077224 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:06:57.082232 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:06:57.085099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:06:57.100348 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:06:57.105382 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:06:57.108321 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:06:57.113196 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:06:57.115989 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:06:57.117717 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:06:57.129449 systemd-journald[1254]: Time spent on flushing to /var/log/journal/71b5ebe81ef948d9961a315c772473dd is 27.941ms for 937 entries.
Jan 30 13:06:57.129449 systemd-journald[1254]: System Journal (/var/log/journal/71b5ebe81ef948d9961a315c772473dd) is 8.0M, max 2.6G, 2.6G free.
Jan 30 13:06:57.190335 systemd-journald[1254]: Received client request to flush runtime journal. Jan 30 13:06:57.123176 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:06:57.133394 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:06:57.138354 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:06:57.141581 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:06:57.147134 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:06:57.151385 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:06:57.155491 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:06:57.167591 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:06:57.179186 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:06:57.190186 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:06:57.194977 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:06:57.208745 udevadm[1308]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:06:57.221580 kernel: loop0: detected capacity change from 0 to 141000 Jan 30 13:06:57.242752 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:06:57.243874 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:06:57.265325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:06:57.360224 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 30 13:06:57.367163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:06:57.437530 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Jan 30 13:06:57.437554 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Jan 30 13:06:57.442150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:06:57.642056 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:06:57.694023 kernel: loop1: detected capacity change from 0 to 28304 Jan 30 13:06:57.994173 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 13:06:58.050026 kernel: loop3: detected capacity change from 0 to 138184 Jan 30 13:06:58.212039 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:06:58.222165 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:06:58.245719 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Jan 30 13:06:58.524027 kernel: loop4: detected capacity change from 0 to 141000 Jan 30 13:06:58.538040 kernel: loop5: detected capacity change from 0 to 28304 Jan 30 13:06:58.547389 kernel: loop6: detected capacity change from 0 to 210664 Jan 30 13:06:58.558246 kernel: loop7: detected capacity change from 0 to 138184 Jan 30 13:06:58.569133 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 30 13:06:58.569788 (sd-merge)[1325]: Merged extensions into '/usr'. Jan 30 13:06:58.573636 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:06:58.573653 systemd[1]: Reloading... Jan 30 13:06:58.708025 zram_generator::config[1372]: No configuration found. 
Jan 30 13:06:58.793128 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:06:58.819480 kernel: hv_vmbus: registering driver hv_balloon Jan 30 13:06:58.828025 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 30 13:06:58.841411 kernel: hv_vmbus: registering driver hyperv_fb Jan 30 13:06:58.847008 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 30 13:06:58.847097 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 30 13:06:58.858950 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:06:58.863045 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:06:59.019024 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1352) Jan 30 13:06:59.173797 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:06:59.360689 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:06:59.362095 systemd[1]: Reloading finished in 787 ms. Jan 30 13:06:59.389498 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 30 13:06:59.410404 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:06:59.415433 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:06:59.448371 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:06:59.458913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:06:59.470445 systemd[1]: Starting ensure-sysext.service... Jan 30 13:06:59.475180 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Jan 30 13:06:59.481172 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:06:59.488483 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:06:59.495124 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:06:59.503181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:59.514134 systemd[1]: Reloading requested from client PID 1508 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:06:59.514156 systemd[1]: Reloading... Jan 30 13:06:59.525080 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:06:59.525636 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:06:59.527260 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:06:59.527657 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jan 30 13:06:59.527732 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jan 30 13:06:59.538417 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:06:59.538429 systemd-tmpfiles[1512]: Skipping /boot Jan 30 13:06:59.551606 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:06:59.551626 systemd-tmpfiles[1512]: Skipping /boot Jan 30 13:06:59.607456 lvm[1509]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:06:59.638043 zram_generator::config[1548]: No configuration found. Jan 30 13:06:59.769302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 13:06:59.855140 systemd[1]: Reloading finished in 340 ms. Jan 30 13:06:59.882487 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:06:59.886223 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:06:59.889859 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:06:59.899791 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:06:59.901196 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:06:59.906300 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:06:59.925294 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:06:59.928694 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:06:59.930040 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:06:59.934085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:06:59.946230 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:06:59.952269 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:06:59.959142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:06:59.964724 lvm[1610]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:06:59.969217 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:06:59.980271 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 30 13:06:59.990299 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:06:59.997266 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:07:00.000609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:00.004314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:07:00.016309 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:07:00.024536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:07:00.024718 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:07:00.028875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:07:00.029214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:07:00.033660 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:07:00.033846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:07:00.050268 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:00.050943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:07:00.059347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:07:00.069319 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:07:00.081465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:07:00.087511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 30 13:07:00.087696 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:00.090108 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:07:00.099554 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:07:00.103777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:07:00.104397 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:07:00.115459 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:07:00.115654 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:07:00.122631 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:00.127034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:07:00.134349 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:07:00.145285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:07:00.148020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:07:00.148296 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:07:00.155616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:00.156846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:07:00.157906 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:07:00.162252 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 30 13:07:00.166158 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:07:00.167162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:07:00.170536 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:07:00.170835 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:07:00.176698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:07:00.177097 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:07:00.183308 systemd[1]: Finished ensure-sysext.service. Jan 30 13:07:00.223283 augenrules[1662]: No rules Jan 30 13:07:00.224912 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:07:00.225151 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:07:00.249016 systemd-resolved[1617]: Positive Trust Anchors: Jan 30 13:07:00.249033 systemd-resolved[1617]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:07:00.249093 systemd-resolved[1617]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:07:00.282943 systemd-resolved[1617]: Using system hostname 'ci-4186.1.0-a-551420da85'. Jan 30 13:07:00.284352 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 30 13:07:00.287557 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:07:00.327341 systemd-networkd[1511]: lo: Link UP Jan 30 13:07:00.327351 systemd-networkd[1511]: lo: Gained carrier Jan 30 13:07:00.330256 systemd-networkd[1511]: Enumeration completed Jan 30 13:07:00.330666 systemd-networkd[1511]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:07:00.330670 systemd-networkd[1511]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:07:00.332129 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:07:00.335431 systemd[1]: Reached target network.target - Network. Jan 30 13:07:00.344166 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:07:00.387026 kernel: mlx5_core b85a:00:02.0 enP47194s1: Link up Jan 30 13:07:00.389522 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:07:00.394204 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:07:00.407017 kernel: hv_netvsc 00224840-a3de-0022-4840-a3de00224840 eth0: Data path switched to VF: enP47194s1 Jan 30 13:07:00.407817 systemd-networkd[1511]: enP47194s1: Link UP Jan 30 13:07:00.407982 systemd-networkd[1511]: eth0: Link UP Jan 30 13:07:00.407987 systemd-networkd[1511]: eth0: Gained carrier Jan 30 13:07:00.408526 systemd-networkd[1511]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 13:07:00.412423 systemd-networkd[1511]: enP47194s1: Gained carrier Jan 30 13:07:00.440215 systemd-networkd[1511]: eth0: DHCPv4 address 10.200.4.27/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 30 13:07:01.661176 systemd-networkd[1511]: eth0: Gained IPv6LL Jan 30 13:07:01.663946 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:07:01.668376 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:07:02.301177 systemd-networkd[1511]: enP47194s1: Gained IPv6LL Jan 30 13:07:03.221927 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:07:03.233963 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:07:03.243377 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:07:03.256606 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:07:03.260466 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:07:03.264198 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:07:03.268123 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:07:03.272715 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:07:03.276483 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:07:03.280663 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:07:03.284291 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:07:03.284342 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:07:03.286908 systemd[1]: Reached target timers.target - Timer Units. 
Jan 30 13:07:03.290351 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:07:03.294388 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:07:03.301854 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:07:03.305202 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:07:03.307718 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:07:03.310134 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:07:03.312546 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:07:03.312580 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:07:03.334210 systemd[1]: Starting chronyd.service - NTP client/server... Jan 30 13:07:03.341180 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:07:03.350191 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:07:03.361257 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:07:03.370080 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:07:03.374943 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:07:03.378678 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:07:03.378736 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 30 13:07:03.382199 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 30 13:07:03.387224 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Jan 30 13:07:03.397114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:03.399918 (chronyd)[1677]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 30 13:07:03.403122 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:07:03.410800 jq[1681]: false Jan 30 13:07:03.414184 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:07:03.419731 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:07:03.424292 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:07:03.436191 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:07:03.439186 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:07:03.439826 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:07:03.452514 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:07:03.456833 KVP[1686]: KVP starting; pid is:1686 Jan 30 13:07:03.461158 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:07:03.472744 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:07:03.474052 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:07:03.475858 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:07:03.476865 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 30 13:07:03.482971 extend-filesystems[1682]: Found loop4 Jan 30 13:07:03.487939 kernel: hv_utils: KVP IC version 4.0 Jan 30 13:07:03.486543 KVP[1686]: KVP LIC Version: 3.1 Jan 30 13:07:03.491018 extend-filesystems[1682]: Found loop5 Jan 30 13:07:03.494635 extend-filesystems[1682]: Found loop6 Jan 30 13:07:03.494635 extend-filesystems[1682]: Found loop7 Jan 30 13:07:03.494635 extend-filesystems[1682]: Found sda Jan 30 13:07:03.494635 extend-filesystems[1682]: Found sda1 Jan 30 13:07:03.494635 extend-filesystems[1682]: Found sda2 Jan 30 13:07:03.494635 extend-filesystems[1682]: Found sda3 Jan 30 13:07:03.494635 extend-filesystems[1682]: Found usr Jan 30 13:07:03.494635 extend-filesystems[1682]: Found sda4 Jan 30 13:07:03.494635 extend-filesystems[1682]: Found sda6 Jan 30 13:07:03.494635 extend-filesystems[1682]: Found sda7 Jan 30 13:07:03.494635 extend-filesystems[1682]: Found sda9 Jan 30 13:07:03.494635 extend-filesystems[1682]: Checking size of /dev/sda9 Jan 30 13:07:03.517516 chronyd[1708]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 30 13:07:03.536070 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:07:03.536357 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:07:03.555099 chronyd[1708]: Timezone right/UTC failed leap second check, ignoring Jan 30 13:07:03.555396 chronyd[1708]: Loaded seccomp filter (level 2) Jan 30 13:07:03.556946 systemd[1]: Started chronyd.service - NTP client/server. Jan 30 13:07:03.560656 jq[1697]: true Jan 30 13:07:03.563674 dbus-daemon[1680]: [system] SELinux support is enabled Jan 30 13:07:03.567212 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 30 13:07:03.578638 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:07:03.578719 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:07:03.587479 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:07:03.587515 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:07:03.591424 (ntainerd)[1715]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:07:03.605166 extend-filesystems[1682]: Old size kept for /dev/sda9 Jan 30 13:07:03.609087 extend-filesystems[1682]: Found sr0 Jan 30 13:07:03.622736 update_engine[1696]: I20250130 13:07:03.622642 1696 main.cc:92] Flatcar Update Engine starting Jan 30 13:07:03.624790 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:07:03.625087 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:07:03.632066 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:07:03.651422 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:07:03.654641 jq[1718]: true Jan 30 13:07:03.655517 update_engine[1696]: I20250130 13:07:03.655180 1696 update_check_scheduler.cc:74] Next update check in 4m18s Jan 30 13:07:03.669821 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 30 13:07:03.708326 coreos-metadata[1679]: Jan 30 13:07:03.708 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 13:07:03.716477 coreos-metadata[1679]: Jan 30 13:07:03.716 INFO Fetch successful Jan 30 13:07:03.716477 coreos-metadata[1679]: Jan 30 13:07:03.716 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 30 13:07:03.725776 coreos-metadata[1679]: Jan 30 13:07:03.725 INFO Fetch successful Jan 30 13:07:03.725776 coreos-metadata[1679]: Jan 30 13:07:03.725 INFO Fetching http://168.63.129.16/machine/14685bf8-ce82-4b48-859f-654f856534d9/f80b1b30%2D40ca%2D4a58%2Dacdd%2D0bd5956066ac.%5Fci%2D4186.1.0%2Da%2D551420da85?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 30 13:07:03.727620 coreos-metadata[1679]: Jan 30 13:07:03.727 INFO Fetch successful Jan 30 13:07:03.728890 coreos-metadata[1679]: Jan 30 13:07:03.728 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 30 13:07:03.738202 systemd-logind[1693]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:07:03.741491 systemd-logind[1693]: New seat seat0. Jan 30 13:07:03.742336 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:07:03.749775 coreos-metadata[1679]: Jan 30 13:07:03.749 INFO Fetch successful Jan 30 13:07:03.804048 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1760) Jan 30 13:07:03.818626 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:07:03.823804 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:07:03.830411 bash[1758]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:07:03.832434 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 30 13:07:03.844085 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:07:04.007604 locksmithd[1740]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:07:04.520590 sshd_keygen[1719]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:07:04.551301 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:07:04.563359 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:07:04.572627 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 30 13:07:04.576742 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:07:04.577274 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:07:04.590781 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:07:04.605687 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:07:04.625417 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:07:04.631149 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:07:04.638781 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:07:04.651378 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 30 13:07:04.895055 containerd[1715]: time="2025-01-30T13:07:04.894905700Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:07:04.936122 containerd[1715]: time="2025-01-30T13:07:04.936066700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:04.938215 containerd[1715]: time="2025-01-30T13:07:04.938174800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:07:04.938316 containerd[1715]: time="2025-01-30T13:07:04.938215300Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:07:04.938316 containerd[1715]: time="2025-01-30T13:07:04.938237400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:07:04.938417 containerd[1715]: time="2025-01-30T13:07:04.938396000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:07:04.938457 containerd[1715]: time="2025-01-30T13:07:04.938426200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:04.938527 containerd[1715]: time="2025-01-30T13:07:04.938505500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:07:04.938566 containerd[1715]: time="2025-01-30T13:07:04.938530600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:04.938768 containerd[1715]: time="2025-01-30T13:07:04.938737200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:07:04.938832 containerd[1715]: time="2025-01-30T13:07:04.938768500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:07:04.938832 containerd[1715]: time="2025-01-30T13:07:04.938790200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:07:04.938832 containerd[1715]: time="2025-01-30T13:07:04.938803300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:04.938922 containerd[1715]: time="2025-01-30T13:07:04.938902800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:04.940083 containerd[1715]: time="2025-01-30T13:07:04.939195800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:04.940083 containerd[1715]: time="2025-01-30T13:07:04.939359300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:07:04.940083 containerd[1715]: time="2025-01-30T13:07:04.939380200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:07:04.940083 containerd[1715]: time="2025-01-30T13:07:04.939485100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:07:04.940083 containerd[1715]: time="2025-01-30T13:07:04.939545600Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:07:04.947399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:07:04.953864 containerd[1715]: time="2025-01-30T13:07:04.953830800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954093600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954144100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954169000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954190900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954378800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954752500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954894400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954917800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954937900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954956100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954973000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:07:04.955020 containerd[1715]: time="2025-01-30T13:07:04.954989800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955039600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955069800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955092100Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955109700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955127800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955155200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955174800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955192300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955210800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955228900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955246900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955262600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955280700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955441 containerd[1715]: time="2025-01-30T13:07:04.955299700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955320000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955338900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955356500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955374600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955394100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955422000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955439700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955455300Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955528600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955553400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955638300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955659400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:07:04.955877 containerd[1715]: time="2025-01-30T13:07:04.955673800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.957227 containerd[1715]: time="2025-01-30T13:07:04.955691800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:07:04.957227 containerd[1715]: time="2025-01-30T13:07:04.955705500Z" level=info msg="NRI interface is disabled by configuration." 
Jan 30 13:07:04.957227 containerd[1715]: time="2025-01-30T13:07:04.955720500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.956225800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.956303000Z" level=info msg="Connect containerd service" Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.956352900Z" level=info msg="using legacy CRI server" Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.956365500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.956526500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.957252200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.957405000Z" level=info msg="Start subscribing containerd event" Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.957455800Z" level=info msg="Start recovering state" Jan 30 13:07:04.957735 containerd[1715]: 
time="2025-01-30T13:07:04.957531800Z" level=info msg="Start event monitor" Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.957551700Z" level=info msg="Start snapshots syncer" Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.957563400Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:07:04.957735 containerd[1715]: time="2025-01-30T13:07:04.957573700Z" level=info msg="Start streaming server" Jan 30 13:07:04.958516 containerd[1715]: time="2025-01-30T13:07:04.958431700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:07:04.958516 containerd[1715]: time="2025-01-30T13:07:04.958485400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:07:04.958764 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:07:04.959248 containerd[1715]: time="2025-01-30T13:07:04.959233000Z" level=info msg="containerd successfully booted in 0.065495s" Jan 30 13:07:04.962621 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:07:04.965322 systemd[1]: Startup finished in 753ms (firmware) + 27.012s (loader) + 1.058s (kernel) + 10.575s (initrd) + 11.246s (userspace) = 50.645s. Jan 30 13:07:05.056883 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:07:05.100077 agetty[1851]: failed to open credentials directory Jan 30 13:07:05.100313 agetty[1853]: failed to open credentials directory Jan 30 13:07:05.297115 login[1851]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:07:05.298644 login[1853]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:07:05.312595 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:07:05.320279 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jan 30 13:07:05.326280 systemd-logind[1693]: New session 2 of user core. Jan 30 13:07:05.335764 systemd-logind[1693]: New session 1 of user core. Jan 30 13:07:05.343766 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:07:05.352581 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:07:05.377962 (systemd)[1876]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:07:05.597883 systemd[1876]: Queued start job for default target default.target. Jan 30 13:07:05.608446 systemd[1876]: Created slice app.slice - User Application Slice. Jan 30 13:07:05.608485 systemd[1876]: Reached target paths.target - Paths. Jan 30 13:07:05.608504 systemd[1876]: Reached target timers.target - Timers. Jan 30 13:07:05.610670 systemd[1876]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:07:05.636233 systemd[1876]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:07:05.636382 systemd[1876]: Reached target sockets.target - Sockets. Jan 30 13:07:05.636403 systemd[1876]: Reached target basic.target - Basic System. Jan 30 13:07:05.636455 systemd[1876]: Reached target default.target - Main User Target. Jan 30 13:07:05.636491 systemd[1876]: Startup finished in 249ms. Jan 30 13:07:05.636703 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:07:05.641205 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:07:05.642300 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 30 13:07:05.782545 kubelet[1864]: E0130 13:07:05.782486 1864 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:07:05.785953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:07:05.786730 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:07:06.446293 waagent[1854]: 2025-01-30T13:07:06.446190Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 30 13:07:06.450132 waagent[1854]: 2025-01-30T13:07:06.450058Z INFO Daemon Daemon OS: flatcar 4186.1.0 Jan 30 13:07:06.452337 waagent[1854]: 2025-01-30T13:07:06.452277Z INFO Daemon Daemon Python: 3.11.10 Jan 30 13:07:06.454817 waagent[1854]: 2025-01-30T13:07:06.454734Z INFO Daemon Daemon Run daemon Jan 30 13:07:06.458015 waagent[1854]: 2025-01-30T13:07:06.457867Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.0' Jan 30 13:07:06.463028 waagent[1854]: 2025-01-30T13:07:06.462954Z INFO Daemon Daemon Using waagent for provisioning Jan 30 13:07:06.466217 waagent[1854]: 2025-01-30T13:07:06.466163Z INFO Daemon Daemon Activate resource disk Jan 30 13:07:06.469107 waagent[1854]: 2025-01-30T13:07:06.469042Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 30 13:07:06.478064 waagent[1854]: 2025-01-30T13:07:06.477985Z INFO Daemon Daemon Found device: None Jan 30 13:07:06.480710 waagent[1854]: 2025-01-30T13:07:06.480642Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 30 13:07:06.485471 waagent[1854]: 2025-01-30T13:07:06.485403Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] 
unable to detect disk topology, duration=0 Jan 30 13:07:06.492142 waagent[1854]: 2025-01-30T13:07:06.492083Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:07:06.495445 waagent[1854]: 2025-01-30T13:07:06.495382Z INFO Daemon Daemon Running default provisioning handler Jan 30 13:07:06.506210 waagent[1854]: 2025-01-30T13:07:06.505818Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 30 13:07:06.513673 waagent[1854]: 2025-01-30T13:07:06.513617Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 30 13:07:06.522905 waagent[1854]: 2025-01-30T13:07:06.515355Z INFO Daemon Daemon cloud-init is enabled: False Jan 30 13:07:06.522905 waagent[1854]: 2025-01-30T13:07:06.516313Z INFO Daemon Daemon Copying ovf-env.xml Jan 30 13:07:06.600784 waagent[1854]: 2025-01-30T13:07:06.597243Z INFO Daemon Daemon Successfully mounted dvd Jan 30 13:07:06.627865 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 30 13:07:06.630052 waagent[1854]: 2025-01-30T13:07:06.629914Z INFO Daemon Daemon Detect protocol endpoint Jan 30 13:07:06.634336 waagent[1854]: 2025-01-30T13:07:06.634267Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:07:06.648822 waagent[1854]: 2025-01-30T13:07:06.635962Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 30 13:07:06.648822 waagent[1854]: 2025-01-30T13:07:06.636510Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 30 13:07:06.648822 waagent[1854]: 2025-01-30T13:07:06.637293Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 30 13:07:06.648822 waagent[1854]: 2025-01-30T13:07:06.637778Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 30 13:07:06.675623 waagent[1854]: 2025-01-30T13:07:06.675547Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 30 13:07:06.684659 waagent[1854]: 2025-01-30T13:07:06.677299Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 30 13:07:06.684659 waagent[1854]: 2025-01-30T13:07:06.677858Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 30 13:07:06.785098 waagent[1854]: 2025-01-30T13:07:06.784925Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 30 13:07:06.794922 waagent[1854]: 2025-01-30T13:07:06.786430Z INFO Daemon Daemon Forcing an update of the goal state. Jan 30 13:07:06.794922 waagent[1854]: 2025-01-30T13:07:06.790718Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:07:06.806140 waagent[1854]: 2025-01-30T13:07:06.806086Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.162 Jan 30 13:07:06.809957 waagent[1854]: 2025-01-30T13:07:06.809389Z INFO Daemon Jan 30 13:07:06.822213 waagent[1854]: 2025-01-30T13:07:06.810857Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f9c0cb70-6809-4805-830c-c14901b136e9 eTag: 16927233767570513335 source: Fabric] Jan 30 13:07:06.822213 waagent[1854]: 2025-01-30T13:07:06.812314Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 30 13:07:06.822213 waagent[1854]: 2025-01-30T13:07:06.813409Z INFO Daemon Jan 30 13:07:06.822213 waagent[1854]: 2025-01-30T13:07:06.814152Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:07:06.824749 waagent[1854]: 2025-01-30T13:07:06.824704Z INFO Daemon Daemon Downloading artifacts profile blob Jan 30 13:07:06.890935 waagent[1854]: 2025-01-30T13:07:06.890849Z INFO Daemon Downloaded certificate {'thumbprint': '17545798C6E8189223DE8953574B3C78ED5BB954', 'hasPrivateKey': True} Jan 30 13:07:06.896067 waagent[1854]: 2025-01-30T13:07:06.895985Z INFO Daemon Fetch goal state completed Jan 30 13:07:06.906155 waagent[1854]: 2025-01-30T13:07:06.906094Z INFO Daemon Daemon Starting provisioning Jan 30 13:07:06.913109 waagent[1854]: 2025-01-30T13:07:06.907728Z INFO Daemon Daemon Handle ovf-env.xml. Jan 30 13:07:06.913109 waagent[1854]: 2025-01-30T13:07:06.908602Z INFO Daemon Daemon Set hostname [ci-4186.1.0-a-551420da85] Jan 30 13:07:06.927819 waagent[1854]: 2025-01-30T13:07:06.927719Z INFO Daemon Daemon Publish hostname [ci-4186.1.0-a-551420da85] Jan 30 13:07:06.935715 waagent[1854]: 2025-01-30T13:07:06.929098Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 30 13:07:06.935715 waagent[1854]: 2025-01-30T13:07:06.930060Z INFO Daemon Daemon Primary interface is [eth0] Jan 30 13:07:06.957328 systemd-networkd[1511]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:07:06.957339 systemd-networkd[1511]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 13:07:06.957388 systemd-networkd[1511]: eth0: DHCP lease lost Jan 30 13:07:06.958746 waagent[1854]: 2025-01-30T13:07:06.958656Z INFO Daemon Daemon Create user account if not exists Jan 30 13:07:06.962238 waagent[1854]: 2025-01-30T13:07:06.961023Z INFO Daemon Daemon User core already exists, skip useradd Jan 30 13:07:06.962238 waagent[1854]: 2025-01-30T13:07:06.961415Z INFO Daemon Daemon Configure sudoer Jan 30 13:07:06.962238 waagent[1854]: 2025-01-30T13:07:06.961751Z INFO Daemon Daemon Configure sshd Jan 30 13:07:06.963016 waagent[1854]: 2025-01-30T13:07:06.962957Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 30 13:07:06.963630 waagent[1854]: 2025-01-30T13:07:06.963591Z INFO Daemon Daemon Deploy ssh public key. Jan 30 13:07:06.978104 systemd-networkd[1511]: eth0: DHCPv6 lease lost Jan 30 13:07:07.016351 systemd-networkd[1511]: eth0: DHCPv4 address 10.200.4.27/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 30 13:07:08.071625 waagent[1854]: 2025-01-30T13:07:08.071554Z INFO Daemon Daemon Provisioning complete Jan 30 13:07:08.082691 waagent[1854]: 2025-01-30T13:07:08.082625Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 30 13:07:08.089039 waagent[1854]: 2025-01-30T13:07:08.083735Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 30 13:07:08.089039 waagent[1854]: 2025-01-30T13:07:08.084475Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 30 13:07:08.208946 waagent[1931]: 2025-01-30T13:07:08.208839Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 30 13:07:08.209419 waagent[1931]: 2025-01-30T13:07:08.209027Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.0 Jan 30 13:07:08.209419 waagent[1931]: 2025-01-30T13:07:08.209121Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 30 13:07:08.257483 waagent[1931]: 2025-01-30T13:07:08.257387Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 30 13:07:08.257703 waagent[1931]: 2025-01-30T13:07:08.257655Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:07:08.257788 waagent[1931]: 2025-01-30T13:07:08.257756Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:07:08.264427 waagent[1931]: 2025-01-30T13:07:08.264365Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:07:08.269061 waagent[1931]: 2025-01-30T13:07:08.269012Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.162 Jan 30 13:07:08.269512 waagent[1931]: 2025-01-30T13:07:08.269461Z INFO ExtHandler Jan 30 13:07:08.269612 waagent[1931]: 2025-01-30T13:07:08.269549Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 20d4fbfc-0645-49a7-82ca-0ce92eb0ba67 eTag: 16927233767570513335 source: Fabric] Jan 30 13:07:08.269907 waagent[1931]: 2025-01-30T13:07:08.269860Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 30 13:07:08.270483 waagent[1931]: 2025-01-30T13:07:08.270427Z INFO ExtHandler Jan 30 13:07:08.270559 waagent[1931]: 2025-01-30T13:07:08.270512Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:07:08.273617 waagent[1931]: 2025-01-30T13:07:08.273572Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 30 13:07:08.332970 waagent[1931]: 2025-01-30T13:07:08.332824Z INFO ExtHandler Downloaded certificate {'thumbprint': '17545798C6E8189223DE8953574B3C78ED5BB954', 'hasPrivateKey': True} Jan 30 13:07:08.333485 waagent[1931]: 2025-01-30T13:07:08.333426Z INFO ExtHandler Fetch goal state completed Jan 30 13:07:08.345578 waagent[1931]: 2025-01-30T13:07:08.345512Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1931 Jan 30 13:07:08.345733 waagent[1931]: 2025-01-30T13:07:08.345684Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 30 13:07:08.347276 waagent[1931]: 2025-01-30T13:07:08.347216Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 30 13:07:08.347635 waagent[1931]: 2025-01-30T13:07:08.347584Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 30 13:07:08.531781 waagent[1931]: 2025-01-30T13:07:08.531728Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 30 13:07:08.532029 waagent[1931]: 2025-01-30T13:07:08.531963Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 30 13:07:08.539312 waagent[1931]: 2025-01-30T13:07:08.539185Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 30 13:07:08.546663 systemd[1]: Reloading requested from client PID 1944 ('systemctl') (unit waagent.service)... Jan 30 13:07:08.546681 systemd[1]: Reloading... 
Jan 30 13:07:08.635026 zram_generator::config[1974]: No configuration found. Jan 30 13:07:08.770538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:07:08.858612 systemd[1]: Reloading finished in 311 ms. Jan 30 13:07:08.889610 waagent[1931]: 2025-01-30T13:07:08.889500Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 30 13:07:08.897818 systemd[1]: Reloading requested from client PID 2035 ('systemctl') (unit waagent.service)... Jan 30 13:07:08.897835 systemd[1]: Reloading... Jan 30 13:07:08.983461 zram_generator::config[2072]: No configuration found. Jan 30 13:07:09.111087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:07:09.201579 systemd[1]: Reloading finished in 303 ms. Jan 30 13:07:09.232991 waagent[1931]: 2025-01-30T13:07:09.232791Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 30 13:07:09.233509 waagent[1931]: 2025-01-30T13:07:09.233264Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 30 13:07:09.638612 waagent[1931]: 2025-01-30T13:07:09.638520Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 30 13:07:09.639184 waagent[1931]: 2025-01-30T13:07:09.639124Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 30 13:07:09.639937 waagent[1931]: 2025-01-30T13:07:09.639875Z INFO ExtHandler ExtHandler Starting env monitor service. 
Jan 30 13:07:09.640083 waagent[1931]: 2025-01-30T13:07:09.640036Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:07:09.640233 waagent[1931]: 2025-01-30T13:07:09.640151Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:07:09.640603 waagent[1931]: 2025-01-30T13:07:09.640550Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 30 13:07:09.640720 waagent[1931]: 2025-01-30T13:07:09.640680Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:07:09.640824 waagent[1931]: 2025-01-30T13:07:09.640778Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:07:09.641123 waagent[1931]: 2025-01-30T13:07:09.641072Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 30 13:07:09.641414 waagent[1931]: 2025-01-30T13:07:09.641350Z INFO EnvHandler ExtHandler Configure routes Jan 30 13:07:09.641490 waagent[1931]: 2025-01-30T13:07:09.641443Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 30 13:07:09.641737 waagent[1931]: 2025-01-30T13:07:09.641685Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 30 13:07:09.642178 waagent[1931]: 2025-01-30T13:07:09.642125Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 30 13:07:09.642346 waagent[1931]: 2025-01-30T13:07:09.642303Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 30 13:07:09.642464 waagent[1931]: 2025-01-30T13:07:09.642375Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 30 13:07:09.642464 waagent[1931]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 30 13:07:09.642464 waagent[1931]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 30 13:07:09.642464 waagent[1931]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 30 13:07:09.642464 waagent[1931]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:07:09.642464 waagent[1931]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:07:09.642464 waagent[1931]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:07:09.643015 waagent[1931]: 2025-01-30T13:07:09.642922Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 30 13:07:09.643306 waagent[1931]: 2025-01-30T13:07:09.643190Z INFO EnvHandler ExtHandler Gateway:None Jan 30 13:07:09.643408 waagent[1931]: 2025-01-30T13:07:09.643367Z INFO EnvHandler ExtHandler Routes:None Jan 30 13:07:09.648722 waagent[1931]: 2025-01-30T13:07:09.648682Z INFO ExtHandler ExtHandler Jan 30 13:07:09.650134 waagent[1931]: 2025-01-30T13:07:09.650097Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 24b338d0-8fda-4fd9-9f49-22a49df5344c correlation 9e3334a6-849b-4f9c-a68f-ff6aeac66949 created: 2025-01-30T13:06:04.310333Z] Jan 30 13:07:09.650496 waagent[1931]: 2025-01-30T13:07:09.650451Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 30 13:07:09.650982 waagent[1931]: 2025-01-30T13:07:09.650939Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Jan 30 13:07:09.697946 waagent[1931]: 2025-01-30T13:07:09.697805Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C8E2329E-1BA3-49F4-A147-212EDBC02385;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 30 13:07:09.761080 waagent[1931]: 2025-01-30T13:07:09.760974Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 30 13:07:09.761080 waagent[1931]: Executing ['ip', '-a', '-o', 'link']:
Jan 30 13:07:09.761080 waagent[1931]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 30 13:07:09.761080 waagent[1931]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:40:a3:de brd ff:ff:ff:ff:ff:ff
Jan 30 13:07:09.761080 waagent[1931]: 3: enP47194s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:40:a3:de brd ff:ff:ff:ff:ff:ff\ altname enP47194p0s2
Jan 30 13:07:09.761080 waagent[1931]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 30 13:07:09.761080 waagent[1931]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 30 13:07:09.761080 waagent[1931]: 2: eth0 inet 10.200.4.27/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 30 13:07:09.761080 waagent[1931]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 30 13:07:09.761080 waagent[1931]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 30 13:07:09.761080 waagent[1931]: 2: eth0 inet6 fe80::222:48ff:fe40:a3de/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 30 13:07:09.761080 waagent[1931]: 3: enP47194s1 inet6 fe80::222:48ff:fe40:a3de/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 30 13:07:09.788846 waagent[1931]: 2025-01-30T13:07:09.788782Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jan 30 13:07:09.788846 waagent[1931]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 30 13:07:09.788846 waagent[1931]: pkts bytes target prot opt in out source destination
Jan 30 13:07:09.788846 waagent[1931]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 30 13:07:09.788846 waagent[1931]: pkts bytes target prot opt in out source destination
Jan 30 13:07:09.788846 waagent[1931]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 30 13:07:09.788846 waagent[1931]: pkts bytes target prot opt in out source destination
Jan 30 13:07:09.788846 waagent[1931]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 30 13:07:09.788846 waagent[1931]: 7 569 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 30 13:07:09.788846 waagent[1931]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 30 13:07:09.792144 waagent[1931]: 2025-01-30T13:07:09.792085Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 30 13:07:09.792144 waagent[1931]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 30 13:07:09.792144 waagent[1931]: pkts bytes target prot opt in out source destination
Jan 30 13:07:09.792144 waagent[1931]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 30 13:07:09.792144 waagent[1931]: pkts bytes target prot opt in out source destination
Jan 30 13:07:09.792144 waagent[1931]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 30 13:07:09.792144 waagent[1931]: pkts bytes target prot opt in out source destination
Jan 30 13:07:09.792144 waagent[1931]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 30 13:07:09.792144 waagent[1931]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 30 13:07:09.792144 waagent[1931]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 30 13:07:09.792576 waagent[1931]: 2025-01-30T13:07:09.792436Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 30 13:07:16.037644 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:07:16.043215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:07:16.151593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:07:16.156451 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:07:16.735670 kubelet[2165]: E0130 13:07:16.735567 2165 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:07:16.739699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:07:16.739906 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:07:26.813727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 13:07:26.820218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:07:26.908806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:07:26.913406 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:07:27.345653 chronyd[1708]: Selected source PHC0
Jan 30 13:07:27.547855 kubelet[2181]: E0130 13:07:27.547761 2181 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:07:27.550364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:07:27.550558 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:07:37.563599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 30 13:07:37.569221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:07:37.659653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:07:37.664291 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:07:38.292218 kubelet[2197]: E0130 13:07:38.292160 2197 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:07:38.294777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:07:38.294970 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:07:42.641487 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 13:07:42.646282 systemd[1]: Started sshd@0-10.200.4.27:22-10.200.16.10:57898.service - OpenSSH per-connection server daemon (10.200.16.10:57898).
Jan 30 13:07:43.394567 sshd[2206]: Accepted publickey for core from 10.200.16.10 port 57898 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:07:43.396390 sshd-session[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:43.401303 systemd-logind[1693]: New session 3 of user core.
Jan 30 13:07:43.411164 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 13:07:43.953266 systemd[1]: Started sshd@1-10.200.4.27:22-10.200.16.10:57900.service - OpenSSH per-connection server daemon (10.200.16.10:57900).
Jan 30 13:07:44.604818 sshd[2211]: Accepted publickey for core from 10.200.16.10 port 57900 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:07:44.606328 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:44.610394 systemd-logind[1693]: New session 4 of user core.
Jan 30 13:07:44.621186 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 13:07:45.057537 sshd[2213]: Connection closed by 10.200.16.10 port 57900
Jan 30 13:07:45.058881 sshd-session[2211]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:45.062161 systemd[1]: sshd@1-10.200.4.27:22-10.200.16.10:57900.service: Deactivated successfully.
Jan 30 13:07:45.064418 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 13:07:45.066210 systemd-logind[1693]: Session 4 logged out. Waiting for processes to exit.
Jan 30 13:07:45.067393 systemd-logind[1693]: Removed session 4.
Jan 30 13:07:45.178348 systemd[1]: Started sshd@2-10.200.4.27:22-10.200.16.10:57914.service - OpenSSH per-connection server daemon (10.200.16.10:57914).
Jan 30 13:07:45.821009 sshd[2218]: Accepted publickey for core from 10.200.16.10 port 57914 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:07:45.822549 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:45.827313 systemd-logind[1693]: New session 5 of user core.
Jan 30 13:07:45.834154 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 13:07:46.268815 sshd[2220]: Connection closed by 10.200.16.10 port 57914
Jan 30 13:07:46.270158 sshd-session[2218]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:46.275169 systemd[1]: sshd@2-10.200.4.27:22-10.200.16.10:57914.service: Deactivated successfully.
Jan 30 13:07:46.277468 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 13:07:46.278392 systemd-logind[1693]: Session 5 logged out. Waiting for processes to exit.
Jan 30 13:07:46.279475 systemd-logind[1693]: Removed session 5.
Jan 30 13:07:46.381081 systemd[1]: Started sshd@3-10.200.4.27:22-10.200.16.10:56562.service - OpenSSH per-connection server daemon (10.200.16.10:56562).
Jan 30 13:07:46.952856 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jan 30 13:07:47.024564 sshd[2225]: Accepted publickey for core from 10.200.16.10 port 56562 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:07:47.026228 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:47.031067 systemd-logind[1693]: New session 6 of user core.
Jan 30 13:07:47.039154 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 13:07:47.478324 sshd[2227]: Connection closed by 10.200.16.10 port 56562
Jan 30 13:07:47.479165 sshd-session[2225]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:47.482806 systemd[1]: sshd@3-10.200.4.27:22-10.200.16.10:56562.service: Deactivated successfully.
Jan 30 13:07:47.484646 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 13:07:47.485473 systemd-logind[1693]: Session 6 logged out. Waiting for processes to exit.
Jan 30 13:07:47.486382 systemd-logind[1693]: Removed session 6.
Jan 30 13:07:47.594127 systemd[1]: Started sshd@4-10.200.4.27:22-10.200.16.10:56578.service - OpenSSH per-connection server daemon (10.200.16.10:56578).
Jan 30 13:07:48.235663 sshd[2232]: Accepted publickey for core from 10.200.16.10 port 56578 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:07:48.237084 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:48.241780 systemd-logind[1693]: New session 7 of user core.
Jan 30 13:07:48.251183 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 13:07:48.313562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 30 13:07:48.321232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:07:48.422607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:07:48.427420 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:07:48.467417 kubelet[2243]: E0130 13:07:48.467363 2243 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:07:48.469844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:07:48.470052 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:07:49.086970 update_engine[1696]: I20250130 13:07:49.086862 1696 update_attempter.cc:509] Updating boot flags...
Jan 30 13:07:49.151042 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2267)
Jan 30 13:07:49.152910 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 13:07:49.153592 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:07:49.183704 sudo[2251]: pam_unix(sudo:session): session closed for user root
Jan 30 13:07:49.283281 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2268)
Jan 30 13:07:49.289632 sshd[2234]: Connection closed by 10.200.16.10 port 56578
Jan 30 13:07:49.294341 sshd-session[2232]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:49.304517 systemd[1]: sshd@4-10.200.4.27:22-10.200.16.10:56578.service: Deactivated successfully.
Jan 30 13:07:49.307751 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:07:49.311038 systemd-logind[1693]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:07:49.315417 systemd-logind[1693]: Removed session 7.
Jan 30 13:07:49.411456 systemd[1]: Started sshd@5-10.200.4.27:22-10.200.16.10:56590.service - OpenSSH per-connection server daemon (10.200.16.10:56590).
Jan 30 13:07:49.435015 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2268)
Jan 30 13:07:50.080623 sshd[2378]: Accepted publickey for core from 10.200.16.10 port 56590 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:07:50.082370 sshd-session[2378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:50.087947 systemd-logind[1693]: New session 8 of user core.
Jan 30 13:07:50.094148 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 13:07:50.430410 sudo[2425]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 13:07:50.430771 sudo[2425]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:07:50.434261 sudo[2425]: pam_unix(sudo:session): session closed for user root
Jan 30 13:07:50.439223 sudo[2424]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 30 13:07:50.439574 sudo[2424]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:07:50.458452 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:07:50.484167 augenrules[2447]: No rules
Jan 30 13:07:50.485589 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:07:50.485841 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:07:50.487524 sudo[2424]: pam_unix(sudo:session): session closed for user root
Jan 30 13:07:50.593560 sshd[2423]: Connection closed by 10.200.16.10 port 56590
Jan 30 13:07:50.594474 sshd-session[2378]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:50.597882 systemd[1]: sshd@5-10.200.4.27:22-10.200.16.10:56590.service: Deactivated successfully.
Jan 30 13:07:50.600012 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 13:07:50.601472 systemd-logind[1693]: Session 8 logged out. Waiting for processes to exit.
Jan 30 13:07:50.602495 systemd-logind[1693]: Removed session 8.
Jan 30 13:07:50.705982 systemd[1]: Started sshd@6-10.200.4.27:22-10.200.16.10:56596.service - OpenSSH per-connection server daemon (10.200.16.10:56596).
Jan 30 13:07:51.345799 sshd[2455]: Accepted publickey for core from 10.200.16.10 port 56596 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:07:51.347238 sshd-session[2455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:51.351274 systemd-logind[1693]: New session 9 of user core.
Jan 30 13:07:51.361167 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 13:07:51.695711 sudo[2458]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 13:07:51.696097 sudo[2458]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:07:52.974273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:07:52.980289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:07:53.014175 systemd[1]: Reloading requested from client PID 2495 ('systemctl') (unit session-9.scope)...
Jan 30 13:07:53.014204 systemd[1]: Reloading...
Jan 30 13:07:53.132087 zram_generator::config[2530]: No configuration found.
Jan 30 13:07:53.265455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:07:53.351654 systemd[1]: Reloading finished in 336 ms.
Jan 30 13:07:53.618233 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 13:07:53.618364 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 13:07:53.618692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:07:53.624438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:07:53.755852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:07:53.767450 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:07:53.809333 kubelet[2603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:07:53.809333 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:07:53.809333 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:07:53.809888 kubelet[2603]: I0130 13:07:53.809386 2603 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:07:54.666083 kubelet[2603]: I0130 13:07:54.666041 2603 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 13:07:54.666083 kubelet[2603]: I0130 13:07:54.666071 2603 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:07:54.666341 kubelet[2603]: I0130 13:07:54.666320 2603 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 13:07:54.681195 kubelet[2603]: I0130 13:07:54.680901 2603 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:07:54.697603 kubelet[2603]: I0130 13:07:54.697579 2603 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:07:54.698928 kubelet[2603]: I0130 13:07:54.698882 2603 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:07:54.699172 kubelet[2603]: I0130 13:07:54.698984 2603 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.4.27","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 13:07:54.699551 kubelet[2603]: I0130 13:07:54.699530 2603 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:07:54.699607 kubelet[2603]: I0130 13:07:54.699556 2603 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 13:07:54.699706 kubelet[2603]: I0130 13:07:54.699688 2603 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:07:54.700435 kubelet[2603]: I0130 13:07:54.700414 2603 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 13:07:54.700435 kubelet[2603]: I0130 13:07:54.700435 2603 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:07:54.700536 kubelet[2603]: I0130 13:07:54.700459 2603 kubelet.go:312] "Adding apiserver pod source"
Jan 30 13:07:54.700536 kubelet[2603]: I0130 13:07:54.700483 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:07:54.701606 kubelet[2603]: E0130 13:07:54.701584 2603 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:07:54.702006 kubelet[2603]: E0130 13:07:54.701980 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:07:54.704625 kubelet[2603]: I0130 13:07:54.704460 2603 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:07:54.705752 kubelet[2603]: I0130 13:07:54.705734 2603 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:07:54.707523 kubelet[2603]: W0130 13:07:54.707285 2603 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 30 13:07:54.707610 kubelet[2603]: E0130 13:07:54.707536 2603 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 30 13:07:54.707610 kubelet[2603]: W0130 13:07:54.707365 2603 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.4.27" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 30 13:07:54.707610 kubelet[2603]: E0130 13:07:54.707558 2603 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.4.27" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 30 13:07:54.707610 kubelet[2603]: W0130 13:07:54.707501 2603 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 13:07:54.708249 kubelet[2603]: I0130 13:07:54.708184 2603 server.go:1264] "Started kubelet"
Jan 30 13:07:54.708590 kubelet[2603]: I0130 13:07:54.708557 2603 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:07:54.709878 kubelet[2603]: I0130 13:07:54.709695 2603 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 13:07:54.710872 kubelet[2603]: I0130 13:07:54.710610 2603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:07:54.710950 kubelet[2603]: I0130 13:07:54.710881 2603 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:07:54.712007 kubelet[2603]: I0130 13:07:54.711964 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:07:54.720019 kubelet[2603]: I0130 13:07:54.719825 2603 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 13:07:54.721290 kubelet[2603]: I0130 13:07:54.721263 2603 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:07:54.721942 kubelet[2603]: I0130 13:07:54.721913 2603 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:07:54.730823 kubelet[2603]: I0130 13:07:54.730465 2603 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:07:54.730823 kubelet[2603]: I0130 13:07:54.730593 2603 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:07:54.731813 kubelet[2603]: E0130 13:07:54.731752 2603 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.4.27\" not found" node="10.200.4.27"
Jan 30 13:07:54.732472 kubelet[2603]: E0130 13:07:54.732376 2603 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:07:54.733194 kubelet[2603]: I0130 13:07:54.733173 2603 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:07:54.754551 kubelet[2603]: I0130 13:07:54.754519 2603 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:07:54.754551 kubelet[2603]: I0130 13:07:54.754543 2603 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:07:54.754853 kubelet[2603]: I0130 13:07:54.754581 2603 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:07:54.775894 kubelet[2603]: I0130 13:07:54.775851 2603 policy_none.go:49] "None policy: Start"
Jan 30 13:07:54.776664 kubelet[2603]: I0130 13:07:54.776640 2603 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:07:54.776768 kubelet[2603]: I0130 13:07:54.776680 2603 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:07:54.785817 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 13:07:54.796832 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 13:07:54.799890 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 13:07:54.812058 kubelet[2603]: I0130 13:07:54.810901 2603 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:07:54.812058 kubelet[2603]: I0130 13:07:54.811185 2603 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:07:54.812058 kubelet[2603]: I0130 13:07:54.811392 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:07:54.813944 kubelet[2603]: E0130 13:07:54.813855 2603 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.4.27\" not found"
Jan 30 13:07:54.820224 kubelet[2603]: I0130 13:07:54.820117 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:07:54.821598 kubelet[2603]: I0130 13:07:54.821570 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:07:54.821598 kubelet[2603]: I0130 13:07:54.821588 2603 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:07:54.821694 kubelet[2603]: I0130 13:07:54.821606 2603 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 13:07:54.821694 kubelet[2603]: E0130 13:07:54.821660 2603 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 30 13:07:54.823439 kubelet[2603]: I0130 13:07:54.823145 2603 kubelet_node_status.go:73] "Attempting to register node" node="10.200.4.27"
Jan 30 13:07:54.829578 kubelet[2603]: I0130 13:07:54.829562 2603 kubelet_node_status.go:76] "Successfully registered node" node="10.200.4.27"
Jan 30 13:07:54.847441 kubelet[2603]: E0130 13:07:54.847396 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:54.948682 kubelet[2603]: E0130 13:07:54.948524 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:55.049151 kubelet[2603]: E0130 13:07:55.049081 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:55.150160 kubelet[2603]: E0130 13:07:55.150104 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:55.251427 kubelet[2603]: E0130 13:07:55.251275 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:55.272415 sudo[2458]: pam_unix(sudo:session): session closed for user root
Jan 30 13:07:55.351550 kubelet[2603]: E0130 13:07:55.351486 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:55.374136 sshd[2457]: Connection closed by 10.200.16.10 port 56596
Jan 30 13:07:55.374839 sshd-session[2455]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:55.379350 systemd[1]: sshd@6-10.200.4.27:22-10.200.16.10:56596.service: Deactivated successfully.
Jan 30 13:07:55.381584 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 13:07:55.382747 systemd-logind[1693]: Session 9 logged out. Waiting for processes to exit.
Jan 30 13:07:55.384054 systemd-logind[1693]: Removed session 9.
Jan 30 13:07:55.452125 kubelet[2603]: E0130 13:07:55.452066 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:55.552887 kubelet[2603]: E0130 13:07:55.552731 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:55.653399 kubelet[2603]: E0130 13:07:55.653344 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:55.668631 kubelet[2603]: I0130 13:07:55.668568 2603 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 30 13:07:55.669021 kubelet[2603]: W0130 13:07:55.668824 2603 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 30 13:07:55.669021 kubelet[2603]: W0130 13:07:55.668877 2603 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 30 13:07:55.703707 kubelet[2603]: E0130 13:07:55.703634 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:07:55.754521 kubelet[2603]: E0130 13:07:55.754461 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:55.854715 kubelet[2603]: E0130 13:07:55.854653 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:55.955197 kubelet[2603]: E0130 13:07:55.955138 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:56.055804 kubelet[2603]: E0130 13:07:56.055746 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.27\" not found"
Jan 30 13:07:56.157286 kubelet[2603]: I0130 13:07:56.157168 2603 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 30 13:07:56.157800 containerd[1715]: time="2025-01-30T13:07:56.157720880Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:07:56.158240 kubelet[2603]: I0130 13:07:56.158025 2603 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 30 13:07:56.703861 kubelet[2603]: I0130 13:07:56.703800 2603 apiserver.go:52] "Watching apiserver"
Jan 30 13:07:56.704124 kubelet[2603]: E0130 13:07:56.703806 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:07:56.708081 kubelet[2603]: I0130 13:07:56.707973 2603 topology_manager.go:215] "Topology Admit Handler" podUID="1565831f-d766-463d-9123-bbb867cb766e" podNamespace="calico-system" podName="calico-node-nz878"
Jan 30 13:07:56.708200 kubelet[2603]: I0130 13:07:56.708156 2603 topology_manager.go:215] "Topology Admit Handler" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480" podNamespace="calico-system" podName="csi-node-driver-pp95d"
Jan 30 13:07:56.709276 kubelet[2603]: I0130 13:07:56.708261 2603 topology_manager.go:215] "Topology Admit Handler" podUID="487674cf-6e61-4dce-8825-f6f927950acd" podNamespace="kube-system" podName="kube-proxy-g79vn"
Jan 30 13:07:56.709276 kubelet[2603]: E0130 13:07:56.708476 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480"
Jan 30 13:07:56.719590 systemd[1]: Created slice kubepods-besteffort-pod487674cf_6e61_4dce_8825_f6f927950acd.slice - libcontainer container kubepods-besteffort-pod487674cf_6e61_4dce_8825_f6f927950acd.slice.
Jan 30 13:07:56.724825 kubelet[2603]: I0130 13:07:56.724762 2603 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:07:56.733224 kubelet[2603]: I0130 13:07:56.733085 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1565831f-d766-463d-9123-bbb867cb766e-var-run-calico\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878"
Jan 30 13:07:56.733224 kubelet[2603]: I0130 13:07:56.733133 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1565831f-d766-463d-9123-bbb867cb766e-cni-log-dir\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878"
Jan 30 13:07:56.733224 kubelet[2603]: I0130 13:07:56.733161 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/088dc3c1-e9d0-46ba-ae12-4f7130d43480-registration-dir\") pod \"csi-node-driver-pp95d\" (UID: \"088dc3c1-e9d0-46ba-ae12-4f7130d43480\") " pod="calico-system/csi-node-driver-pp95d"
Jan 30 13:07:56.733224
kubelet[2603]: I0130 13:07:56.733186 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/487674cf-6e61-4dce-8825-f6f927950acd-lib-modules\") pod \"kube-proxy-g79vn\" (UID: \"487674cf-6e61-4dce-8825-f6f927950acd\") " pod="kube-system/kube-proxy-g79vn" Jan 30 13:07:56.733224 kubelet[2603]: I0130 13:07:56.733208 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1565831f-d766-463d-9123-bbb867cb766e-tigera-ca-bundle\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878" Jan 30 13:07:56.733501 kubelet[2603]: I0130 13:07:56.733229 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8666j\" (UniqueName: \"kubernetes.io/projected/1565831f-d766-463d-9123-bbb867cb766e-kube-api-access-8666j\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878" Jan 30 13:07:56.733501 kubelet[2603]: I0130 13:07:56.733250 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/088dc3c1-e9d0-46ba-ae12-4f7130d43480-varrun\") pod \"csi-node-driver-pp95d\" (UID: \"088dc3c1-e9d0-46ba-ae12-4f7130d43480\") " pod="calico-system/csi-node-driver-pp95d" Jan 30 13:07:56.733501 kubelet[2603]: I0130 13:07:56.733270 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/487674cf-6e61-4dce-8825-f6f927950acd-kube-proxy\") pod \"kube-proxy-g79vn\" (UID: \"487674cf-6e61-4dce-8825-f6f927950acd\") " pod="kube-system/kube-proxy-g79vn" Jan 30 13:07:56.733501 kubelet[2603]: I0130 13:07:56.733290 2603 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdhms\" (UniqueName: \"kubernetes.io/projected/487674cf-6e61-4dce-8825-f6f927950acd-kube-api-access-fdhms\") pod \"kube-proxy-g79vn\" (UID: \"487674cf-6e61-4dce-8825-f6f927950acd\") " pod="kube-system/kube-proxy-g79vn" Jan 30 13:07:56.733501 kubelet[2603]: I0130 13:07:56.733309 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1565831f-d766-463d-9123-bbb867cb766e-xtables-lock\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878" Jan 30 13:07:56.733680 kubelet[2603]: I0130 13:07:56.733330 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1565831f-d766-463d-9123-bbb867cb766e-policysync\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878" Jan 30 13:07:56.733680 kubelet[2603]: I0130 13:07:56.733350 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1565831f-d766-463d-9123-bbb867cb766e-var-lib-calico\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878" Jan 30 13:07:56.733680 kubelet[2603]: I0130 13:07:56.733373 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1565831f-d766-463d-9123-bbb867cb766e-cni-net-dir\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878" Jan 30 13:07:56.733680 kubelet[2603]: I0130 13:07:56.733394 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1565831f-d766-463d-9123-bbb867cb766e-flexvol-driver-host\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878" Jan 30 13:07:56.733680 kubelet[2603]: I0130 13:07:56.733417 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/088dc3c1-e9d0-46ba-ae12-4f7130d43480-socket-dir\") pod \"csi-node-driver-pp95d\" (UID: \"088dc3c1-e9d0-46ba-ae12-4f7130d43480\") " pod="calico-system/csi-node-driver-pp95d" Jan 30 13:07:56.733863 kubelet[2603]: I0130 13:07:56.733438 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/487674cf-6e61-4dce-8825-f6f927950acd-xtables-lock\") pod \"kube-proxy-g79vn\" (UID: \"487674cf-6e61-4dce-8825-f6f927950acd\") " pod="kube-system/kube-proxy-g79vn" Jan 30 13:07:56.733863 kubelet[2603]: I0130 13:07:56.733459 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1565831f-d766-463d-9123-bbb867cb766e-lib-modules\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878" Jan 30 13:07:56.733863 kubelet[2603]: I0130 13:07:56.733483 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1565831f-d766-463d-9123-bbb867cb766e-node-certs\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878" Jan 30 13:07:56.733863 kubelet[2603]: I0130 13:07:56.733505 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/1565831f-d766-463d-9123-bbb867cb766e-cni-bin-dir\") pod \"calico-node-nz878\" (UID: \"1565831f-d766-463d-9123-bbb867cb766e\") " pod="calico-system/calico-node-nz878" Jan 30 13:07:56.733863 kubelet[2603]: I0130 13:07:56.733527 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/088dc3c1-e9d0-46ba-ae12-4f7130d43480-kubelet-dir\") pod \"csi-node-driver-pp95d\" (UID: \"088dc3c1-e9d0-46ba-ae12-4f7130d43480\") " pod="calico-system/csi-node-driver-pp95d" Jan 30 13:07:56.734084 kubelet[2603]: I0130 13:07:56.733550 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fh4b\" (UniqueName: \"kubernetes.io/projected/088dc3c1-e9d0-46ba-ae12-4f7130d43480-kube-api-access-7fh4b\") pod \"csi-node-driver-pp95d\" (UID: \"088dc3c1-e9d0-46ba-ae12-4f7130d43480\") " pod="calico-system/csi-node-driver-pp95d" Jan 30 13:07:56.734389 systemd[1]: Created slice kubepods-besteffort-pod1565831f_d766_463d_9123_bbb867cb766e.slice - libcontainer container kubepods-besteffort-pod1565831f_d766_463d_9123_bbb867cb766e.slice. Jan 30 13:07:56.837694 kubelet[2603]: E0130 13:07:56.837524 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.837694 kubelet[2603]: W0130 13:07:56.837553 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.837694 kubelet[2603]: E0130 13:07:56.837585 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:07:56.838249 kubelet[2603]: E0130 13:07:56.838136 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.838249 kubelet[2603]: W0130 13:07:56.838153 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.838249 kubelet[2603]: E0130 13:07:56.838168 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:07:56.838677 kubelet[2603]: E0130 13:07:56.838553 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.838677 kubelet[2603]: W0130 13:07:56.838569 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.838677 kubelet[2603]: E0130 13:07:56.838591 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:07:56.839022 kubelet[2603]: E0130 13:07:56.838989 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.839212 kubelet[2603]: W0130 13:07:56.839102 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.839212 kubelet[2603]: E0130 13:07:56.839122 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:07:56.839454 kubelet[2603]: E0130 13:07:56.839441 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.839641 kubelet[2603]: W0130 13:07:56.839537 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.839641 kubelet[2603]: E0130 13:07:56.839558 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:07:56.839881 kubelet[2603]: E0130 13:07:56.839866 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.842006 kubelet[2603]: W0130 13:07:56.839951 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.842006 kubelet[2603]: E0130 13:07:56.839973 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:07:56.852609 kubelet[2603]: E0130 13:07:56.852464 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.852609 kubelet[2603]: W0130 13:07:56.852485 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.852609 kubelet[2603]: E0130 13:07:56.852523 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:07:56.856234 kubelet[2603]: E0130 13:07:56.856103 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.856234 kubelet[2603]: W0130 13:07:56.856124 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.860022 kubelet[2603]: E0130 13:07:56.857048 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:07:56.860022 kubelet[2603]: E0130 13:07:56.857193 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.860022 kubelet[2603]: W0130 13:07:56.858416 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.860022 kubelet[2603]: E0130 13:07:56.858503 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:07:56.860022 kubelet[2603]: E0130 13:07:56.858858 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.860022 kubelet[2603]: W0130 13:07:56.858870 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.860022 kubelet[2603]: E0130 13:07:56.858954 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:07:56.860022 kubelet[2603]: E0130 13:07:56.859174 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.860022 kubelet[2603]: W0130 13:07:56.859184 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.860022 kubelet[2603]: E0130 13:07:56.859270 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:07:56.860700 kubelet[2603]: E0130 13:07:56.859631 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.860700 kubelet[2603]: W0130 13:07:56.859641 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.860700 kubelet[2603]: E0130 13:07:56.859819 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.860700 kubelet[2603]: W0130 13:07:56.859829 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.860700 kubelet[2603]: E0130 13:07:56.859841 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:07:56.860700 kubelet[2603]: E0130 13:07:56.859960 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:07:56.860700 kubelet[2603]: E0130 13:07:56.860258 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.860700 kubelet[2603]: W0130 13:07:56.860269 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.860700 kubelet[2603]: E0130 13:07:56.860282 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:07:56.861407 kubelet[2603]: E0130 13:07:56.861364 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.861564 kubelet[2603]: W0130 13:07:56.861494 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.861564 kubelet[2603]: E0130 13:07:56.861513 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:07:56.869063 kubelet[2603]: E0130 13:07:56.868080 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:07:56.869218 kubelet[2603]: W0130 13:07:56.869163 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:07:56.869218 kubelet[2603]: E0130 13:07:56.869187 2603 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:07:57.031478 containerd[1715]: time="2025-01-30T13:07:57.031333710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g79vn,Uid:487674cf-6e61-4dce-8825-f6f927950acd,Namespace:kube-system,Attempt:0,}" Jan 30 13:07:57.037219 containerd[1715]: time="2025-01-30T13:07:57.037170862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nz878,Uid:1565831f-d766-463d-9123-bbb867cb766e,Namespace:calico-system,Attempt:0,}" Jan 30 13:07:57.704847 kubelet[2603]: E0130 13:07:57.704793 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:07:57.787874 containerd[1715]: time="2025-01-30T13:07:57.787813789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:07:57.792848 containerd[1715]: time="2025-01-30T13:07:57.792811334Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:07:57.795256 containerd[1715]: time="2025-01-30T13:07:57.795209656Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 13:07:57.797811 containerd[1715]: time="2025-01-30T13:07:57.797759579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:07:57.803239 containerd[1715]: time="2025-01-30T13:07:57.803183527Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:07:57.807739 containerd[1715]: time="2025-01-30T13:07:57.807683467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:07:57.809082 containerd[1715]: time="2025-01-30T13:07:57.808523975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 777.052064ms" Jan 30 13:07:57.813407 containerd[1715]: time="2025-01-30T13:07:57.813370618Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 776.091855ms" Jan 30 13:07:57.844593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534226157.mount: Deactivated successfully. 
Jan 30 13:07:58.705566 kubelet[2603]: E0130 13:07:58.705481 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:07:58.823019 kubelet[2603]: E0130 13:07:58.822413 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480" Jan 30 13:07:58.839700 containerd[1715]: time="2025-01-30T13:07:58.838421705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:07:58.839700 containerd[1715]: time="2025-01-30T13:07:58.838490306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:07:58.839700 containerd[1715]: time="2025-01-30T13:07:58.838511406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:07:58.839700 containerd[1715]: time="2025-01-30T13:07:58.838602107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:07:58.840678 containerd[1715]: time="2025-01-30T13:07:58.836308786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:07:58.841234 containerd[1715]: time="2025-01-30T13:07:58.840665625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:07:58.841234 containerd[1715]: time="2025-01-30T13:07:58.841208730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:07:58.841548 containerd[1715]: time="2025-01-30T13:07:58.841502333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:07:59.398173 systemd[1]: Started cri-containerd-1a83c7706e8687467efe68dd6719766a0bc54f6caa01b31293849ea85ce1aaab.scope - libcontainer container 1a83c7706e8687467efe68dd6719766a0bc54f6caa01b31293849ea85ce1aaab. Jan 30 13:07:59.402415 systemd[1]: Started cri-containerd-e565535fb44ac07ffff50275710665ed526087ae7fc619aa514a9a9530b2554f.scope - libcontainer container e565535fb44ac07ffff50275710665ed526087ae7fc619aa514a9a9530b2554f. Jan 30 13:07:59.437416 containerd[1715]: time="2025-01-30T13:07:59.437373328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nz878,Uid:1565831f-d766-463d-9123-bbb867cb766e,Namespace:calico-system,Attempt:0,} returns sandbox id \"e565535fb44ac07ffff50275710665ed526087ae7fc619aa514a9a9530b2554f\"" Jan 30 13:07:59.440076 containerd[1715]: time="2025-01-30T13:07:59.440038950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g79vn,Uid:487674cf-6e61-4dce-8825-f6f927950acd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a83c7706e8687467efe68dd6719766a0bc54f6caa01b31293849ea85ce1aaab\"" Jan 30 13:07:59.441490 containerd[1715]: time="2025-01-30T13:07:59.441412761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:07:59.706827 kubelet[2603]: E0130 13:07:59.706651 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:00.597987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571681053.mount: Deactivated successfully. 
Jan 30 13:08:00.706949 kubelet[2603]: E0130 13:08:00.706906 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:00.823170 kubelet[2603]: E0130 13:08:00.823126 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480" Jan 30 13:08:00.889823 containerd[1715]: time="2025-01-30T13:08:00.889689912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:00.891981 containerd[1715]: time="2025-01-30T13:08:00.891920330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:08:00.894581 containerd[1715]: time="2025-01-30T13:08:00.894524851Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:00.899135 containerd[1715]: time="2025-01-30T13:08:00.899060887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:00.900213 containerd[1715]: time="2025-01-30T13:08:00.899634892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", 
size \"6855165\" in 1.458187431s" Jan 30 13:08:00.900213 containerd[1715]: time="2025-01-30T13:08:00.899675192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:08:00.901274 containerd[1715]: time="2025-01-30T13:08:00.900830101Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:08:00.902572 containerd[1715]: time="2025-01-30T13:08:00.902543015Z" level=info msg="CreateContainer within sandbox \"e565535fb44ac07ffff50275710665ed526087ae7fc619aa514a9a9530b2554f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:08:00.937594 containerd[1715]: time="2025-01-30T13:08:00.937548494Z" level=info msg="CreateContainer within sandbox \"e565535fb44ac07ffff50275710665ed526087ae7fc619aa514a9a9530b2554f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b126da6f13737c6ae9309741c6738c73ff9746b850ebe04b7e7e1e370a95061d\"" Jan 30 13:08:00.938396 containerd[1715]: time="2025-01-30T13:08:00.938340500Z" level=info msg="StartContainer for \"b126da6f13737c6ae9309741c6738c73ff9746b850ebe04b7e7e1e370a95061d\"" Jan 30 13:08:00.975174 systemd[1]: Started cri-containerd-b126da6f13737c6ae9309741c6738c73ff9746b850ebe04b7e7e1e370a95061d.scope - libcontainer container b126da6f13737c6ae9309741c6738c73ff9746b850ebe04b7e7e1e370a95061d. Jan 30 13:08:01.009486 containerd[1715]: time="2025-01-30T13:08:01.009423267Z" level=info msg="StartContainer for \"b126da6f13737c6ae9309741c6738c73ff9746b850ebe04b7e7e1e370a95061d\" returns successfully" Jan 30 13:08:01.019074 systemd[1]: cri-containerd-b126da6f13737c6ae9309741c6738c73ff9746b850ebe04b7e7e1e370a95061d.scope: Deactivated successfully. 
Jan 30 13:08:01.267233 containerd[1715]: time="2025-01-30T13:08:01.267027522Z" level=info msg="shim disconnected" id=b126da6f13737c6ae9309741c6738c73ff9746b850ebe04b7e7e1e370a95061d namespace=k8s.io
Jan 30 13:08:01.267233 containerd[1715]: time="2025-01-30T13:08:01.267092723Z" level=warning msg="cleaning up after shim disconnected" id=b126da6f13737c6ae9309741c6738c73ff9746b850ebe04b7e7e1e370a95061d namespace=k8s.io
Jan 30 13:08:01.267233 containerd[1715]: time="2025-01-30T13:08:01.267105023Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:08:01.558828 systemd[1]: run-containerd-runc-k8s.io-b126da6f13737c6ae9309741c6738c73ff9746b850ebe04b7e7e1e370a95061d-runc.dHkcYw.mount: Deactivated successfully.
Jan 30 13:08:01.558965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b126da6f13737c6ae9309741c6738c73ff9746b850ebe04b7e7e1e370a95061d-rootfs.mount: Deactivated successfully.
Jan 30 13:08:01.708118 kubelet[2603]: E0130 13:08:01.708065 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:02.402864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910118512.mount: Deactivated successfully.
Jan 30 13:08:02.709719 kubelet[2603]: E0130 13:08:02.709011 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:02.824115 kubelet[2603]: E0130 13:08:02.823327 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480"
Jan 30 13:08:02.878331 containerd[1715]: time="2025-01-30T13:08:02.878275274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:08:02.880503 containerd[1715]: time="2025-01-30T13:08:02.880444991Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345"
Jan 30 13:08:02.884043 containerd[1715]: time="2025-01-30T13:08:02.883975519Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:08:02.887224 containerd[1715]: time="2025-01-30T13:08:02.887194245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:08:02.887953 containerd[1715]: time="2025-01-30T13:08:02.887810350Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.986947549s"
Jan 30 13:08:02.887953 containerd[1715]: time="2025-01-30T13:08:02.887845850Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 30 13:08:02.889419 containerd[1715]: time="2025-01-30T13:08:02.889202661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 30 13:08:02.890485 containerd[1715]: time="2025-01-30T13:08:02.890450471Z" level=info msg="CreateContainer within sandbox \"1a83c7706e8687467efe68dd6719766a0bc54f6caa01b31293849ea85ce1aaab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:08:02.932097 containerd[1715]: time="2025-01-30T13:08:02.932048103Z" level=info msg="CreateContainer within sandbox \"1a83c7706e8687467efe68dd6719766a0bc54f6caa01b31293849ea85ce1aaab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1c01e7dba3c271499b0918cc72fe082714cf0c0e04b6b3c806e569e2506c1f4\""
Jan 30 13:08:02.932706 containerd[1715]: time="2025-01-30T13:08:02.932634807Z" level=info msg="StartContainer for \"c1c01e7dba3c271499b0918cc72fe082714cf0c0e04b6b3c806e569e2506c1f4\""
Jan 30 13:08:02.969318 systemd[1]: Started cri-containerd-c1c01e7dba3c271499b0918cc72fe082714cf0c0e04b6b3c806e569e2506c1f4.scope - libcontainer container c1c01e7dba3c271499b0918cc72fe082714cf0c0e04b6b3c806e569e2506c1f4.
Jan 30 13:08:03.002341 containerd[1715]: time="2025-01-30T13:08:03.002224662Z" level=info msg="StartContainer for \"c1c01e7dba3c271499b0918cc72fe082714cf0c0e04b6b3c806e569e2506c1f4\" returns successfully"
Jan 30 13:08:03.709789 kubelet[2603]: E0130 13:08:03.709738 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:03.855183 kubelet[2603]: I0130 13:08:03.855129 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g79vn" podStartSLOduration=6.407350765 podStartE2EDuration="9.855109565s" podCreationTimestamp="2025-01-30 13:07:54 +0000 UTC" firstStartedPulling="2025-01-30 13:07:59.441128658 +0000 UTC m=+5.668472854" lastFinishedPulling="2025-01-30 13:08:02.888887458 +0000 UTC m=+9.116231654" observedRunningTime="2025-01-30 13:08:03.85448256 +0000 UTC m=+10.081826656" watchObservedRunningTime="2025-01-30 13:08:03.855109565 +0000 UTC m=+10.082453761"
Jan 30 13:08:04.710647 kubelet[2603]: E0130 13:08:04.710593 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:04.823890 kubelet[2603]: E0130 13:08:04.823847 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480"
Jan 30 13:08:05.711329 kubelet[2603]: E0130 13:08:05.711261 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:06.712458 kubelet[2603]: E0130 13:08:06.712407 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:06.801764 containerd[1715]: time="2025-01-30T13:08:06.801702897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:08:06.803854 containerd[1715]: time="2025-01-30T13:08:06.803761116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 30 13:08:06.806446 containerd[1715]: time="2025-01-30T13:08:06.806379840Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:08:06.811016 containerd[1715]: time="2025-01-30T13:08:06.810835880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:08:06.812035 containerd[1715]: time="2025-01-30T13:08:06.811475886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.922235824s"
Jan 30 13:08:06.812035 containerd[1715]: time="2025-01-30T13:08:06.811511586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 30 13:08:06.815304 containerd[1715]: time="2025-01-30T13:08:06.815215619Z" level=info msg="CreateContainer within sandbox \"e565535fb44ac07ffff50275710665ed526087ae7fc619aa514a9a9530b2554f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 30 13:08:06.822924 kubelet[2603]: E0130 13:08:06.822531 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480"
Jan 30 13:08:06.849620 containerd[1715]: time="2025-01-30T13:08:06.849575529Z" level=info msg="CreateContainer within sandbox \"e565535fb44ac07ffff50275710665ed526087ae7fc619aa514a9a9530b2554f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1c394e049a369826d2e92ee1473805cb6d348b6c234aaa2a6126ca9500a7acda\""
Jan 30 13:08:06.850014 containerd[1715]: time="2025-01-30T13:08:06.849972833Z" level=info msg="StartContainer for \"1c394e049a369826d2e92ee1473805cb6d348b6c234aaa2a6126ca9500a7acda\""
Jan 30 13:08:06.887163 systemd[1]: Started cri-containerd-1c394e049a369826d2e92ee1473805cb6d348b6c234aaa2a6126ca9500a7acda.scope - libcontainer container 1c394e049a369826d2e92ee1473805cb6d348b6c234aaa2a6126ca9500a7acda.
Jan 30 13:08:06.972776 containerd[1715]: time="2025-01-30T13:08:06.972598140Z" level=info msg="StartContainer for \"1c394e049a369826d2e92ee1473805cb6d348b6c234aaa2a6126ca9500a7acda\" returns successfully"
Jan 30 13:08:07.713201 kubelet[2603]: E0130 13:08:07.713134 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:08.333690 containerd[1715]: time="2025-01-30T13:08:08.333639632Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:08:08.335668 systemd[1]: cri-containerd-1c394e049a369826d2e92ee1473805cb6d348b6c234aaa2a6126ca9500a7acda.scope: Deactivated successfully.
Jan 30 13:08:08.356467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c394e049a369826d2e92ee1473805cb6d348b6c234aaa2a6126ca9500a7acda-rootfs.mount: Deactivated successfully.
Jan 30 13:08:08.360951 kubelet[2603]: I0130 13:08:08.360924 2603 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 13:08:08.713489 kubelet[2603]: E0130 13:08:08.713337 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:08.829611 systemd[1]: Created slice kubepods-besteffort-pod088dc3c1_e9d0_46ba_ae12_4f7130d43480.slice - libcontainer container kubepods-besteffort-pod088dc3c1_e9d0_46ba_ae12_4f7130d43480.slice.
Jan 30 13:08:08.831831 containerd[1715]: time="2025-01-30T13:08:08.831799244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:0,}"
Jan 30 13:08:09.714413 kubelet[2603]: E0130 13:08:09.714355 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:10.438152 kubelet[2603]: I0130 13:08:10.438103 2603 topology_manager.go:215] "Topology Admit Handler" podUID="ebc4461e-e7a5-4263-90a0-8506b558b6e6" podNamespace="default" podName="nginx-deployment-85f456d6dd-cfjvv"
Jan 30 13:08:10.444403 systemd[1]: Created slice kubepods-besteffort-podebc4461e_e7a5_4263_90a0_8506b558b6e6.slice - libcontainer container kubepods-besteffort-podebc4461e_e7a5_4263_90a0_8506b558b6e6.slice.
Jan 30 13:08:10.526619 kubelet[2603]: I0130 13:08:10.526562 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zrm4\" (UniqueName: \"kubernetes.io/projected/ebc4461e-e7a5-4263-90a0-8506b558b6e6-kube-api-access-8zrm4\") pod \"nginx-deployment-85f456d6dd-cfjvv\" (UID: \"ebc4461e-e7a5-4263-90a0-8506b558b6e6\") " pod="default/nginx-deployment-85f456d6dd-cfjvv"
Jan 30 13:08:10.685287 containerd[1715]: time="2025-01-30T13:08:10.685208296Z" level=info msg="shim disconnected" id=1c394e049a369826d2e92ee1473805cb6d348b6c234aaa2a6126ca9500a7acda namespace=k8s.io
Jan 30 13:08:10.685287 containerd[1715]: time="2025-01-30T13:08:10.685281397Z" level=warning msg="cleaning up after shim disconnected" id=1c394e049a369826d2e92ee1473805cb6d348b6c234aaa2a6126ca9500a7acda namespace=k8s.io
Jan 30 13:08:10.685287 containerd[1715]: time="2025-01-30T13:08:10.685293297Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:08:10.716288 kubelet[2603]: E0130 13:08:10.715175 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:10.748828 containerd[1715]: time="2025-01-30T13:08:10.748785795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:0,}"
Jan 30 13:08:10.755800 containerd[1715]: time="2025-01-30T13:08:10.755755750Z" level=error msg="Failed to destroy network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:10.759272 containerd[1715]: time="2025-01-30T13:08:10.757518664Z" level=error msg="encountered an error cleaning up failed sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:10.759272 containerd[1715]: time="2025-01-30T13:08:10.757612865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:10.759580 kubelet[2603]: E0130 13:08:10.759510 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:10.760414 kubelet[2603]: E0130 13:08:10.760330 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d"
Jan 30 13:08:10.760553 kubelet[2603]: E0130 13:08:10.760531 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d"
Jan 30 13:08:10.761452 kubelet[2603]: E0130 13:08:10.760666 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480"
Jan 30 13:08:10.762604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462-shm.mount: Deactivated successfully.
Jan 30 13:08:10.836037 containerd[1715]: time="2025-01-30T13:08:10.835969580Z" level=error msg="Failed to destroy network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:10.836366 containerd[1715]: time="2025-01-30T13:08:10.836315782Z" level=error msg="encountered an error cleaning up failed sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:10.836491 containerd[1715]: time="2025-01-30T13:08:10.836402883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:10.836665 kubelet[2603]: E0130 13:08:10.836620 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:10.836967 kubelet[2603]: E0130 13:08:10.836689 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv"
Jan 30 13:08:10.836967 kubelet[2603]: E0130 13:08:10.836714 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv"
Jan 30 13:08:10.837374 kubelet[2603]: E0130 13:08:10.837335 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-cfjvv" podUID="ebc4461e-e7a5-4263-90a0-8506b558b6e6"
Jan 30 13:08:10.861545 containerd[1715]: time="2025-01-30T13:08:10.860709274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 30 13:08:10.861799 kubelet[2603]: I0130 13:08:10.861130 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77"
Jan 30 13:08:10.862479 containerd[1715]: time="2025-01-30T13:08:10.862097285Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\""
Jan 30 13:08:10.862479 containerd[1715]: time="2025-01-30T13:08:10.862324487Z" level=info msg="Ensure that sandbox 6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77 in task-service has been cleanup successfully"
Jan 30 13:08:10.862616 kubelet[2603]: I0130 13:08:10.862132 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462"
Jan 30 13:08:10.862782 containerd[1715]: time="2025-01-30T13:08:10.862748690Z" level=info msg="TearDown network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" successfully"
Jan 30 13:08:10.862933 containerd[1715]: time="2025-01-30T13:08:10.862865291Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" returns successfully"
Jan 30 13:08:10.863137 containerd[1715]: time="2025-01-30T13:08:10.862828091Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\""
Jan 30 13:08:10.863643 containerd[1715]: time="2025-01-30T13:08:10.863612597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:1,}"
Jan 30 13:08:10.864234 containerd[1715]: time="2025-01-30T13:08:10.863940899Z" level=info msg="Ensure that sandbox c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462 in task-service has been cleanup successfully"
Jan 30 13:08:10.865052 containerd[1715]: time="2025-01-30T13:08:10.864358303Z" level=info msg="TearDown network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" successfully"
Jan 30 13:08:10.865052 containerd[1715]: time="2025-01-30T13:08:10.864382703Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" returns successfully"
Jan 30 13:08:10.865317 containerd[1715]: time="2025-01-30T13:08:10.865292010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:1,}"
Jan 30 13:08:11.007042 containerd[1715]: time="2025-01-30T13:08:11.005710413Z" level=error msg="Failed to destroy network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:11.007042 containerd[1715]: time="2025-01-30T13:08:11.006252117Z" level=error msg="Failed to destroy network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:11.007042 containerd[1715]: time="2025-01-30T13:08:11.006665220Z" level=error msg="encountered an error cleaning up failed sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:11.007042 containerd[1715]: time="2025-01-30T13:08:11.006829521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:11.007042 containerd[1715]: time="2025-01-30T13:08:11.006918822Z" level=error msg="encountered an error cleaning up failed sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:11.007509 containerd[1715]: time="2025-01-30T13:08:11.007041523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:11.007708 kubelet[2603]: E0130 13:08:11.007338 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:11.007708 kubelet[2603]: E0130 13:08:11.007410 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv"
Jan 30 13:08:11.007708 kubelet[2603]: E0130 13:08:11.007440 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv"
Jan 30 13:08:11.007852 kubelet[2603]: E0130 13:08:11.007494 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-cfjvv" podUID="ebc4461e-e7a5-4263-90a0-8506b558b6e6"
Jan 30 13:08:11.007852 kubelet[2603]: E0130 13:08:11.007338 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:11.007852 kubelet[2603]: E0130 13:08:11.007576 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d"
Jan 30 13:08:11.008442 kubelet[2603]: E0130 13:08:11.007600 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d"
Jan 30 13:08:11.008442 kubelet[2603]: E0130 13:08:11.008350 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480"
Jan 30 13:08:11.640750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77-shm.mount: Deactivated successfully.
Jan 30 13:08:11.640864 systemd[1]: run-netns-cni\x2d66989cad\x2dc864\x2de437\x2da276\x2d475fb352ff91.mount: Deactivated successfully.
Jan 30 13:08:11.715417 kubelet[2603]: E0130 13:08:11.715359 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:11.866050 kubelet[2603]: I0130 13:08:11.866012 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6"
Jan 30 13:08:11.869156 containerd[1715]: time="2025-01-30T13:08:11.866737273Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\""
Jan 30 13:08:11.869156 containerd[1715]: time="2025-01-30T13:08:11.866975775Z" level=info msg="Ensure that sandbox 2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6 in task-service has been cleanup successfully"
Jan 30 13:08:11.869739 containerd[1715]: time="2025-01-30T13:08:11.869639296Z" level=info msg="TearDown network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" successfully"
Jan 30 13:08:11.869739 containerd[1715]: time="2025-01-30T13:08:11.869665396Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" returns successfully"
Jan 30 13:08:11.869858 kubelet[2603]: I0130 13:08:11.869780 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d"
Jan 30 13:08:11.871027 containerd[1715]: time="2025-01-30T13:08:11.870248801Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\""
Jan 30 13:08:11.871027 containerd[1715]: time="2025-01-30T13:08:11.870449602Z" level=info msg="Ensure that sandbox 22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d in task-service has been cleanup successfully"
Jan 30 13:08:11.871027 containerd[1715]: time="2025-01-30T13:08:11.870463202Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\""
Jan 30 13:08:11.871027 containerd[1715]: time="2025-01-30T13:08:11.870570703Z" level=info msg="TearDown network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" successfully"
Jan 30 13:08:11.871027 containerd[1715]: time="2025-01-30T13:08:11.870586803Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" returns successfully"
Jan 30 13:08:11.870909 systemd[1]: run-netns-cni\x2d58324120\x2d71ba\x2d9f8b\x2dc9a1\x2db2187b52b125.mount: Deactivated successfully.
Jan 30 13:08:11.873023 containerd[1715]: time="2025-01-30T13:08:11.871377509Z" level=info msg="TearDown network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" successfully"
Jan 30 13:08:11.873023 containerd[1715]: time="2025-01-30T13:08:11.871399910Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" returns successfully"
Jan 30 13:08:11.873023 containerd[1715]: time="2025-01-30T13:08:11.872863321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:2,}"
Jan 30 13:08:11.874881 containerd[1715]: time="2025-01-30T13:08:11.874304232Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\""
Jan 30 13:08:11.874881 containerd[1715]: time="2025-01-30T13:08:11.874395933Z" level=info msg="TearDown network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" successfully"
Jan 30 13:08:11.874881 containerd[1715]: time="2025-01-30T13:08:11.874444333Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" returns successfully"
Jan 30 13:08:11.875460 containerd[1715]: time="2025-01-30T13:08:11.875434841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:2,}"
Jan 30 13:08:11.875490 systemd[1]: run-netns-cni\x2d48a6de3b\x2dba36\x2d6d37\x2dbe39\x2de66c7c6a7bbc.mount: Deactivated successfully.
Jan 30 13:08:12.061427 containerd[1715]: time="2025-01-30T13:08:12.061176500Z" level=error msg="Failed to destroy network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:12.062038 containerd[1715]: time="2025-01-30T13:08:12.061808505Z" level=error msg="encountered an error cleaning up failed sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:12.062038 containerd[1715]: time="2025-01-30T13:08:12.061898705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:12.062374 kubelet[2603]: E0130 13:08:12.062324 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox
\"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:12.062560 kubelet[2603]: E0130 13:08:12.062390 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d" Jan 30 13:08:12.062560 kubelet[2603]: E0130 13:08:12.062420 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d" Jan 30 13:08:12.062560 kubelet[2603]: E0130 13:08:12.062475 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pp95d" 
podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480" Jan 30 13:08:12.070380 containerd[1715]: time="2025-01-30T13:08:12.070313971Z" level=error msg="Failed to destroy network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:12.070795 containerd[1715]: time="2025-01-30T13:08:12.070766575Z" level=error msg="encountered an error cleaning up failed sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:12.070956 containerd[1715]: time="2025-01-30T13:08:12.070929176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:12.071311 kubelet[2603]: E0130 13:08:12.071276 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:12.071398 kubelet[2603]: E0130 13:08:12.071338 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv" Jan 30 13:08:12.071398 kubelet[2603]: E0130 13:08:12.071363 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv" Jan 30 13:08:12.071552 kubelet[2603]: E0130 13:08:12.071417 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-cfjvv" podUID="ebc4461e-e7a5-4263-90a0-8506b558b6e6" Jan 30 13:08:12.639803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5-shm.mount: Deactivated successfully. 
Jan 30 13:08:12.640158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353-shm.mount: Deactivated successfully. Jan 30 13:08:12.716076 kubelet[2603]: E0130 13:08:12.715989 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:12.873947 kubelet[2603]: I0130 13:08:12.873193 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353" Jan 30 13:08:12.874417 containerd[1715]: time="2025-01-30T13:08:12.873980981Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\"" Jan 30 13:08:12.874417 containerd[1715]: time="2025-01-30T13:08:12.874303684Z" level=info msg="Ensure that sandbox fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353 in task-service has been cleanup successfully" Jan 30 13:08:12.874768 containerd[1715]: time="2025-01-30T13:08:12.874520186Z" level=info msg="TearDown network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" successfully" Jan 30 13:08:12.874768 containerd[1715]: time="2025-01-30T13:08:12.874540786Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" returns successfully" Jan 30 13:08:12.877056 containerd[1715]: time="2025-01-30T13:08:12.874952289Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\"" Jan 30 13:08:12.877056 containerd[1715]: time="2025-01-30T13:08:12.875073690Z" level=info msg="TearDown network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" successfully" Jan 30 13:08:12.877056 containerd[1715]: time="2025-01-30T13:08:12.875088090Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" returns 
successfully" Jan 30 13:08:12.877056 containerd[1715]: time="2025-01-30T13:08:12.875490793Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\"" Jan 30 13:08:12.877056 containerd[1715]: time="2025-01-30T13:08:12.875575794Z" level=info msg="TearDown network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" successfully" Jan 30 13:08:12.877056 containerd[1715]: time="2025-01-30T13:08:12.875589294Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" returns successfully" Jan 30 13:08:12.876929 systemd[1]: run-netns-cni\x2da6e39393\x2ddcec\x2d930c\x2d6376\x2da5f88fbb33ef.mount: Deactivated successfully. Jan 30 13:08:12.878092 kubelet[2603]: I0130 13:08:12.877513 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5" Jan 30 13:08:12.878355 containerd[1715]: time="2025-01-30T13:08:12.877804811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:3,}" Jan 30 13:08:12.878480 containerd[1715]: time="2025-01-30T13:08:12.878455617Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\"" Jan 30 13:08:12.878962 containerd[1715]: time="2025-01-30T13:08:12.878737619Z" level=info msg="Ensure that sandbox a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5 in task-service has been cleanup successfully" Jan 30 13:08:12.882019 containerd[1715]: time="2025-01-30T13:08:12.879553225Z" level=info msg="TearDown network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" successfully" Jan 30 13:08:12.882019 containerd[1715]: time="2025-01-30T13:08:12.879603626Z" level=info msg="StopPodSandbox for 
\"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" returns successfully" Jan 30 13:08:12.882019 containerd[1715]: time="2025-01-30T13:08:12.880090729Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\"" Jan 30 13:08:12.882019 containerd[1715]: time="2025-01-30T13:08:12.880211230Z" level=info msg="TearDown network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" successfully" Jan 30 13:08:12.882019 containerd[1715]: time="2025-01-30T13:08:12.880333531Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" returns successfully" Jan 30 13:08:12.882019 containerd[1715]: time="2025-01-30T13:08:12.880839535Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\"" Jan 30 13:08:12.882019 containerd[1715]: time="2025-01-30T13:08:12.880969836Z" level=info msg="TearDown network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" successfully" Jan 30 13:08:12.882019 containerd[1715]: time="2025-01-30T13:08:12.880983936Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" returns successfully" Jan 30 13:08:12.882452 containerd[1715]: time="2025-01-30T13:08:12.882385447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:3,}" Jan 30 13:08:12.883764 systemd[1]: run-netns-cni\x2d1affe5a4\x2dd7b3\x2d0522\x2d329d\x2d0dcd8c6233cc.mount: Deactivated successfully. 
Jan 30 13:08:13.716689 kubelet[2603]: E0130 13:08:13.716593 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:13.872409 containerd[1715]: time="2025-01-30T13:08:13.872355199Z" level=error msg="Failed to destroy network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:13.875147 containerd[1715]: time="2025-01-30T13:08:13.873218207Z" level=error msg="encountered an error cleaning up failed sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:13.877301 containerd[1715]: time="2025-01-30T13:08:13.875708729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:13.877414 kubelet[2603]: E0130 13:08:13.875978 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 30 13:08:13.877414 kubelet[2603]: E0130 13:08:13.876653 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d" Jan 30 13:08:13.877414 kubelet[2603]: E0130 13:08:13.876683 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d" Jan 30 13:08:13.876410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76-shm.mount: Deactivated successfully. 
Jan 30 13:08:13.879233 kubelet[2603]: E0130 13:08:13.876752 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480" Jan 30 13:08:13.886816 kubelet[2603]: I0130 13:08:13.885391 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76" Jan 30 13:08:13.886974 containerd[1715]: time="2025-01-30T13:08:13.886735628Z" level=error msg="Failed to destroy network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:13.887132 containerd[1715]: time="2025-01-30T13:08:13.887093931Z" level=error msg="encountered an error cleaning up failed sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:13.887199 containerd[1715]: time="2025-01-30T13:08:13.887173532Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:13.887353 kubelet[2603]: E0130 13:08:13.887302 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:13.887429 kubelet[2603]: E0130 13:08:13.887357 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv" Jan 30 13:08:13.887429 kubelet[2603]: E0130 13:08:13.887382 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv" Jan 30 13:08:13.887516 kubelet[2603]: E0130 13:08:13.887426 2603 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-cfjvv" podUID="ebc4461e-e7a5-4263-90a0-8506b558b6e6" Jan 30 13:08:13.887808 containerd[1715]: time="2025-01-30T13:08:13.887694036Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\"" Jan 30 13:08:13.888124 containerd[1715]: time="2025-01-30T13:08:13.887927438Z" level=info msg="Ensure that sandbox 70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76 in task-service has been cleanup successfully" Jan 30 13:08:13.888124 containerd[1715]: time="2025-01-30T13:08:13.888114940Z" level=info msg="TearDown network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" successfully" Jan 30 13:08:13.888259 containerd[1715]: time="2025-01-30T13:08:13.888131840Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" returns successfully" Jan 30 13:08:13.888487 containerd[1715]: time="2025-01-30T13:08:13.888434743Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\"" Jan 30 13:08:13.888551 containerd[1715]: time="2025-01-30T13:08:13.888528444Z" level=info msg="TearDown network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" successfully" Jan 30 13:08:13.888551 containerd[1715]: time="2025-01-30T13:08:13.888542844Z" level=info 
msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" returns successfully" Jan 30 13:08:13.888959 containerd[1715]: time="2025-01-30T13:08:13.888779146Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\"" Jan 30 13:08:13.888959 containerd[1715]: time="2025-01-30T13:08:13.888861047Z" level=info msg="TearDown network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" successfully" Jan 30 13:08:13.888959 containerd[1715]: time="2025-01-30T13:08:13.888873547Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" returns successfully" Jan 30 13:08:13.889457 containerd[1715]: time="2025-01-30T13:08:13.889210150Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\"" Jan 30 13:08:13.889457 containerd[1715]: time="2025-01-30T13:08:13.889290750Z" level=info msg="TearDown network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" successfully" Jan 30 13:08:13.889457 containerd[1715]: time="2025-01-30T13:08:13.889302251Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" returns successfully" Jan 30 13:08:13.890143 containerd[1715]: time="2025-01-30T13:08:13.889715254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:4,}" Jan 30 13:08:14.001039 containerd[1715]: time="2025-01-30T13:08:14.000800546Z" level=error msg="Failed to destroy network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:14.001241 containerd[1715]: 
time="2025-01-30T13:08:14.001208950Z" level=error msg="encountered an error cleaning up failed sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:14.001621 containerd[1715]: time="2025-01-30T13:08:14.001288250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:14.002199 kubelet[2603]: E0130 13:08:14.002110 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:14.002291 kubelet[2603]: E0130 13:08:14.002192 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d" Jan 30 13:08:14.002291 kubelet[2603]: E0130 13:08:14.002231 2603 kuberuntime_manager.go:1166] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d" Jan 30 13:08:14.002384 kubelet[2603]: E0130 13:08:14.002289 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480" Jan 30 13:08:14.692233 systemd[1]: run-netns-cni\x2da3ab23c5\x2dc1a5\x2dc6b9\x2d26be\x2d98efeceee1d7.mount: Deactivated successfully. Jan 30 13:08:14.692349 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d-shm.mount: Deactivated successfully. 
Jan 30 13:08:14.701407 kubelet[2603]: E0130 13:08:14.701366 2603 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:14.717912 kubelet[2603]: E0130 13:08:14.717689 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:14.890046 kubelet[2603]: I0130 13:08:14.890015 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d"
Jan 30 13:08:14.891302 containerd[1715]: time="2025-01-30T13:08:14.890830092Z" level=info msg="StopPodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\""
Jan 30 13:08:14.891302 containerd[1715]: time="2025-01-30T13:08:14.891161095Z" level=info msg="Ensure that sandbox d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d in task-service has been cleanup successfully"
Jan 30 13:08:14.894253 containerd[1715]: time="2025-01-30T13:08:14.894139522Z" level=info msg="TearDown network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" successfully"
Jan 30 13:08:14.894253 containerd[1715]: time="2025-01-30T13:08:14.894191222Z" level=info msg="StopPodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" returns successfully"
Jan 30 13:08:14.894862 containerd[1715]: time="2025-01-30T13:08:14.894765328Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\""
Jan 30 13:08:14.894862 containerd[1715]: time="2025-01-30T13:08:14.894855828Z" level=info msg="TearDown network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" successfully"
Jan 30 13:08:14.894978 containerd[1715]: time="2025-01-30T13:08:14.894870529Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" returns successfully"
Jan 30 13:08:14.895389 systemd[1]: run-netns-cni\x2d1c0fdf67\x2da1c3\x2d4949\x2d28e4\x2d1d7b9b806b37.mount: Deactivated successfully.
Jan 30 13:08:14.896210 containerd[1715]: time="2025-01-30T13:08:14.896173340Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\""
Jan 30 13:08:14.896500 containerd[1715]: time="2025-01-30T13:08:14.896441443Z" level=info msg="TearDown network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" successfully"
Jan 30 13:08:14.896500 containerd[1715]: time="2025-01-30T13:08:14.896460343Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" returns successfully"
Jan 30 13:08:14.897954 containerd[1715]: time="2025-01-30T13:08:14.897932456Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\""
Jan 30 13:08:14.898274 containerd[1715]: time="2025-01-30T13:08:14.898087357Z" level=info msg="TearDown network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" successfully"
Jan 30 13:08:14.898274 containerd[1715]: time="2025-01-30T13:08:14.898103357Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" returns successfully"
Jan 30 13:08:14.898889 containerd[1715]: time="2025-01-30T13:08:14.898864464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:4,}"
Jan 30 13:08:14.900415 kubelet[2603]: I0130 13:08:14.900389 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886"
Jan 30 13:08:14.901427 containerd[1715]: time="2025-01-30T13:08:14.901067584Z" level=info msg="StopPodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\""
Jan 30 13:08:14.901427 containerd[1715]: time="2025-01-30T13:08:14.901305886Z" level=info msg="Ensure that sandbox f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886 in task-service has been cleanup successfully"
Jan 30 13:08:14.901788 containerd[1715]: time="2025-01-30T13:08:14.901753990Z" level=info msg="TearDown network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" successfully"
Jan 30 13:08:14.901893 containerd[1715]: time="2025-01-30T13:08:14.901877391Z" level=info msg="StopPodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" returns successfully"
Jan 30 13:08:14.904216 systemd[1]: run-netns-cni\x2d47d8cab1\x2d1634\x2df2db\x2de566\x2dc0fae48f639e.mount: Deactivated successfully.
Jan 30 13:08:14.904450 containerd[1715]: time="2025-01-30T13:08:14.904429114Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\""
Jan 30 13:08:14.904983 containerd[1715]: time="2025-01-30T13:08:14.904944618Z" level=info msg="TearDown network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" successfully"
Jan 30 13:08:14.905620 containerd[1715]: time="2025-01-30T13:08:14.904966619Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" returns successfully"
Jan 30 13:08:14.906566 containerd[1715]: time="2025-01-30T13:08:14.906464232Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\""
Jan 30 13:08:14.906857 containerd[1715]: time="2025-01-30T13:08:14.906743235Z" level=info msg="TearDown network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" successfully"
Jan 30 13:08:14.906857 containerd[1715]: time="2025-01-30T13:08:14.906763735Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" returns successfully"
Jan 30 13:08:14.908859 containerd[1715]: time="2025-01-30T13:08:14.908835153Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\""
Jan 30 13:08:14.908946 containerd[1715]: time="2025-01-30T13:08:14.908917354Z" level=info msg="TearDown network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" successfully"
Jan 30 13:08:14.908946 containerd[1715]: time="2025-01-30T13:08:14.908931754Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" returns successfully"
Jan 30 13:08:14.909782 containerd[1715]: time="2025-01-30T13:08:14.909613260Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\""
Jan 30 13:08:14.909782 containerd[1715]: time="2025-01-30T13:08:14.909698461Z" level=info msg="TearDown network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" successfully"
Jan 30 13:08:14.909782 containerd[1715]: time="2025-01-30T13:08:14.909711261Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" returns successfully"
Jan 30 13:08:14.910502 containerd[1715]: time="2025-01-30T13:08:14.910174065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:5,}"
Jan 30 13:08:15.072212 containerd[1715]: time="2025-01-30T13:08:15.071940509Z" level=error msg="Failed to destroy network for sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:15.074855 containerd[1715]: time="2025-01-30T13:08:15.074648434Z" level=error msg="encountered an error cleaning up failed sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:15.075097 containerd[1715]: time="2025-01-30T13:08:15.075066937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:15.076073 kubelet[2603]: E0130 13:08:15.075960 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:15.076869 kubelet[2603]: E0130 13:08:15.076230 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d"
Jan 30 13:08:15.076869 kubelet[2603]: E0130 13:08:15.076269 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d"
Jan 30 13:08:15.076869 kubelet[2603]: E0130 13:08:15.076332 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480"
Jan 30 13:08:15.090001 containerd[1715]: time="2025-01-30T13:08:15.089945570Z" level=error msg="Failed to destroy network for sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:15.090702 containerd[1715]: time="2025-01-30T13:08:15.090670777Z" level=error msg="encountered an error cleaning up failed sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:15.090890 containerd[1715]: time="2025-01-30T13:08:15.090790878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:15.091905 kubelet[2603]: E0130 13:08:15.091591 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:15.091905 kubelet[2603]: E0130 13:08:15.091651 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv"
Jan 30 13:08:15.091905 kubelet[2603]: E0130 13:08:15.091679 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv"
Jan 30 13:08:15.092158 kubelet[2603]: E0130 13:08:15.091731 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-cfjvv" podUID="ebc4461e-e7a5-4263-90a0-8506b558b6e6"
Jan 30 13:08:15.691488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44-shm.mount: Deactivated successfully.
Jan 30 13:08:15.691803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a-shm.mount: Deactivated successfully.
Jan 30 13:08:15.718679 kubelet[2603]: E0130 13:08:15.718635 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:15.906037 kubelet[2603]: I0130 13:08:15.905604 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a"
Jan 30 13:08:15.906478 containerd[1715]: time="2025-01-30T13:08:15.906143262Z" level=info msg="StopPodSandbox for \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\""
Jan 30 13:08:15.906478 containerd[1715]: time="2025-01-30T13:08:15.906402164Z" level=info msg="Ensure that sandbox 1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a in task-service has been cleanup successfully"
Jan 30 13:08:15.908870 systemd[1]: run-netns-cni\x2d41efbd14\x2d93eb\x2d82cf\x2d55b6\x2d784fcd5d0d6c.mount: Deactivated successfully.
Jan 30 13:08:15.909929 containerd[1715]: time="2025-01-30T13:08:15.909688087Z" level=info msg="TearDown network for sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\" successfully"
Jan 30 13:08:15.909929 containerd[1715]: time="2025-01-30T13:08:15.909712887Z" level=info msg="StopPodSandbox for \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\" returns successfully"
Jan 30 13:08:15.911187 containerd[1715]: time="2025-01-30T13:08:15.910673994Z" level=info msg="StopPodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\""
Jan 30 13:08:15.911187 containerd[1715]: time="2025-01-30T13:08:15.910769794Z" level=info msg="TearDown network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" successfully"
Jan 30 13:08:15.911187 containerd[1715]: time="2025-01-30T13:08:15.910783894Z" level=info msg="StopPodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" returns successfully"
Jan 30 13:08:15.911348 containerd[1715]: time="2025-01-30T13:08:15.911297498Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\""
Jan 30 13:08:15.912049 containerd[1715]: time="2025-01-30T13:08:15.911390799Z" level=info msg="TearDown network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" successfully"
Jan 30 13:08:15.912049 containerd[1715]: time="2025-01-30T13:08:15.911410199Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" returns successfully"
Jan 30 13:08:15.912774 containerd[1715]: time="2025-01-30T13:08:15.912751908Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\""
Jan 30 13:08:15.912942 containerd[1715]: time="2025-01-30T13:08:15.912926409Z" level=info msg="TearDown network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" successfully"
Jan 30 13:08:15.913052 containerd[1715]: time="2025-01-30T13:08:15.913026910Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" returns successfully"
Jan 30 13:08:15.913494 containerd[1715]: time="2025-01-30T13:08:15.913467313Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\""
Jan 30 13:08:15.913572 containerd[1715]: time="2025-01-30T13:08:15.913553614Z" level=info msg="TearDown network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" successfully"
Jan 30 13:08:15.913572 containerd[1715]: time="2025-01-30T13:08:15.913567514Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" returns successfully"
Jan 30 13:08:15.916021 containerd[1715]: time="2025-01-30T13:08:15.915539327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:5,}"
Jan 30 13:08:15.919479 kubelet[2603]: I0130 13:08:15.919455 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44"
Jan 30 13:08:15.919948 containerd[1715]: time="2025-01-30T13:08:15.919923658Z" level=info msg="StopPodSandbox for \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\""
Jan 30 13:08:15.920324 containerd[1715]: time="2025-01-30T13:08:15.920300960Z" level=info msg="Ensure that sandbox 0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44 in task-service has been cleanup successfully"
Jan 30 13:08:15.921461 containerd[1715]: time="2025-01-30T13:08:15.921435568Z" level=info msg="TearDown network for sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\" successfully"
Jan 30 13:08:15.922021 containerd[1715]: time="2025-01-30T13:08:15.921575569Z" level=info msg="StopPodSandbox for \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\" returns successfully"
Jan 30 13:08:15.923144 containerd[1715]: time="2025-01-30T13:08:15.923118480Z" level=info msg="StopPodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\""
Jan 30 13:08:15.923244 containerd[1715]: time="2025-01-30T13:08:15.923221980Z" level=info msg="TearDown network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" successfully"
Jan 30 13:08:15.923294 containerd[1715]: time="2025-01-30T13:08:15.923241480Z" level=info msg="StopPodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" returns successfully"
Jan 30 13:08:15.923918 systemd[1]: run-netns-cni\x2d0678c423\x2d7909\x2d0588\x2d465a\x2de9c2da52fccb.mount: Deactivated successfully.
Jan 30 13:08:15.929592 containerd[1715]: time="2025-01-30T13:08:15.929558924Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\""
Jan 30 13:08:15.929674 containerd[1715]: time="2025-01-30T13:08:15.929648025Z" level=info msg="TearDown network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" successfully"
Jan 30 13:08:15.929674 containerd[1715]: time="2025-01-30T13:08:15.929662825Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" returns successfully"
Jan 30 13:08:15.930195 containerd[1715]: time="2025-01-30T13:08:15.930077228Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\""
Jan 30 13:08:15.930271 containerd[1715]: time="2025-01-30T13:08:15.930227729Z" level=info msg="TearDown network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" successfully"
Jan 30 13:08:15.930271 containerd[1715]: time="2025-01-30T13:08:15.930243229Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" returns successfully"
Jan 30 13:08:15.931951 containerd[1715]: time="2025-01-30T13:08:15.931926840Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\""
Jan 30 13:08:15.932054 containerd[1715]: time="2025-01-30T13:08:15.932030141Z" level=info msg="TearDown network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" successfully"
Jan 30 13:08:15.932054 containerd[1715]: time="2025-01-30T13:08:15.932046841Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" returns successfully"
Jan 30 13:08:15.932657 containerd[1715]: time="2025-01-30T13:08:15.932624945Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\""
Jan 30 13:08:15.932727 containerd[1715]: time="2025-01-30T13:08:15.932714746Z" level=info msg="TearDown network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" successfully"
Jan 30 13:08:15.932769 containerd[1715]: time="2025-01-30T13:08:15.932729546Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" returns successfully"
Jan 30 13:08:15.933488 containerd[1715]: time="2025-01-30T13:08:15.933448851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:6,}"
Jan 30 13:08:16.056018 containerd[1715]: time="2025-01-30T13:08:16.055588395Z" level=error msg="Failed to destroy network for sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:16.056018 containerd[1715]: time="2025-01-30T13:08:16.055929297Z" level=error msg="encountered an error cleaning up failed sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:16.056201 containerd[1715]: time="2025-01-30T13:08:16.056026398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:16.056293 kubelet[2603]: E0130 13:08:16.056250 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:16.056351 kubelet[2603]: E0130 13:08:16.056311 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv"
Jan 30 13:08:16.056351 kubelet[2603]: E0130 13:08:16.056337 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv"
Jan 30 13:08:16.056434 kubelet[2603]: E0130 13:08:16.056390 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-cfjvv" podUID="ebc4461e-e7a5-4263-90a0-8506b558b6e6"
Jan 30 13:08:16.104412 containerd[1715]: time="2025-01-30T13:08:16.104186530Z" level=error msg="Failed to destroy network for sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:16.104569 containerd[1715]: time="2025-01-30T13:08:16.104534633Z" level=error msg="encountered an error cleaning up failed sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:16.104628 containerd[1715]: time="2025-01-30T13:08:16.104603633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:16.105084 kubelet[2603]: E0130 13:08:16.104836 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:08:16.105084 kubelet[2603]: E0130 13:08:16.104907 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d"
Jan 30 13:08:16.105084 kubelet[2603]: E0130 13:08:16.104937 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d"
Jan 30 13:08:16.105266 kubelet[2603]: E0130 13:08:16.104987 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480"
Jan 30 13:08:16.692296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47-shm.mount: Deactivated successfully.
Jan 30 13:08:16.719020 kubelet[2603]: E0130 13:08:16.718956 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:08:16.927254 kubelet[2603]: I0130 13:08:16.927219 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47"
Jan 30 13:08:16.928247 containerd[1715]: time="2025-01-30T13:08:16.927851020Z" level=info msg="StopPodSandbox for \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\""
Jan 30 13:08:16.928247 containerd[1715]: time="2025-01-30T13:08:16.928105221Z" level=info msg="Ensure that sandbox 96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47 in task-service has been cleanup successfully"
Jan 30 13:08:16.928925 containerd[1715]: time="2025-01-30T13:08:16.928727326Z" level=info msg="TearDown network for sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\" successfully"
Jan 30 13:08:16.928925 containerd[1715]: time="2025-01-30T13:08:16.928751226Z" level=info msg="StopPodSandbox for \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\" returns successfully"
Jan 30 13:08:16.931424 containerd[1715]: time="2025-01-30T13:08:16.931157542Z" level=info msg="StopPodSandbox for \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\""
Jan 30 13:08:16.931424 containerd[1715]: time="2025-01-30T13:08:16.931255843Z" level=info msg="TearDown network for sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\" successfully"
Jan 30 13:08:16.931424 containerd[1715]: time="2025-01-30T13:08:16.931271343Z" level=info msg="StopPodSandbox for \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\" returns successfully"
Jan 30 13:08:16.932217 containerd[1715]: time="2025-01-30T13:08:16.932186450Z" level=info msg="StopPodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\""
Jan 30 13:08:16.932312 containerd[1715]: time="2025-01-30T13:08:16.932275350Z" level=info msg="TearDown network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" successfully"
Jan 30 13:08:16.932312 containerd[1715]: time="2025-01-30T13:08:16.932289650Z" level=info msg="StopPodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" returns successfully"
Jan 30 13:08:16.932512 systemd[1]: run-netns-cni\x2d422cc5f5\x2de29f\x2d3461\x2da8f6\x2d66e99c24baf5.mount: Deactivated successfully.
Jan 30 13:08:16.933050 containerd[1715]: time="2025-01-30T13:08:16.932970855Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\""
Jan 30 13:08:16.935230 containerd[1715]: time="2025-01-30T13:08:16.935201970Z" level=info msg="TearDown network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" successfully"
Jan 30 13:08:16.935230 containerd[1715]: time="2025-01-30T13:08:16.935223071Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" returns successfully"
Jan 30 13:08:16.936178 containerd[1715]: time="2025-01-30T13:08:16.935724474Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\""
Jan 30 13:08:16.936178 containerd[1715]: time="2025-01-30T13:08:16.935808175Z" level=info msg="TearDown network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" successfully"
Jan 30 13:08:16.936178 containerd[1715]: time="2025-01-30T13:08:16.935823075Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" returns successfully"
Jan 30 13:08:16.936477 containerd[1715]: time="2025-01-30T13:08:16.936452479Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\""
Jan 30 13:08:16.936566 containerd[1715]: time="2025-01-30T13:08:16.936545480Z" level=info msg="TearDown network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" successfully"
Jan 30 13:08:16.936614 containerd[1715]: time="2025-01-30T13:08:16.936563480Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" returns successfully"
Jan 30 13:08:16.937672 containerd[1715]: time="2025-01-30T13:08:16.937642787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:6,}"
Jan 30 13:08:16.938963 kubelet[2603]: I0130 13:08:16.938245 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf"
Jan 30 13:08:16.939200 containerd[1715]: time="2025-01-30T13:08:16.939177698Z" level=info msg="StopPodSandbox for \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\""
Jan 30 13:08:16.939756 containerd[1715]: time="2025-01-30T13:08:16.939731702Z" level=info msg="Ensure that sandbox 1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf in task-service has been cleanup successfully"
Jan 30 13:08:16.942160 containerd[1715]: time="2025-01-30T13:08:16.942074118Z" level=info msg="TearDown network for sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\" successfully"
Jan 30 13:08:16.942270 containerd[1715]: time="2025-01-30T13:08:16.942254319Z" level=info msg="StopPodSandbox for \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\" returns successfully"
Jan 30 13:08:16.943476 containerd[1715]: time="2025-01-30T13:08:16.942787623Z" level=info msg="StopPodSandbox for \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\""
Jan 30 13:08:16.943476 containerd[1715]: time="2025-01-30T13:08:16.942880523Z" level=info msg="TearDown network for sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\" successfully"
Jan 30 13:08:16.943476 containerd[1715]: time="2025-01-30T13:08:16.942897424Z" level=info msg="StopPodSandbox for \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\" returns successfully"
Jan 30 13:08:16.944024 systemd[1]: run-netns-cni\x2d06f3e0ef\x2d867f\x2d072f\x2d9348\x2d8340052b7473.mount: Deactivated successfully.
Jan 30 13:08:16.946852 containerd[1715]: time="2025-01-30T13:08:16.946821351Z" level=info msg="StopPodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\""
Jan 30 13:08:16.947116 containerd[1715]: time="2025-01-30T13:08:16.947059452Z" level=info msg="TearDown network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" successfully"
Jan 30 13:08:16.947116 containerd[1715]: time="2025-01-30T13:08:16.947081252Z" level=info msg="StopPodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" returns successfully"
Jan 30 13:08:16.947666 containerd[1715]: time="2025-01-30T13:08:16.947535856Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\""
Jan 30 13:08:16.947779 containerd[1715]: time="2025-01-30T13:08:16.947635956Z" level=info msg="TearDown network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" successfully"
Jan 30 13:08:16.947936 containerd[1715]: time="2025-01-30T13:08:16.947830058Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" returns successfully"
Jan 30 13:08:16.948818 containerd[1715]: time="2025-01-30T13:08:16.948755464Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\""
Jan 30 13:08:16.949050 containerd[1715]: time="2025-01-30T13:08:16.949030166Z" level=info msg="TearDown network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" successfully"
Jan 30 13:08:16.949168 containerd[1715]: time="2025-01-30T13:08:16.949143067Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" returns successfully"
Jan 30 13:08:16.952017 containerd[1715]: time="2025-01-30T13:08:16.950196574Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\""
Jan 30 13:08:16.952017 containerd[1715]: time="2025-01-30T13:08:16.950294375Z" level=info msg="TearDown network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" successfully"
Jan 30 13:08:16.952017 containerd[1715]: time="2025-01-30T13:08:16.950319075Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" returns successfully"
Jan 30 13:08:16.953775 containerd[1715]: time="2025-01-30T13:08:16.953748898Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\""
Jan 30 13:08:16.954042 containerd[1715]: time="2025-01-30T13:08:16.954020400Z" level=info msg="TearDown network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" successfully"
Jan 30 13:08:16.954282 containerd[1715]: time="2025-01-30T13:08:16.954250302Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" returns successfully"
Jan 30 13:08:16.957412 containerd[1715]: time="2025-01-30T13:08:16.957384424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:7,}"
Jan 30 13:08:17.130274 containerd[1715]: time="2025-01-30T13:08:17.130212617Z" level=error msg="Failed to destroy network for sandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\"" error="plugin type=\"calico\" failed
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:17.130606 containerd[1715]: time="2025-01-30T13:08:17.130572820Z" level=error msg="encountered an error cleaning up failed sandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:17.130706 containerd[1715]: time="2025-01-30T13:08:17.130650920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:17.131548 kubelet[2603]: E0130 13:08:17.131021 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:17.131548 kubelet[2603]: E0130 13:08:17.131122 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d" Jan 30 13:08:17.131548 kubelet[2603]: E0130 13:08:17.131171 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pp95d" Jan 30 13:08:17.131754 kubelet[2603]: E0130 13:08:17.131243 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pp95d_calico-system(088dc3c1-e9d0-46ba-ae12-4f7130d43480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pp95d" podUID="088dc3c1-e9d0-46ba-ae12-4f7130d43480" Jan 30 13:08:17.139178 containerd[1715]: time="2025-01-30T13:08:17.139125279Z" level=error msg="Failed to destroy network for sandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:17.139484 containerd[1715]: time="2025-01-30T13:08:17.139446981Z" level=error msg="encountered an error cleaning up failed sandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:17.139550 containerd[1715]: time="2025-01-30T13:08:17.139529382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:17.139866 kubelet[2603]: E0130 13:08:17.139824 2603 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:08:17.140113 kubelet[2603]: E0130 13:08:17.140084 2603 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv" Jan 30 13:08:17.140285 kubelet[2603]: E0130 13:08:17.140220 2603 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-cfjvv" Jan 30 13:08:17.140658 kubelet[2603]: E0130 13:08:17.140397 2603 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-cfjvv_default(ebc4461e-e7a5-4263-90a0-8506b558b6e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-cfjvv" podUID="ebc4461e-e7a5-4263-90a0-8506b558b6e6" Jan 30 13:08:17.427181 containerd[1715]: time="2025-01-30T13:08:17.427122568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:17.430218 containerd[1715]: time="2025-01-30T13:08:17.430016388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:08:17.434189 containerd[1715]: time="2025-01-30T13:08:17.433203110Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:17.437734 containerd[1715]: time="2025-01-30T13:08:17.437671941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:17.438402 
containerd[1715]: time="2025-01-30T13:08:17.438204145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.577447671s" Jan 30 13:08:17.438402 containerd[1715]: time="2025-01-30T13:08:17.438246545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:08:17.445264 containerd[1715]: time="2025-01-30T13:08:17.445230193Z" level=info msg="CreateContainer within sandbox \"e565535fb44ac07ffff50275710665ed526087ae7fc619aa514a9a9530b2554f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:08:17.479526 containerd[1715]: time="2025-01-30T13:08:17.479475430Z" level=info msg="CreateContainer within sandbox \"e565535fb44ac07ffff50275710665ed526087ae7fc619aa514a9a9530b2554f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"74ac2d2226312abbb428d28dc430301e05e7912724e155c84d0b2f9635e27e79\"" Jan 30 13:08:17.480199 containerd[1715]: time="2025-01-30T13:08:17.480020934Z" level=info msg="StartContainer for \"74ac2d2226312abbb428d28dc430301e05e7912724e155c84d0b2f9635e27e79\"" Jan 30 13:08:17.510167 systemd[1]: Started cri-containerd-74ac2d2226312abbb428d28dc430301e05e7912724e155c84d0b2f9635e27e79.scope - libcontainer container 74ac2d2226312abbb428d28dc430301e05e7912724e155c84d0b2f9635e27e79. 
Jan 30 13:08:17.540719 containerd[1715]: time="2025-01-30T13:08:17.540674753Z" level=info msg="StartContainer for \"74ac2d2226312abbb428d28dc430301e05e7912724e155c84d0b2f9635e27e79\" returns successfully" Jan 30 13:08:17.694695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94-shm.mount: Deactivated successfully. Jan 30 13:08:17.695607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4129986346.mount: Deactivated successfully. Jan 30 13:08:17.719683 kubelet[2603]: E0130 13:08:17.719641 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:17.834861 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:08:17.834989 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 13:08:17.944790 kubelet[2603]: I0130 13:08:17.944750 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f" Jan 30 13:08:17.948112 containerd[1715]: time="2025-01-30T13:08:17.945622450Z" level=info msg="StopPodSandbox for \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\"" Jan 30 13:08:17.948112 containerd[1715]: time="2025-01-30T13:08:17.945865851Z" level=info msg="Ensure that sandbox 18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f in task-service has been cleanup successfully" Jan 30 13:08:17.948744 containerd[1715]: time="2025-01-30T13:08:17.948641570Z" level=info msg="TearDown network for sandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\" successfully" Jan 30 13:08:17.948744 containerd[1715]: time="2025-01-30T13:08:17.948667471Z" level=info msg="StopPodSandbox for \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\" returns successfully" Jan 30 13:08:17.949601 containerd[1715]: 
time="2025-01-30T13:08:17.949572277Z" level=info msg="StopPodSandbox for \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\"" Jan 30 13:08:17.949757 containerd[1715]: time="2025-01-30T13:08:17.949674278Z" level=info msg="TearDown network for sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\" successfully" Jan 30 13:08:17.949757 containerd[1715]: time="2025-01-30T13:08:17.949696078Z" level=info msg="StopPodSandbox for \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\" returns successfully" Jan 30 13:08:17.950543 systemd[1]: run-netns-cni\x2d752b06e4\x2ddc67\x2deb77\x2d4d40\x2d43541ec8692f.mount: Deactivated successfully. Jan 30 13:08:17.951155 containerd[1715]: time="2025-01-30T13:08:17.951127888Z" level=info msg="StopPodSandbox for \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\"" Jan 30 13:08:17.951249 containerd[1715]: time="2025-01-30T13:08:17.951227088Z" level=info msg="TearDown network for sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\" successfully" Jan 30 13:08:17.951299 containerd[1715]: time="2025-01-30T13:08:17.951246488Z" level=info msg="StopPodSandbox for \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\" returns successfully" Jan 30 13:08:17.954220 containerd[1715]: time="2025-01-30T13:08:17.954154709Z" level=info msg="StopPodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\"" Jan 30 13:08:17.954327 containerd[1715]: time="2025-01-30T13:08:17.954315010Z" level=info msg="TearDown network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" successfully" Jan 30 13:08:17.954376 containerd[1715]: time="2025-01-30T13:08:17.954334210Z" level=info msg="StopPodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" returns successfully" Jan 30 13:08:17.954735 containerd[1715]: time="2025-01-30T13:08:17.954710912Z" level=info msg="StopPodSandbox for 
\"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\"" Jan 30 13:08:17.954819 containerd[1715]: time="2025-01-30T13:08:17.954791613Z" level=info msg="TearDown network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" successfully" Jan 30 13:08:17.954819 containerd[1715]: time="2025-01-30T13:08:17.954806913Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" returns successfully" Jan 30 13:08:17.955379 containerd[1715]: time="2025-01-30T13:08:17.955336217Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\"" Jan 30 13:08:17.955493 containerd[1715]: time="2025-01-30T13:08:17.955419017Z" level=info msg="TearDown network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" successfully" Jan 30 13:08:17.955493 containerd[1715]: time="2025-01-30T13:08:17.955434217Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" returns successfully" Jan 30 13:08:17.955905 containerd[1715]: time="2025-01-30T13:08:17.955884221Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\"" Jan 30 13:08:17.956122 containerd[1715]: time="2025-01-30T13:08:17.955970921Z" level=info msg="TearDown network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" successfully" Jan 30 13:08:17.956612 containerd[1715]: time="2025-01-30T13:08:17.955986421Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" returns successfully" Jan 30 13:08:17.958453 containerd[1715]: time="2025-01-30T13:08:17.958423338Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\"" Jan 30 13:08:17.958536 containerd[1715]: time="2025-01-30T13:08:17.958514039Z" level=info msg="TearDown network for sandbox 
\"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" successfully" Jan 30 13:08:17.958536 containerd[1715]: time="2025-01-30T13:08:17.958528839Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" returns successfully" Jan 30 13:08:17.965335 containerd[1715]: time="2025-01-30T13:08:17.965249385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:8,}" Jan 30 13:08:17.971056 kubelet[2603]: I0130 13:08:17.970261 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94" Jan 30 13:08:17.973935 containerd[1715]: time="2025-01-30T13:08:17.973896145Z" level=info msg="StopPodSandbox for \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\"" Jan 30 13:08:17.974182 containerd[1715]: time="2025-01-30T13:08:17.974154247Z" level=info msg="Ensure that sandbox a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94 in task-service has been cleanup successfully" Jan 30 13:08:17.978579 systemd[1]: run-netns-cni\x2d76eefc1a\x2daad2\x2d4894\x2dca34\x2d2e2e07adac3c.mount: Deactivated successfully. 
Jan 30 13:08:17.980974 containerd[1715]: time="2025-01-30T13:08:17.979536584Z" level=info msg="TearDown network for sandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\" successfully" Jan 30 13:08:17.980974 containerd[1715]: time="2025-01-30T13:08:17.979566584Z" level=info msg="StopPodSandbox for \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\" returns successfully" Jan 30 13:08:17.981593 containerd[1715]: time="2025-01-30T13:08:17.981261996Z" level=info msg="StopPodSandbox for \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\"" Jan 30 13:08:17.981593 containerd[1715]: time="2025-01-30T13:08:17.981369697Z" level=info msg="TearDown network for sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\" successfully" Jan 30 13:08:17.981593 containerd[1715]: time="2025-01-30T13:08:17.981385797Z" level=info msg="StopPodSandbox for \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\" returns successfully" Jan 30 13:08:17.981884 containerd[1715]: time="2025-01-30T13:08:17.981862600Z" level=info msg="StopPodSandbox for \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\"" Jan 30 13:08:17.983037 containerd[1715]: time="2025-01-30T13:08:17.982100202Z" level=info msg="TearDown network for sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\" successfully" Jan 30 13:08:17.983037 containerd[1715]: time="2025-01-30T13:08:17.982120202Z" level=info msg="StopPodSandbox for \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\" returns successfully" Jan 30 13:08:17.983677 containerd[1715]: time="2025-01-30T13:08:17.983641712Z" level=info msg="StopPodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\"" Jan 30 13:08:17.983802 containerd[1715]: time="2025-01-30T13:08:17.983772613Z" level=info msg="TearDown network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" successfully" Jan 
30 13:08:17.983858 containerd[1715]: time="2025-01-30T13:08:17.983802813Z" level=info msg="StopPodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" returns successfully" Jan 30 13:08:17.984232 containerd[1715]: time="2025-01-30T13:08:17.984205716Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\"" Jan 30 13:08:17.986014 containerd[1715]: time="2025-01-30T13:08:17.984333917Z" level=info msg="TearDown network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" successfully" Jan 30 13:08:17.986014 containerd[1715]: time="2025-01-30T13:08:17.985057722Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" returns successfully" Jan 30 13:08:17.986598 containerd[1715]: time="2025-01-30T13:08:17.986575732Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\"" Jan 30 13:08:17.986770 containerd[1715]: time="2025-01-30T13:08:17.986752834Z" level=info msg="TearDown network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" successfully" Jan 30 13:08:17.986838 containerd[1715]: time="2025-01-30T13:08:17.986824834Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" returns successfully" Jan 30 13:08:17.987234 containerd[1715]: time="2025-01-30T13:08:17.987212737Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\"" Jan 30 13:08:17.987450 containerd[1715]: time="2025-01-30T13:08:17.987394838Z" level=info msg="TearDown network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" successfully" Jan 30 13:08:17.987537 containerd[1715]: time="2025-01-30T13:08:17.987522339Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" returns successfully" Jan 30 
13:08:17.988171 containerd[1715]: time="2025-01-30T13:08:17.988145843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:7,}" Jan 30 13:08:17.993623 kubelet[2603]: I0130 13:08:17.993559 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nz878" podStartSLOduration=5.995228285 podStartE2EDuration="23.99348308s" podCreationTimestamp="2025-01-30 13:07:54 +0000 UTC" firstStartedPulling="2025-01-30 13:07:59.440838956 +0000 UTC m=+5.668183152" lastFinishedPulling="2025-01-30 13:08:17.439093751 +0000 UTC m=+23.666437947" observedRunningTime="2025-01-30 13:08:17.992343672 +0000 UTC m=+24.219687868" watchObservedRunningTime="2025-01-30 13:08:17.99348308 +0000 UTC m=+24.220827176" Jan 30 13:08:18.236511 systemd-networkd[1511]: cali793e1da4680: Link UP Jan 30 13:08:18.236772 systemd-networkd[1511]: cali793e1da4680: Gained carrier Jan 30 13:08:18.244448 systemd-networkd[1511]: cali4080a464382: Link UP Jan 30 13:08:18.244752 systemd-networkd[1511]: cali4080a464382: Gained carrier Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.096 [INFO][3601] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.108 [INFO][3601] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0 nginx-deployment-85f456d6dd- default ebc4461e-e7a5-4263-90a0-8506b558b6e6 1223 0 2025-01-30 13:08:10 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.27 nginx-deployment-85f456d6dd-cfjvv eth0 default [] [] [kns.default ksa.default.default] cali793e1da4680 [] []}} 
ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" Namespace="default" Pod="nginx-deployment-85f456d6dd-cfjvv" WorkloadEndpoint="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.108 [INFO][3601] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" Namespace="default" Pod="nginx-deployment-85f456d6dd-cfjvv" WorkloadEndpoint="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.167 [INFO][3615] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" HandleID="k8s-pod-network.3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" Workload="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.181 [INFO][3615] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" HandleID="k8s-pod-network.3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" Workload="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc2e0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.27", "pod":"nginx-deployment-85f456d6dd-cfjvv", "timestamp":"2025-01-30 13:08:18.167399181 +0000 UTC"}, Hostname:"10.200.4.27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.181 [INFO][3615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.181 [INFO][3615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.181 [INFO][3615] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.27' Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.183 [INFO][3615] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" host="10.200.4.27" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.186 [INFO][3615] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.27" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.190 [INFO][3615] ipam/ipam.go 489: Trying affinity for 192.168.65.0/26 host="10.200.4.27" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.192 [INFO][3615] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.0/26 host="10.200.4.27" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.194 [INFO][3615] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="10.200.4.27" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.194 [INFO][3615] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" host="10.200.4.27" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.195 [INFO][3615] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98 Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.202 [INFO][3615] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.0/26 handle="k8s-pod-network.3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" host="10.200.4.27" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.207 [INFO][3615] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.1/26] block=192.168.65.0/26 handle="k8s-pod-network.3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" host="10.200.4.27" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.207 [INFO][3615] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.1/26] handle="k8s-pod-network.3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" host="10.200.4.27" Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.207 [INFO][3615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:08:18.253021 containerd[1715]: 2025-01-30 13:08:18.207 [INFO][3615] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.1/26] IPv6=[] ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" HandleID="k8s-pod-network.3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" Workload="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0" Jan 30 13:08:18.254180 containerd[1715]: 2025-01-30 13:08:18.210 [INFO][3601] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" Namespace="default" Pod="nginx-deployment-85f456d6dd-cfjvv" WorkloadEndpoint="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"ebc4461e-e7a5-4263-90a0-8506b558b6e6", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.27", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-cfjvv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali793e1da4680", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:08:18.254180 containerd[1715]: 2025-01-30 13:08:18.210 [INFO][3601] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.1/32] ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" Namespace="default" Pod="nginx-deployment-85f456d6dd-cfjvv" WorkloadEndpoint="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0" Jan 30 13:08:18.254180 containerd[1715]: 2025-01-30 13:08:18.210 [INFO][3601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali793e1da4680 ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" Namespace="default" Pod="nginx-deployment-85f456d6dd-cfjvv" WorkloadEndpoint="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0" Jan 30 13:08:18.254180 containerd[1715]: 2025-01-30 13:08:18.236 [INFO][3601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" Namespace="default" Pod="nginx-deployment-85f456d6dd-cfjvv" WorkloadEndpoint="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0" Jan 30 13:08:18.254180 containerd[1715]: 2025-01-30 13:08:18.239 [INFO][3601] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" 
Namespace="default" Pod="nginx-deployment-85f456d6dd-cfjvv" WorkloadEndpoint="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"ebc4461e-e7a5-4263-90a0-8506b558b6e6", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.27", ContainerID:"3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98", Pod:"nginx-deployment-85f456d6dd-cfjvv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali793e1da4680", MAC:"e6:82:41:a2:ec:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:08:18.254180 containerd[1715]: 2025-01-30 13:08:18.248 [INFO][3601] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98" Namespace="default" Pod="nginx-deployment-85f456d6dd-cfjvv" WorkloadEndpoint="10.200.4.27-k8s-nginx--deployment--85f456d6dd--cfjvv-eth0" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.083 [INFO][3590] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.108 [INFO][3590] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.27-k8s-csi--node--driver--pp95d-eth0 csi-node-driver- calico-system 088dc3c1-e9d0-46ba-ae12-4f7130d43480 1153 0 2025-01-30 13:07:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.4.27 csi-node-driver-pp95d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4080a464382 [] []}} ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Namespace="calico-system" Pod="csi-node-driver-pp95d" WorkloadEndpoint="10.200.4.27-k8s-csi--node--driver--pp95d-" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.108 [INFO][3590] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Namespace="calico-system" Pod="csi-node-driver-pp95d" WorkloadEndpoint="10.200.4.27-k8s-csi--node--driver--pp95d-eth0" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.169 [INFO][3614] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" HandleID="k8s-pod-network.e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Workload="10.200.4.27-k8s-csi--node--driver--pp95d-eth0" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.181 [INFO][3614] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" 
HandleID="k8s-pod-network.e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Workload="10.200.4.27-k8s-csi--node--driver--pp95d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319610), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.4.27", "pod":"csi-node-driver-pp95d", "timestamp":"2025-01-30 13:08:18.169624997 +0000 UTC"}, Hostname:"10.200.4.27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.181 [INFO][3614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.207 [INFO][3614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.207 [INFO][3614] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.27' Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.209 [INFO][3614] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" host="10.200.4.27" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.214 [INFO][3614] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.27" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.217 [INFO][3614] ipam/ipam.go 489: Trying affinity for 192.168.65.0/26 host="10.200.4.27" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.219 [INFO][3614] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.0/26 host="10.200.4.27" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.220 [INFO][3614] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="10.200.4.27" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 
13:08:18.220 [INFO][3614] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" host="10.200.4.27" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.222 [INFO][3614] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49 Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.229 [INFO][3614] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.0/26 handle="k8s-pod-network.e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" host="10.200.4.27" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.239 [INFO][3614] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.2/26] block=192.168.65.0/26 handle="k8s-pod-network.e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" host="10.200.4.27" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.239 [INFO][3614] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.2/26] handle="k8s-pod-network.e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" host="10.200.4.27" Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.239 [INFO][3614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:08:18.262831 containerd[1715]: 2025-01-30 13:08:18.239 [INFO][3614] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.2/26] IPv6=[] ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" HandleID="k8s-pod-network.e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Workload="10.200.4.27-k8s-csi--node--driver--pp95d-eth0" Jan 30 13:08:18.264269 containerd[1715]: 2025-01-30 13:08:18.241 [INFO][3590] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Namespace="calico-system" Pod="csi-node-driver-pp95d" WorkloadEndpoint="10.200.4.27-k8s-csi--node--driver--pp95d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.27-k8s-csi--node--driver--pp95d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"088dc3c1-e9d0-46ba-ae12-4f7130d43480", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.27", ContainerID:"", Pod:"csi-node-driver-pp95d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4080a464382", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:08:18.264269 containerd[1715]: 2025-01-30 13:08:18.241 [INFO][3590] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.2/32] ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Namespace="calico-system" Pod="csi-node-driver-pp95d" WorkloadEndpoint="10.200.4.27-k8s-csi--node--driver--pp95d-eth0" Jan 30 13:08:18.264269 containerd[1715]: 2025-01-30 13:08:18.241 [INFO][3590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4080a464382 ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Namespace="calico-system" Pod="csi-node-driver-pp95d" WorkloadEndpoint="10.200.4.27-k8s-csi--node--driver--pp95d-eth0" Jan 30 13:08:18.264269 containerd[1715]: 2025-01-30 13:08:18.245 [INFO][3590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Namespace="calico-system" Pod="csi-node-driver-pp95d" WorkloadEndpoint="10.200.4.27-k8s-csi--node--driver--pp95d-eth0" Jan 30 13:08:18.264269 containerd[1715]: 2025-01-30 13:08:18.246 [INFO][3590] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Namespace="calico-system" Pod="csi-node-driver-pp95d" WorkloadEndpoint="10.200.4.27-k8s-csi--node--driver--pp95d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.27-k8s-csi--node--driver--pp95d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"088dc3c1-e9d0-46ba-ae12-4f7130d43480", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 
7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.27", ContainerID:"e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49", Pod:"csi-node-driver-pp95d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4080a464382", MAC:"52:97:08:45:65:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:08:18.264269 containerd[1715]: 2025-01-30 13:08:18.260 [INFO][3590] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49" Namespace="calico-system" Pod="csi-node-driver-pp95d" WorkloadEndpoint="10.200.4.27-k8s-csi--node--driver--pp95d-eth0" Jan 30 13:08:18.285443 containerd[1715]: time="2025-01-30T13:08:18.285123595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:08:18.285443 containerd[1715]: time="2025-01-30T13:08:18.285192295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:08:18.285443 containerd[1715]: time="2025-01-30T13:08:18.285214495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:18.285443 containerd[1715]: time="2025-01-30T13:08:18.285312496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:18.299055 containerd[1715]: time="2025-01-30T13:08:18.298961590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:08:18.299261 containerd[1715]: time="2025-01-30T13:08:18.299237892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:08:18.299365 containerd[1715]: time="2025-01-30T13:08:18.299347193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:18.299567 containerd[1715]: time="2025-01-30T13:08:18.299543194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:18.313195 systemd[1]: Started cri-containerd-3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98.scope - libcontainer container 3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98. Jan 30 13:08:18.328230 systemd[1]: Started cri-containerd-e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49.scope - libcontainer container e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49. 
Jan 30 13:08:18.366069 containerd[1715]: time="2025-01-30T13:08:18.365925653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pp95d,Uid:088dc3c1-e9d0-46ba-ae12-4f7130d43480,Namespace:calico-system,Attempt:8,} returns sandbox id \"e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49\"" Jan 30 13:08:18.369031 containerd[1715]: time="2025-01-30T13:08:18.368853973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:08:18.379304 containerd[1715]: time="2025-01-30T13:08:18.379197144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfjvv,Uid:ebc4461e-e7a5-4263-90a0-8506b558b6e6,Namespace:default,Attempt:7,} returns sandbox id \"3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98\"" Jan 30 13:08:18.720118 kubelet[2603]: E0130 13:08:18.720065 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:19.474031 kernel: bpftool[3878]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:08:19.485248 systemd-networkd[1511]: cali4080a464382: Gained IPv6LL Jan 30 13:08:19.720701 kubelet[2603]: E0130 13:08:19.720323 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:19.804232 containerd[1715]: time="2025-01-30T13:08:19.803439782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:19.805609 containerd[1715]: time="2025-01-30T13:08:19.805563497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:08:19.808875 containerd[1715]: time="2025-01-30T13:08:19.808809019Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 30 13:08:19.817190 containerd[1715]: time="2025-01-30T13:08:19.817133977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:19.817923 containerd[1715]: time="2025-01-30T13:08:19.817796681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.448885308s" Jan 30 13:08:19.817923 containerd[1715]: time="2025-01-30T13:08:19.817830882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:08:19.819667 containerd[1715]: time="2025-01-30T13:08:19.819461193Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:08:19.820236 containerd[1715]: time="2025-01-30T13:08:19.820211098Z" level=info msg="CreateContainer within sandbox \"e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:08:19.837914 systemd-networkd[1511]: vxlan.calico: Link UP Jan 30 13:08:19.837925 systemd-networkd[1511]: vxlan.calico: Gained carrier Jan 30 13:08:19.862648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649339245.mount: Deactivated successfully. 
Jan 30 13:08:19.877019 containerd[1715]: time="2025-01-30T13:08:19.876235985Z" level=info msg="CreateContainer within sandbox \"e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7a1e9c73c6cfca8ce655b7d8f40556d4b36fc0a9b1a2343720d60db8cd5ed175\"" Jan 30 13:08:19.877769 containerd[1715]: time="2025-01-30T13:08:19.877724095Z" level=info msg="StartContainer for \"7a1e9c73c6cfca8ce655b7d8f40556d4b36fc0a9b1a2343720d60db8cd5ed175\"" Jan 30 13:08:19.942185 systemd[1]: Started cri-containerd-7a1e9c73c6cfca8ce655b7d8f40556d4b36fc0a9b1a2343720d60db8cd5ed175.scope - libcontainer container 7a1e9c73c6cfca8ce655b7d8f40556d4b36fc0a9b1a2343720d60db8cd5ed175. Jan 30 13:08:20.007525 containerd[1715]: time="2025-01-30T13:08:20.007475691Z" level=info msg="StartContainer for \"7a1e9c73c6cfca8ce655b7d8f40556d4b36fc0a9b1a2343720d60db8cd5ed175\" returns successfully" Jan 30 13:08:20.190176 systemd-networkd[1511]: cali793e1da4680: Gained IPv6LL Jan 30 13:08:20.721554 kubelet[2603]: E0130 13:08:20.721497 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:21.469371 systemd-networkd[1511]: vxlan.calico: Gained IPv6LL Jan 30 13:08:21.724089 kubelet[2603]: E0130 13:08:21.722294 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:22.723378 kubelet[2603]: E0130 13:08:22.723315 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:22.886562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095024242.mount: Deactivated successfully. 
Jan 30 13:08:23.723972 kubelet[2603]: E0130 13:08:23.723928 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:24.289948 containerd[1715]: time="2025-01-30T13:08:24.289888281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:24.291893 containerd[1715]: time="2025-01-30T13:08:24.291827798Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 13:08:24.297094 containerd[1715]: time="2025-01-30T13:08:24.297030843Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:24.304482 containerd[1715]: time="2025-01-30T13:08:24.304434207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:24.306977 containerd[1715]: time="2025-01-30T13:08:24.306047621Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.486551228s" Jan 30 13:08:24.306977 containerd[1715]: time="2025-01-30T13:08:24.306089421Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:08:24.309856 containerd[1715]: time="2025-01-30T13:08:24.309825454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:08:24.310598 containerd[1715]: 
time="2025-01-30T13:08:24.310566360Z" level=info msg="CreateContainer within sandbox \"3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:08:24.354292 containerd[1715]: time="2025-01-30T13:08:24.354241738Z" level=info msg="CreateContainer within sandbox \"3b696e188cc8f4a78f20a2f10e68da337d518bf52ee64594c7aae3e2da8a5c98\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fcfd7da15229bfc3cbc52dac10b3d56709e6206317064b32cfb16413bfbfb201\"" Jan 30 13:08:24.355622 containerd[1715]: time="2025-01-30T13:08:24.354771343Z" level=info msg="StartContainer for \"fcfd7da15229bfc3cbc52dac10b3d56709e6206317064b32cfb16413bfbfb201\"" Jan 30 13:08:24.402321 systemd[1]: Started cri-containerd-fcfd7da15229bfc3cbc52dac10b3d56709e6206317064b32cfb16413bfbfb201.scope - libcontainer container fcfd7da15229bfc3cbc52dac10b3d56709e6206317064b32cfb16413bfbfb201. Jan 30 13:08:24.431779 containerd[1715]: time="2025-01-30T13:08:24.431047304Z" level=info msg="StartContainer for \"fcfd7da15229bfc3cbc52dac10b3d56709e6206317064b32cfb16413bfbfb201\" returns successfully" Jan 30 13:08:24.724706 kubelet[2603]: E0130 13:08:24.724590 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:25.036864 kubelet[2603]: I0130 13:08:25.036706 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-cfjvv" podStartSLOduration=9.109129164 podStartE2EDuration="15.036686849s" podCreationTimestamp="2025-01-30 13:08:10 +0000 UTC" firstStartedPulling="2025-01-30 13:08:18.380578054 +0000 UTC m=+24.607922150" lastFinishedPulling="2025-01-30 13:08:24.308135739 +0000 UTC m=+30.535479835" observedRunningTime="2025-01-30 13:08:25.036484147 +0000 UTC m=+31.263828343" watchObservedRunningTime="2025-01-30 13:08:25.036686849 +0000 UTC m=+31.264030945" Jan 30 13:08:25.725147 kubelet[2603]: E0130 
13:08:25.725021 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:25.979821 containerd[1715]: time="2025-01-30T13:08:25.979505814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:25.982719 containerd[1715]: time="2025-01-30T13:08:25.982664541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:08:25.987974 containerd[1715]: time="2025-01-30T13:08:25.987911887Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:25.992418 containerd[1715]: time="2025-01-30T13:08:25.992349525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:25.993538 containerd[1715]: time="2025-01-30T13:08:25.993249733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.683213477s" Jan 30 13:08:25.993538 containerd[1715]: time="2025-01-30T13:08:25.993291733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:08:25.995513 containerd[1715]: time="2025-01-30T13:08:25.995479852Z" level=info msg="CreateContainer 
within sandbox \"e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:08:26.037137 containerd[1715]: time="2025-01-30T13:08:26.037093013Z" level=info msg="CreateContainer within sandbox \"e0e00866a8607888963ba73f28ff6cb419ce6d5bec3bb598b48e57d94f646b49\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b4fc73f05f7712ab3f3f0860a8db4974090bbea77f22a3baed297084ba05435f\"" Jan 30 13:08:26.037543 containerd[1715]: time="2025-01-30T13:08:26.037502716Z" level=info msg="StartContainer for \"b4fc73f05f7712ab3f3f0860a8db4974090bbea77f22a3baed297084ba05435f\"" Jan 30 13:08:26.074869 systemd[1]: run-containerd-runc-k8s.io-b4fc73f05f7712ab3f3f0860a8db4974090bbea77f22a3baed297084ba05435f-runc.tFicyg.mount: Deactivated successfully. Jan 30 13:08:26.092215 systemd[1]: Started cri-containerd-b4fc73f05f7712ab3f3f0860a8db4974090bbea77f22a3baed297084ba05435f.scope - libcontainer container b4fc73f05f7712ab3f3f0860a8db4974090bbea77f22a3baed297084ba05435f. 
Jan 30 13:08:26.124535 containerd[1715]: time="2025-01-30T13:08:26.124305568Z" level=info msg="StartContainer for \"b4fc73f05f7712ab3f3f0860a8db4974090bbea77f22a3baed297084ba05435f\" returns successfully" Jan 30 13:08:26.726123 kubelet[2603]: E0130 13:08:26.726067 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:26.831439 kubelet[2603]: I0130 13:08:26.831403 2603 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:08:26.831730 kubelet[2603]: I0130 13:08:26.831455 2603 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:08:27.047600 kubelet[2603]: I0130 13:08:27.047372 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pp95d" podStartSLOduration=25.421181488 podStartE2EDuration="33.047355462s" podCreationTimestamp="2025-01-30 13:07:54 +0000 UTC" firstStartedPulling="2025-01-30 13:08:18.368045267 +0000 UTC m=+24.595389463" lastFinishedPulling="2025-01-30 13:08:25.994219341 +0000 UTC m=+32.221563437" observedRunningTime="2025-01-30 13:08:27.04716656 +0000 UTC m=+33.274510756" watchObservedRunningTime="2025-01-30 13:08:27.047355462 +0000 UTC m=+33.274699558" Jan 30 13:08:27.726538 kubelet[2603]: E0130 13:08:27.726469 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:28.727170 kubelet[2603]: E0130 13:08:28.727117 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:29.727968 kubelet[2603]: E0130 13:08:29.727911 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 
13:08:30.728933 kubelet[2603]: E0130 13:08:30.728871 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:31.729283 kubelet[2603]: E0130 13:08:31.729228 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:32.730089 kubelet[2603]: E0130 13:08:32.730029 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:33.730729 kubelet[2603]: E0130 13:08:33.730665 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:34.701099 kubelet[2603]: E0130 13:08:34.700661 2603 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:34.731422 kubelet[2603]: E0130 13:08:34.731368 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:34.741908 kubelet[2603]: I0130 13:08:34.741857 2603 topology_manager.go:215] "Topology Admit Handler" podUID="7505bc8e-a060-4d66-9be0-a1867c790181" podNamespace="default" podName="nfs-server-provisioner-0" Jan 30 13:08:34.748452 systemd[1]: Created slice kubepods-besteffort-pod7505bc8e_a060_4d66_9be0_a1867c790181.slice - libcontainer container kubepods-besteffort-pod7505bc8e_a060_4d66_9be0_a1867c790181.slice. 
Jan 30 13:08:34.780436 kubelet[2603]: I0130 13:08:34.780389 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7505bc8e-a060-4d66-9be0-a1867c790181-data\") pod \"nfs-server-provisioner-0\" (UID: \"7505bc8e-a060-4d66-9be0-a1867c790181\") " pod="default/nfs-server-provisioner-0" Jan 30 13:08:34.780436 kubelet[2603]: I0130 13:08:34.780444 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln6fh\" (UniqueName: \"kubernetes.io/projected/7505bc8e-a060-4d66-9be0-a1867c790181-kube-api-access-ln6fh\") pod \"nfs-server-provisioner-0\" (UID: \"7505bc8e-a060-4d66-9be0-a1867c790181\") " pod="default/nfs-server-provisioner-0" Jan 30 13:08:35.052178 containerd[1715]: time="2025-01-30T13:08:35.052027000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7505bc8e-a060-4d66-9be0-a1867c790181,Namespace:default,Attempt:0,}" Jan 30 13:08:35.205937 systemd-networkd[1511]: cali60e51b789ff: Link UP Jan 30 13:08:35.206186 systemd-networkd[1511]: cali60e51b789ff: Gained carrier Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.139 [INFO][4140] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.27-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 7505bc8e-a060-4d66-9be0-a1867c790181 1358 0 2025-01-30 13:08:34 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.4.27 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] 
[kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.27-k8s-nfs--server--provisioner--0-" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.140 [INFO][4140] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.27-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.165 [INFO][4150] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" HandleID="k8s-pod-network.9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Workload="10.200.4.27-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.175 [INFO][4150] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" HandleID="k8s-pod-network.9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Workload="10.200.4.27-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002916d0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.27", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-30 13:08:35.165434455 +0000 UTC"}, Hostname:"10.200.4.27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.175 [INFO][4150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.175 [INFO][4150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.175 [INFO][4150] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.27' Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.177 [INFO][4150] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" host="10.200.4.27" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.181 [INFO][4150] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.27" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.184 [INFO][4150] ipam/ipam.go 489: Trying affinity for 192.168.65.0/26 host="10.200.4.27" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.186 [INFO][4150] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.0/26 host="10.200.4.27" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.188 [INFO][4150] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="10.200.4.27" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.188 [INFO][4150] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" host="10.200.4.27" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.189 [INFO][4150] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2 Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.194 [INFO][4150] ipam/ipam.go 1203: Writing block in order to claim 
IPs block=192.168.65.0/26 handle="k8s-pod-network.9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" host="10.200.4.27" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.201 [INFO][4150] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.3/26] block=192.168.65.0/26 handle="k8s-pod-network.9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" host="10.200.4.27" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.201 [INFO][4150] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.3/26] handle="k8s-pod-network.9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" host="10.200.4.27" Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.201 [INFO][4150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:08:35.219354 containerd[1715]: 2025-01-30 13:08:35.201 [INFO][4150] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.3/26] IPv6=[] ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" HandleID="k8s-pod-network.9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Workload="10.200.4.27-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:08:35.220409 containerd[1715]: 2025-01-30 13:08:35.203 [INFO][4140] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.27-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.27-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7505bc8e-a060-4d66-9be0-a1867c790181", ResourceVersion:"1358", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 8, 34, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.27", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.65.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:08:35.220409 containerd[1715]: 2025-01-30 13:08:35.203 [INFO][4140] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.3/32] ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.27-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:08:35.220409 containerd[1715]: 2025-01-30 13:08:35.203 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.27-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:08:35.220409 containerd[1715]: 2025-01-30 13:08:35.205 [INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.27-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:08:35.220679 containerd[1715]: 2025-01-30 13:08:35.209 [INFO][4140] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.27-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.27-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7505bc8e-a060-4d66-9be0-a1867c790181", ResourceVersion:"1358", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 8, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.27", ContainerID:"9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.65.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"76:89:89:49:84:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:08:35.220679 containerd[1715]: 2025-01-30 13:08:35.217 [INFO][4140] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.200.4.27-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:08:35.247392 containerd[1715]: time="2025-01-30T13:08:35.247170943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:08:35.248140 containerd[1715]: time="2025-01-30T13:08:35.248063350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:08:35.248140 containerd[1715]: time="2025-01-30T13:08:35.248101051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:35.248398 containerd[1715]: time="2025-01-30T13:08:35.248348453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:35.274211 systemd[1]: Started cri-containerd-9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2.scope - libcontainer container 9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2. 
Jan 30 13:08:35.316811 containerd[1715]: time="2025-01-30T13:08:35.316767528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7505bc8e-a060-4d66-9be0-a1867c790181,Namespace:default,Attempt:0,} returns sandbox id \"9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2\"" Jan 30 13:08:35.319070 containerd[1715]: time="2025-01-30T13:08:35.318711945Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:08:35.731799 kubelet[2603]: E0130 13:08:35.731664 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:36.637138 systemd-networkd[1511]: cali60e51b789ff: Gained IPv6LL Jan 30 13:08:36.732010 kubelet[2603]: E0130 13:08:36.731943 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:37.732847 kubelet[2603]: E0130 13:08:37.732776 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:37.999519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477391091.mount: Deactivated successfully. 
Jan 30 13:08:38.733836 kubelet[2603]: E0130 13:08:38.733794 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:39.735047 kubelet[2603]: E0130 13:08:39.734984 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:40.735353 kubelet[2603]: E0130 13:08:40.735305 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:41.060402 containerd[1715]: time="2025-01-30T13:08:41.060263659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:41.064606 containerd[1715]: time="2025-01-30T13:08:41.064410688Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 30 13:08:41.068254 containerd[1715]: time="2025-01-30T13:08:41.067894013Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:41.072768 containerd[1715]: time="2025-01-30T13:08:41.072736647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:41.073748 containerd[1715]: time="2025-01-30T13:08:41.073714854Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 5.754967209s" Jan 30 13:08:41.073876 containerd[1715]: time="2025-01-30T13:08:41.073856555Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 13:08:41.076288 containerd[1715]: time="2025-01-30T13:08:41.076263073Z" level=info msg="CreateContainer within sandbox \"9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:08:41.118691 containerd[1715]: time="2025-01-30T13:08:41.118644974Z" level=info msg="CreateContainer within sandbox \"9b4a78d2733e8e7ff11dada97d32a91ce04d6328a3a99559562a92af3fe3fbd2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"50244b4ceef52119ebb437fd9ab038973c0ee34bb2d1b42c0d94f732defd249e\"" Jan 30 13:08:41.119229 containerd[1715]: time="2025-01-30T13:08:41.119202578Z" level=info msg="StartContainer for \"50244b4ceef52119ebb437fd9ab038973c0ee34bb2d1b42c0d94f732defd249e\"" Jan 30 13:08:41.154159 systemd[1]: Started cri-containerd-50244b4ceef52119ebb437fd9ab038973c0ee34bb2d1b42c0d94f732defd249e.scope - libcontainer container 50244b4ceef52119ebb437fd9ab038973c0ee34bb2d1b42c0d94f732defd249e. 
Jan 30 13:08:41.184531 containerd[1715]: time="2025-01-30T13:08:41.184484342Z" level=info msg="StartContainer for \"50244b4ceef52119ebb437fd9ab038973c0ee34bb2d1b42c0d94f732defd249e\" returns successfully" Jan 30 13:08:41.735752 kubelet[2603]: E0130 13:08:41.735686 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:42.736690 kubelet[2603]: E0130 13:08:42.736627 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:43.736871 kubelet[2603]: E0130 13:08:43.736817 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:44.737277 kubelet[2603]: E0130 13:08:44.737210 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:45.738358 kubelet[2603]: E0130 13:08:45.738301 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:46.739340 kubelet[2603]: E0130 13:08:46.739284 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:47.740249 kubelet[2603]: E0130 13:08:47.740191 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:48.740975 kubelet[2603]: E0130 13:08:48.740918 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:49.742072 kubelet[2603]: E0130 13:08:49.742020 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:50.742398 kubelet[2603]: E0130 13:08:50.742330 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 30 13:08:51.743335 kubelet[2603]: E0130 13:08:51.743277 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:52.743640 kubelet[2603]: E0130 13:08:52.743585 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:53.744221 kubelet[2603]: E0130 13:08:53.744169 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:54.701615 kubelet[2603]: E0130 13:08:54.701559 2603 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:54.733422 containerd[1715]: time="2025-01-30T13:08:54.733354557Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\"" Jan 30 13:08:54.733885 containerd[1715]: time="2025-01-30T13:08:54.733490158Z" level=info msg="TearDown network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" successfully" Jan 30 13:08:54.733885 containerd[1715]: time="2025-01-30T13:08:54.733546758Z" level=info msg="StopPodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" returns successfully" Jan 30 13:08:54.733974 containerd[1715]: time="2025-01-30T13:08:54.733953161Z" level=info msg="RemovePodSandbox for \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\"" Jan 30 13:08:54.734046 containerd[1715]: time="2025-01-30T13:08:54.733981461Z" level=info msg="Forcibly stopping sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\"" Jan 30 13:08:54.734133 containerd[1715]: time="2025-01-30T13:08:54.734083162Z" level=info msg="TearDown network for sandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" successfully" Jan 30 13:08:54.740142 containerd[1715]: 
time="2025-01-30T13:08:54.740085709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:08:54.740254 containerd[1715]: time="2025-01-30T13:08:54.740142509Z" level=info msg="RemovePodSandbox \"c12764f4969d610117b2b5d10c76394f361d273df2838ed72cfd9f520a419462\" returns successfully" Jan 30 13:08:54.740469 containerd[1715]: time="2025-01-30T13:08:54.740437912Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\"" Jan 30 13:08:54.740561 containerd[1715]: time="2025-01-30T13:08:54.740533812Z" level=info msg="TearDown network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" successfully" Jan 30 13:08:54.740561 containerd[1715]: time="2025-01-30T13:08:54.740553713Z" level=info msg="StopPodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" returns successfully" Jan 30 13:08:54.742119 containerd[1715]: time="2025-01-30T13:08:54.740893415Z" level=info msg="RemovePodSandbox for \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\"" Jan 30 13:08:54.742119 containerd[1715]: time="2025-01-30T13:08:54.740922915Z" level=info msg="Forcibly stopping sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\"" Jan 30 13:08:54.742119 containerd[1715]: time="2025-01-30T13:08:54.741021816Z" level=info msg="TearDown network for sandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" successfully" Jan 30 13:08:54.744579 kubelet[2603]: E0130 13:08:54.744553 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:54.748813 containerd[1715]: time="2025-01-30T13:08:54.748771177Z" level=warning msg="Failed to get podSandbox status for 
container event for sandboxID \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:08:54.748905 containerd[1715]: time="2025-01-30T13:08:54.748818977Z" level=info msg="RemovePodSandbox \"2169617bef73b177c646ee7b033e36382a06a5a188e3472dabcbc956cc356df6\" returns successfully" Jan 30 13:08:54.749180 containerd[1715]: time="2025-01-30T13:08:54.749097279Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\"" Jan 30 13:08:54.749300 containerd[1715]: time="2025-01-30T13:08:54.749189480Z" level=info msg="TearDown network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" successfully" Jan 30 13:08:54.749300 containerd[1715]: time="2025-01-30T13:08:54.749204280Z" level=info msg="StopPodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" returns successfully" Jan 30 13:08:54.749563 containerd[1715]: time="2025-01-30T13:08:54.749501082Z" level=info msg="RemovePodSandbox for \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\"" Jan 30 13:08:54.749563 containerd[1715]: time="2025-01-30T13:08:54.749527282Z" level=info msg="Forcibly stopping sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\"" Jan 30 13:08:54.749697 containerd[1715]: time="2025-01-30T13:08:54.749645383Z" level=info msg="TearDown network for sandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" successfully" Jan 30 13:08:54.756271 containerd[1715]: time="2025-01-30T13:08:54.756241035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.756377 containerd[1715]: time="2025-01-30T13:08:54.756280735Z" level=info msg="RemovePodSandbox \"a95152f03cfa64404757476881a6a78c7fe5f71df9494bffcfa93abe93985df5\" returns successfully" Jan 30 13:08:54.756581 containerd[1715]: time="2025-01-30T13:08:54.756550937Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\"" Jan 30 13:08:54.756663 containerd[1715]: time="2025-01-30T13:08:54.756643738Z" level=info msg="TearDown network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" successfully" Jan 30 13:08:54.756711 containerd[1715]: time="2025-01-30T13:08:54.756659738Z" level=info msg="StopPodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" returns successfully" Jan 30 13:08:54.757061 containerd[1715]: time="2025-01-30T13:08:54.757035341Z" level=info msg="RemovePodSandbox for \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\"" Jan 30 13:08:54.757141 containerd[1715]: time="2025-01-30T13:08:54.757063741Z" level=info msg="Forcibly stopping sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\"" Jan 30 13:08:54.757195 containerd[1715]: time="2025-01-30T13:08:54.757151442Z" level=info msg="TearDown network for sandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" successfully" Jan 30 13:08:54.763822 containerd[1715]: time="2025-01-30T13:08:54.763795993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.763914 containerd[1715]: time="2025-01-30T13:08:54.763836294Z" level=info msg="RemovePodSandbox \"70b745e914bdea1192bff6ef43d4b849320760e2f0089ebd23e71ad5c6835a76\" returns successfully" Jan 30 13:08:54.764177 containerd[1715]: time="2025-01-30T13:08:54.764139896Z" level=info msg="StopPodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\"" Jan 30 13:08:54.764264 containerd[1715]: time="2025-01-30T13:08:54.764226697Z" level=info msg="TearDown network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" successfully" Jan 30 13:08:54.764313 containerd[1715]: time="2025-01-30T13:08:54.764266697Z" level=info msg="StopPodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" returns successfully" Jan 30 13:08:54.764584 containerd[1715]: time="2025-01-30T13:08:54.764552799Z" level=info msg="RemovePodSandbox for \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\"" Jan 30 13:08:54.764584 containerd[1715]: time="2025-01-30T13:08:54.764577200Z" level=info msg="Forcibly stopping sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\"" Jan 30 13:08:54.764695 containerd[1715]: time="2025-01-30T13:08:54.764648700Z" level=info msg="TearDown network for sandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" successfully" Jan 30 13:08:54.771063 containerd[1715]: time="2025-01-30T13:08:54.771027650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.771149 containerd[1715]: time="2025-01-30T13:08:54.771067750Z" level=info msg="RemovePodSandbox \"f97a2aa49b6918c0ac96fe0461f88d6d43db829622e07dfd0e193b5f064c6886\" returns successfully" Jan 30 13:08:54.771529 containerd[1715]: time="2025-01-30T13:08:54.771344752Z" level=info msg="StopPodSandbox for \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\"" Jan 30 13:08:54.771601 containerd[1715]: time="2025-01-30T13:08:54.771544154Z" level=info msg="TearDown network for sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\" successfully" Jan 30 13:08:54.771601 containerd[1715]: time="2025-01-30T13:08:54.771559854Z" level=info msg="StopPodSandbox for \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\" returns successfully" Jan 30 13:08:54.771918 containerd[1715]: time="2025-01-30T13:08:54.771893556Z" level=info msg="RemovePodSandbox for \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\"" Jan 30 13:08:54.772034 containerd[1715]: time="2025-01-30T13:08:54.771921957Z" level=info msg="Forcibly stopping sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\"" Jan 30 13:08:54.772034 containerd[1715]: time="2025-01-30T13:08:54.772004857Z" level=info msg="TearDown network for sandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\" successfully" Jan 30 13:08:54.780228 containerd[1715]: time="2025-01-30T13:08:54.780198221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.780326 containerd[1715]: time="2025-01-30T13:08:54.780241021Z" level=info msg="RemovePodSandbox \"0e3eacb9ca9add3fa6b50012296805db340ca727c4ac23f701694f553f29cc44\" returns successfully" Jan 30 13:08:54.780656 containerd[1715]: time="2025-01-30T13:08:54.780621424Z" level=info msg="StopPodSandbox for \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\"" Jan 30 13:08:54.780745 containerd[1715]: time="2025-01-30T13:08:54.780718125Z" level=info msg="TearDown network for sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\" successfully" Jan 30 13:08:54.780745 containerd[1715]: time="2025-01-30T13:08:54.780734825Z" level=info msg="StopPodSandbox for \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\" returns successfully" Jan 30 13:08:54.781735 containerd[1715]: time="2025-01-30T13:08:54.781004927Z" level=info msg="RemovePodSandbox for \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\"" Jan 30 13:08:54.781735 containerd[1715]: time="2025-01-30T13:08:54.781063228Z" level=info msg="Forcibly stopping sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\"" Jan 30 13:08:54.781735 containerd[1715]: time="2025-01-30T13:08:54.781130328Z" level=info msg="TearDown network for sandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\" successfully" Jan 30 13:08:54.787616 containerd[1715]: time="2025-01-30T13:08:54.787583779Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.787699 containerd[1715]: time="2025-01-30T13:08:54.787625279Z" level=info msg="RemovePodSandbox \"1084f3b473f4754e4a735213cafcdcd128f73fab2641b096c6a2b18be0b0faaf\" returns successfully" Jan 30 13:08:54.788026 containerd[1715]: time="2025-01-30T13:08:54.787957881Z" level=info msg="StopPodSandbox for \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\"" Jan 30 13:08:54.788101 containerd[1715]: time="2025-01-30T13:08:54.788061782Z" level=info msg="TearDown network for sandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\" successfully" Jan 30 13:08:54.788101 containerd[1715]: time="2025-01-30T13:08:54.788078482Z" level=info msg="StopPodSandbox for \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\" returns successfully" Jan 30 13:08:54.788383 containerd[1715]: time="2025-01-30T13:08:54.788346184Z" level=info msg="RemovePodSandbox for \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\"" Jan 30 13:08:54.788383 containerd[1715]: time="2025-01-30T13:08:54.788375185Z" level=info msg="Forcibly stopping sandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\"" Jan 30 13:08:54.788572 containerd[1715]: time="2025-01-30T13:08:54.788475885Z" level=info msg="TearDown network for sandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\" successfully" Jan 30 13:08:54.798377 containerd[1715]: time="2025-01-30T13:08:54.798349262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.798471 containerd[1715]: time="2025-01-30T13:08:54.798391663Z" level=info msg="RemovePodSandbox \"18e7e8b33a83b8921f8e44b5d1bbad3ac616484164f6e8be93a9dec0f964243f\" returns successfully" Jan 30 13:08:54.798683 containerd[1715]: time="2025-01-30T13:08:54.798658265Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\"" Jan 30 13:08:54.798757 containerd[1715]: time="2025-01-30T13:08:54.798740065Z" level=info msg="TearDown network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" successfully" Jan 30 13:08:54.798808 containerd[1715]: time="2025-01-30T13:08:54.798754365Z" level=info msg="StopPodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" returns successfully" Jan 30 13:08:54.799093 containerd[1715]: time="2025-01-30T13:08:54.799070068Z" level=info msg="RemovePodSandbox for \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\"" Jan 30 13:08:54.799180 containerd[1715]: time="2025-01-30T13:08:54.799093568Z" level=info msg="Forcibly stopping sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\"" Jan 30 13:08:54.799221 containerd[1715]: time="2025-01-30T13:08:54.799163869Z" level=info msg="TearDown network for sandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" successfully" Jan 30 13:08:54.805828 containerd[1715]: time="2025-01-30T13:08:54.805800020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.805937 containerd[1715]: time="2025-01-30T13:08:54.805838721Z" level=info msg="RemovePodSandbox \"6508aff7ee47a935a5fe01d7189b61a4b736929f6498331c173061ebb19bdf77\" returns successfully" Jan 30 13:08:54.806148 containerd[1715]: time="2025-01-30T13:08:54.806125123Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\"" Jan 30 13:08:54.806266 containerd[1715]: time="2025-01-30T13:08:54.806219024Z" level=info msg="TearDown network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" successfully" Jan 30 13:08:54.806266 containerd[1715]: time="2025-01-30T13:08:54.806234224Z" level=info msg="StopPodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" returns successfully" Jan 30 13:08:54.806511 containerd[1715]: time="2025-01-30T13:08:54.806492526Z" level=info msg="RemovePodSandbox for \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\"" Jan 30 13:08:54.806577 containerd[1715]: time="2025-01-30T13:08:54.806519326Z" level=info msg="Forcibly stopping sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\"" Jan 30 13:08:54.806621 containerd[1715]: time="2025-01-30T13:08:54.806590726Z" level=info msg="TearDown network for sandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" successfully" Jan 30 13:08:54.813457 containerd[1715]: time="2025-01-30T13:08:54.813428980Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.813575 containerd[1715]: time="2025-01-30T13:08:54.813467180Z" level=info msg="RemovePodSandbox \"22d5b546957c7f5b0493e66dee5819f004f4f06742947d1deea7d3b6f1d1370d\" returns successfully" Jan 30 13:08:54.814058 containerd[1715]: time="2025-01-30T13:08:54.814020184Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\"" Jan 30 13:08:54.814377 containerd[1715]: time="2025-01-30T13:08:54.814142285Z" level=info msg="TearDown network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" successfully" Jan 30 13:08:54.814377 containerd[1715]: time="2025-01-30T13:08:54.814161285Z" level=info msg="StopPodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" returns successfully" Jan 30 13:08:54.814511 containerd[1715]: time="2025-01-30T13:08:54.814479988Z" level=info msg="RemovePodSandbox for \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\"" Jan 30 13:08:54.814570 containerd[1715]: time="2025-01-30T13:08:54.814507388Z" level=info msg="Forcibly stopping sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\"" Jan 30 13:08:54.814633 containerd[1715]: time="2025-01-30T13:08:54.814596489Z" level=info msg="TearDown network for sandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" successfully" Jan 30 13:08:54.822570 containerd[1715]: time="2025-01-30T13:08:54.822431350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.822570 containerd[1715]: time="2025-01-30T13:08:54.822479850Z" level=info msg="RemovePodSandbox \"fbe4682b1ca8c980a70b85e1fa064664796d51bcdca86db5e1c916b440a67353\" returns successfully" Jan 30 13:08:54.824059 containerd[1715]: time="2025-01-30T13:08:54.823970762Z" level=info msg="StopPodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\"" Jan 30 13:08:54.824161 containerd[1715]: time="2025-01-30T13:08:54.824088363Z" level=info msg="TearDown network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" successfully" Jan 30 13:08:54.824161 containerd[1715]: time="2025-01-30T13:08:54.824107563Z" level=info msg="StopPodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" returns successfully" Jan 30 13:08:54.824833 containerd[1715]: time="2025-01-30T13:08:54.824802768Z" level=info msg="RemovePodSandbox for \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\"" Jan 30 13:08:54.824961 containerd[1715]: time="2025-01-30T13:08:54.824939369Z" level=info msg="Forcibly stopping sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\"" Jan 30 13:08:54.825186 containerd[1715]: time="2025-01-30T13:08:54.825149971Z" level=info msg="TearDown network for sandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" successfully" Jan 30 13:08:54.835939 containerd[1715]: time="2025-01-30T13:08:54.835908855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.836064 containerd[1715]: time="2025-01-30T13:08:54.835957755Z" level=info msg="RemovePodSandbox \"d28a24fef52f51723f1c83072d321717fb09bc4b5edc106dd21893068129de7d\" returns successfully" Jan 30 13:08:54.836405 containerd[1715]: time="2025-01-30T13:08:54.836382158Z" level=info msg="StopPodSandbox for \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\"" Jan 30 13:08:54.836509 containerd[1715]: time="2025-01-30T13:08:54.836475959Z" level=info msg="TearDown network for sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\" successfully" Jan 30 13:08:54.836509 containerd[1715]: time="2025-01-30T13:08:54.836492659Z" level=info msg="StopPodSandbox for \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\" returns successfully" Jan 30 13:08:54.836889 containerd[1715]: time="2025-01-30T13:08:54.836846762Z" level=info msg="RemovePodSandbox for \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\"" Jan 30 13:08:54.836889 containerd[1715]: time="2025-01-30T13:08:54.836876562Z" level=info msg="Forcibly stopping sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\"" Jan 30 13:08:54.837032 containerd[1715]: time="2025-01-30T13:08:54.836946263Z" level=info msg="TearDown network for sandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\" successfully" Jan 30 13:08:54.847212 containerd[1715]: time="2025-01-30T13:08:54.847173742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.847309 containerd[1715]: time="2025-01-30T13:08:54.847213843Z" level=info msg="RemovePodSandbox \"1f435ec8ddb6cd9e57843e66fbad22db989534c13335d5626ac074a98051ea1a\" returns successfully" Jan 30 13:08:54.847584 containerd[1715]: time="2025-01-30T13:08:54.847495645Z" level=info msg="StopPodSandbox for \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\"" Jan 30 13:08:54.847697 containerd[1715]: time="2025-01-30T13:08:54.847584845Z" level=info msg="TearDown network for sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\" successfully" Jan 30 13:08:54.847697 containerd[1715]: time="2025-01-30T13:08:54.847598646Z" level=info msg="StopPodSandbox for \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\" returns successfully" Jan 30 13:08:54.847900 containerd[1715]: time="2025-01-30T13:08:54.847873648Z" level=info msg="RemovePodSandbox for \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\"" Jan 30 13:08:54.847961 containerd[1715]: time="2025-01-30T13:08:54.847902248Z" level=info msg="Forcibly stopping sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\"" Jan 30 13:08:54.848134 containerd[1715]: time="2025-01-30T13:08:54.847987449Z" level=info msg="TearDown network for sandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\" successfully" Jan 30 13:08:54.855758 containerd[1715]: time="2025-01-30T13:08:54.855728509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.855849 containerd[1715]: time="2025-01-30T13:08:54.855786209Z" level=info msg="RemovePodSandbox \"96245364e85dd132ee4993ea7466a9f2f1a2568afbbfa5132d4d42745f88aa47\" returns successfully" Jan 30 13:08:54.856151 containerd[1715]: time="2025-01-30T13:08:54.856119412Z" level=info msg="StopPodSandbox for \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\"" Jan 30 13:08:54.856225 containerd[1715]: time="2025-01-30T13:08:54.856209213Z" level=info msg="TearDown network for sandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\" successfully" Jan 30 13:08:54.856270 containerd[1715]: time="2025-01-30T13:08:54.856223513Z" level=info msg="StopPodSandbox for \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\" returns successfully" Jan 30 13:08:54.856528 containerd[1715]: time="2025-01-30T13:08:54.856494715Z" level=info msg="RemovePodSandbox for \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\"" Jan 30 13:08:54.856607 containerd[1715]: time="2025-01-30T13:08:54.856527815Z" level=info msg="Forcibly stopping sandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\"" Jan 30 13:08:54.856651 containerd[1715]: time="2025-01-30T13:08:54.856601316Z" level=info msg="TearDown network for sandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\" successfully" Jan 30 13:08:54.865418 containerd[1715]: time="2025-01-30T13:08:54.865390684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:08:54.865529 containerd[1715]: time="2025-01-30T13:08:54.865432984Z" level=info msg="RemovePodSandbox \"a1ca8e5a1c6d08ee135a3fc632a5097c1d01c7556eb9ae17740d4c5b339c7f94\" returns successfully" Jan 30 13:08:55.745770 kubelet[2603]: E0130 13:08:55.745721 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:56.746803 kubelet[2603]: E0130 13:08:56.746761 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:57.746981 kubelet[2603]: E0130 13:08:57.746919 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:58.747780 kubelet[2603]: E0130 13:08:58.747719 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:08:59.748239 kubelet[2603]: E0130 13:08:59.748171 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:00.748569 kubelet[2603]: E0130 13:09:00.748517 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:01.748988 kubelet[2603]: E0130 13:09:01.748928 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:02.750135 kubelet[2603]: E0130 13:09:02.750078 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:03.750764 kubelet[2603]: E0130 13:09:03.750692 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:04.751236 kubelet[2603]: E0130 13:09:04.751169 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 30 13:09:05.752361 kubelet[2603]: E0130 13:09:05.752304 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:06.538510 kubelet[2603]: I0130 13:09:06.538371 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=26.781716199999998 podStartE2EDuration="32.538311722s" podCreationTimestamp="2025-01-30 13:08:34 +0000 UTC" firstStartedPulling="2025-01-30 13:08:35.318263941 +0000 UTC m=+41.545608037" lastFinishedPulling="2025-01-30 13:08:41.074859463 +0000 UTC m=+47.302203559" observedRunningTime="2025-01-30 13:08:42.081176017 +0000 UTC m=+48.308520113" watchObservedRunningTime="2025-01-30 13:09:06.538311722 +0000 UTC m=+72.765655918" Jan 30 13:09:06.541778 kubelet[2603]: I0130 13:09:06.541741 2603 topology_manager.go:215] "Topology Admit Handler" podUID="f14eaef0-21c9-4e38-a381-f5e13e5f0562" podNamespace="default" podName="test-pod-1" Jan 30 13:09:06.548087 systemd[1]: Created slice kubepods-besteffort-podf14eaef0_21c9_4e38_a381_f5e13e5f0562.slice - libcontainer container kubepods-besteffort-podf14eaef0_21c9_4e38_a381_f5e13e5f0562.slice. 
Jan 30 13:09:06.581282 kubelet[2603]: I0130 13:09:06.581203 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmvf2\" (UniqueName: \"kubernetes.io/projected/f14eaef0-21c9-4e38-a381-f5e13e5f0562-kube-api-access-jmvf2\") pod \"test-pod-1\" (UID: \"f14eaef0-21c9-4e38-a381-f5e13e5f0562\") " pod="default/test-pod-1" Jan 30 13:09:06.581282 kubelet[2603]: I0130 13:09:06.581253 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-456f9ba6-61b8-4d75-906e-d2ca3bd84916\" (UniqueName: \"kubernetes.io/nfs/f14eaef0-21c9-4e38-a381-f5e13e5f0562-pvc-456f9ba6-61b8-4d75-906e-d2ca3bd84916\") pod \"test-pod-1\" (UID: \"f14eaef0-21c9-4e38-a381-f5e13e5f0562\") " pod="default/test-pod-1" Jan 30 13:09:06.753481 kubelet[2603]: E0130 13:09:06.753424 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:06.777035 kernel: FS-Cache: Loaded Jan 30 13:09:06.889856 kernel: RPC: Registered named UNIX socket transport module. Jan 30 13:09:06.890009 kernel: RPC: Registered udp transport module. Jan 30 13:09:06.890054 kernel: RPC: Registered tcp transport module. Jan 30 13:09:06.890078 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 13:09:06.890099 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 30 13:09:07.215259 kernel: NFS: Registering the id_resolver key type Jan 30 13:09:07.215393 kernel: Key type id_resolver registered Jan 30 13:09:07.215413 kernel: Key type id_legacy registered Jan 30 13:09:07.347326 nfsidmap[4374]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-a-551420da85' Jan 30 13:09:07.406842 nfsidmap[4375]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-a-551420da85' Jan 30 13:09:07.452265 containerd[1715]: time="2025-01-30T13:09:07.452221328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f14eaef0-21c9-4e38-a381-f5e13e5f0562,Namespace:default,Attempt:0,}" Jan 30 13:09:07.595133 systemd-networkd[1511]: cali5ec59c6bf6e: Link UP Jan 30 13:09:07.596428 systemd-networkd[1511]: cali5ec59c6bf6e: Gained carrier Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.528 [INFO][4377] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.27-k8s-test--pod--1-eth0 default f14eaef0-21c9-4e38-a381-f5e13e5f0562 1460 0 2025-01-30 13:08:36 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.27 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.27-k8s-test--pod--1-" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.529 [INFO][4377] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.27-k8s-test--pod--1-eth0" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.556 [INFO][4387] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" HandleID="k8s-pod-network.f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Workload="10.200.4.27-k8s-test--pod--1-eth0" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.566 [INFO][4387] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" HandleID="k8s-pod-network.f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Workload="10.200.4.27-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267770), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.27", "pod":"test-pod-1", "timestamp":"2025-01-30 13:09:07.556713355 +0000 UTC"}, Hostname:"10.200.4.27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.566 [INFO][4387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.566 [INFO][4387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.566 [INFO][4387] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.27' Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.568 [INFO][4387] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" host="10.200.4.27" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.572 [INFO][4387] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.27" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.575 [INFO][4387] ipam/ipam.go 489: Trying affinity for 192.168.65.0/26 host="10.200.4.27" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.577 [INFO][4387] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.0/26 host="10.200.4.27" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.578 [INFO][4387] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="10.200.4.27" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.578 [INFO][4387] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" host="10.200.4.27" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.580 [INFO][4387] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.585 [INFO][4387] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.0/26 handle="k8s-pod-network.f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" host="10.200.4.27" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.590 [INFO][4387] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.4/26] block=192.168.65.0/26 
handle="k8s-pod-network.f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" host="10.200.4.27" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.590 [INFO][4387] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.4/26] handle="k8s-pod-network.f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" host="10.200.4.27" Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.591 [INFO][4387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:09:07.606893 containerd[1715]: 2025-01-30 13:09:07.591 [INFO][4387] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.4/26] IPv6=[] ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" HandleID="k8s-pod-network.f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Workload="10.200.4.27-k8s-test--pod--1-eth0" Jan 30 13:09:07.607773 containerd[1715]: 2025-01-30 13:09:07.592 [INFO][4377] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.27-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.27-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f14eaef0-21c9-4e38-a381-f5e13e5f0562", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.200.4.27", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:09:07.607773 containerd[1715]: 2025-01-30 13:09:07.592 [INFO][4377] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.4/32] ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.27-k8s-test--pod--1-eth0" Jan 30 13:09:07.607773 containerd[1715]: 2025-01-30 13:09:07.592 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.27-k8s-test--pod--1-eth0" Jan 30 13:09:07.607773 containerd[1715]: 2025-01-30 13:09:07.596 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.27-k8s-test--pod--1-eth0" Jan 30 13:09:07.607773 containerd[1715]: 2025-01-30 13:09:07.597 [INFO][4377] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.27-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.27-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f14eaef0-21c9-4e38-a381-f5e13e5f0562", ResourceVersion:"1460", Generation:0, 
CreationTimestamp:time.Date(2025, time.January, 30, 13, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.27", ContainerID:"f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"06:a0:38:8a:74:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:09:07.607773 containerd[1715]: 2025-01-30 13:09:07.605 [INFO][4377] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.27-k8s-test--pod--1-eth0" Jan 30 13:09:07.645479 containerd[1715]: time="2025-01-30T13:09:07.645361541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:09:07.645823 containerd[1715]: time="2025-01-30T13:09:07.645426142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:09:07.646132 containerd[1715]: time="2025-01-30T13:09:07.645773945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:07.647557 containerd[1715]: time="2025-01-30T13:09:07.646255249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:07.671215 systemd[1]: Started cri-containerd-f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c.scope - libcontainer container f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c. Jan 30 13:09:07.726064 containerd[1715]: time="2025-01-30T13:09:07.725927256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f14eaef0-21c9-4e38-a381-f5e13e5f0562,Namespace:default,Attempt:0,} returns sandbox id \"f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c\"" Jan 30 13:09:07.727833 containerd[1715]: time="2025-01-30T13:09:07.727793172Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:09:07.754183 kubelet[2603]: E0130 13:09:07.754132 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:08.106520 containerd[1715]: time="2025-01-30T13:09:08.106463831Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:09:08.110308 containerd[1715]: time="2025-01-30T13:09:08.109709260Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 13:09:08.114779 containerd[1715]: time="2025-01-30T13:09:08.114657504Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 386.819331ms" Jan 30 13:09:08.114779 containerd[1715]: 
time="2025-01-30T13:09:08.114777905Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:09:08.119808 containerd[1715]: time="2025-01-30T13:09:08.119768149Z" level=info msg="CreateContainer within sandbox \"f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 13:09:08.151153 containerd[1715]: time="2025-01-30T13:09:08.151107427Z" level=info msg="CreateContainer within sandbox \"f0a8ce71ccc5308914b896aa269e48675b8dc1a0a31d1c4b6c61930ccd75eb4c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6106108b5c89c8d30c9a7f6585fb6cd87abfe86acb7dfac5f007513239693adc\"" Jan 30 13:09:08.151788 containerd[1715]: time="2025-01-30T13:09:08.151754433Z" level=info msg="StartContainer for \"6106108b5c89c8d30c9a7f6585fb6cd87abfe86acb7dfac5f007513239693adc\"" Jan 30 13:09:08.185160 systemd[1]: Started cri-containerd-6106108b5c89c8d30c9a7f6585fb6cd87abfe86acb7dfac5f007513239693adc.scope - libcontainer container 6106108b5c89c8d30c9a7f6585fb6cd87abfe86acb7dfac5f007513239693adc. 
Jan 30 13:09:08.220051 containerd[1715]: time="2025-01-30T13:09:08.220009538Z" level=info msg="StartContainer for \"6106108b5c89c8d30c9a7f6585fb6cd87abfe86acb7dfac5f007513239693adc\" returns successfully" Jan 30 13:09:08.754725 kubelet[2603]: E0130 13:09:08.754664 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:09.147361 kubelet[2603]: I0130 13:09:09.147299 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.757871808 podStartE2EDuration="33.147281762s" podCreationTimestamp="2025-01-30 13:08:36 +0000 UTC" firstStartedPulling="2025-01-30 13:09:07.72749497 +0000 UTC m=+73.954839066" lastFinishedPulling="2025-01-30 13:09:08.116904824 +0000 UTC m=+74.344249020" observedRunningTime="2025-01-30 13:09:09.14702426 +0000 UTC m=+75.374368456" watchObservedRunningTime="2025-01-30 13:09:09.147281762 +0000 UTC m=+75.374625958" Jan 30 13:09:09.469222 systemd-networkd[1511]: cali5ec59c6bf6e: Gained IPv6LL Jan 30 13:09:09.755483 kubelet[2603]: E0130 13:09:09.755354 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:10.755817 kubelet[2603]: E0130 13:09:10.755759 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:11.756248 kubelet[2603]: E0130 13:09:11.756183 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:12.756908 kubelet[2603]: E0130 13:09:12.756841 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:13.757797 kubelet[2603]: E0130 13:09:13.757731 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:14.701076 
kubelet[2603]: E0130 13:09:14.701017 2603 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:14.758735 kubelet[2603]: E0130 13:09:14.758674 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:15.759089 kubelet[2603]: E0130 13:09:15.759029 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:16.760123 kubelet[2603]: E0130 13:09:16.760064 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:17.760345 kubelet[2603]: E0130 13:09:17.760284 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:18.761454 kubelet[2603]: E0130 13:09:18.761393 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:19.761614 kubelet[2603]: E0130 13:09:19.761552 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:09:20.762729 kubelet[2603]: E0130 13:09:20.762662 2603 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"