Feb 13 16:01:29.107466 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025
Feb 13 16:01:29.107533 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 16:01:29.107551 kernel: BIOS-provided physical RAM map:
Feb 13 16:01:29.107564 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 16:01:29.107578 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 13 16:01:29.107591 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 13 16:01:29.107605 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 13 16:01:29.107617 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 13 16:01:29.107634 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 13 16:01:29.107646 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 13 16:01:29.107661 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 13 16:01:29.107673 kernel: printk: bootconsole [earlyser0] enabled
Feb 13 16:01:29.107688 kernel: NX (Execute Disable) protection: active
Feb 13 16:01:29.107703 kernel: APIC: Static calls initialized
Feb 13 16:01:29.107729 kernel: efi: EFI v2.7 by Microsoft
Feb 13 16:01:29.107746 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018
Feb 13 16:01:29.107760 kernel: random: crng init done
Feb 13 16:01:29.107773 kernel: secureboot: Secure boot disabled
Feb 13 16:01:29.107787 kernel: SMBIOS 3.1.0 present.
Feb 13 16:01:29.107800 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Feb 13 16:01:29.107818 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 13 16:01:29.107831 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 13 16:01:29.107845 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0
Feb 13 16:01:29.107857 kernel: Hyper-V: Nested features: 0x1e0101
Feb 13 16:01:29.107875 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 13 16:01:29.107889 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 13 16:01:29.107904 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 16:01:29.107919 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 16:01:29.107934 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 13 16:01:29.107951 kernel: tsc: Detected 2593.906 MHz processor
Feb 13 16:01:29.107966 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 16:01:29.107981 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 16:01:29.107997 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 13 16:01:29.108016 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 16:01:29.108030 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 16:01:29.108046 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 13 16:01:29.108060 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 13 16:01:29.108075 kernel: Using GB pages for direct mapping
Feb 13 16:01:29.108091 kernel: ACPI: Early table checksum verification disabled
Feb 13 16:01:29.108106 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 13 16:01:29.108129 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 16:01:29.108148 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 16:01:29.108162 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 13 16:01:29.108174 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 13 16:01:29.108340 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 16:01:29.108350 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 16:01:29.108360 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 16:01:29.108373 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 16:01:29.108384 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 16:01:29.108392 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 16:01:29.108400 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 16:01:29.108411 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 13 16:01:29.108418 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 13 16:01:29.108427 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 13 16:01:29.108437 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 13 16:01:29.108444 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 13 16:01:29.108458 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 13 16:01:29.108466 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 13 16:01:29.108474 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 13 16:01:29.108484 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 13 16:01:29.108492 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 13 16:01:29.108501 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 16:01:29.108520 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 16:01:29.108532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 13 16:01:29.108540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 13 16:01:29.108553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 13 16:01:29.108561 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 13 16:01:29.108569 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 13 16:01:29.108580 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 13 16:01:29.108588 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 13 16:01:29.108596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 13 16:01:29.108606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 13 16:01:29.108614 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 13 16:01:29.108627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 13 16:01:29.108636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 13 16:01:29.108643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 13 16:01:29.108654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 13 16:01:29.108662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 13 16:01:29.108670 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 13 16:01:29.108682 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 13 16:01:29.108692 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 13 16:01:29.108701 kernel: Zone ranges:
Feb 13 16:01:29.108713 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 16:01:29.108724 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 16:01:29.108733 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 16:01:29.108743 kernel: Movable zone start for each node
Feb 13 16:01:29.108753 kernel: Early memory node ranges
Feb 13 16:01:29.108762 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 16:01:29.108771 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 13 16:01:29.108779 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 13 16:01:29.108790 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 16:01:29.108801 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 13 16:01:29.108810 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 16:01:29.108819 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 16:01:29.108827 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 13 16:01:29.108839 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 13 16:01:29.108847 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 13 16:01:29.108855 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 13 16:01:29.108866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 16:01:29.108873 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 16:01:29.108887 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 13 16:01:29.108895 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 16:01:29.108903 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 13 16:01:29.108914 kernel: Booting paravirtualized kernel on Hyper-V
Feb 13 16:01:29.108922 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 16:01:29.108931 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 16:01:29.108941 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 16:01:29.108948 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 16:01:29.108957 kernel: pcpu-alloc: [0] 0 1
Feb 13 16:01:29.108969 kernel: Hyper-V: PV spinlocks enabled
Feb 13 16:01:29.108977 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 16:01:29.108990 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 16:01:29.108998 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 16:01:29.109008 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 16:01:29.109016 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 16:01:29.109024 kernel: Fallback order for Node 0: 0
Feb 13 16:01:29.109035 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 13 16:01:29.109046 kernel: Policy zone: Normal
Feb 13 16:01:29.109066 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 16:01:29.109074 kernel: software IO TLB: area num 2.
Feb 13 16:01:29.109089 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 312164K reserved, 0K cma-reserved)
Feb 13 16:01:29.109097 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 16:01:29.109107 kernel: ftrace: allocating 37890 entries in 149 pages
Feb 13 16:01:29.109116 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 16:01:29.109124 kernel: Dynamic Preempt: voluntary
Feb 13 16:01:29.109136 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 16:01:29.109145 kernel: rcu: RCU event tracing is enabled.
Feb 13 16:01:29.109156 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 16:01:29.109170 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 16:01:29.109181 kernel: Rude variant of Tasks RCU enabled.
Feb 13 16:01:29.109190 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 16:01:29.109200 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 16:01:29.109211 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 16:01:29.109219 kernel: Using NULL legacy PIC
Feb 13 16:01:29.109232 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 13 16:01:29.109241 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 16:01:29.109249 kernel: Console: colour dummy device 80x25
Feb 13 16:01:29.109260 kernel: printk: console [tty1] enabled
Feb 13 16:01:29.109269 kernel: printk: console [ttyS0] enabled
Feb 13 16:01:29.109278 kernel: printk: bootconsole [earlyser0] disabled
Feb 13 16:01:29.109288 kernel: ACPI: Core revision 20230628
Feb 13 16:01:29.109296 kernel: Failed to register legacy timer interrupt
Feb 13 16:01:29.109308 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 16:01:29.109319 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 16:01:29.109329 kernel: Hyper-V: Using IPI hypercalls
Feb 13 16:01:29.109338 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Feb 13 16:01:29.109346 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Feb 13 16:01:29.109358 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Feb 13 16:01:29.109366 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Feb 13 16:01:29.109375 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Feb 13 16:01:29.109385 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Feb 13 16:01:29.109393 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 13 16:01:29.109408 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 16:01:29.109416 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 16:01:29.109425 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 16:01:29.109435 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 16:01:29.109443 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 16:01:29.109453 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 16:01:29.109462 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 16:01:29.109470 kernel: RETBleed: Vulnerable
Feb 13 16:01:29.109481 kernel: Speculative Store Bypass: Vulnerable
Feb 13 16:01:29.109489 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 16:01:29.109501 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 16:01:29.109517 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 16:01:29.109528 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 16:01:29.109537 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 16:01:29.109545 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 16:01:29.109556 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 16:01:29.109564 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 16:01:29.109572 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 16:01:29.109583 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 13 16:01:29.109592 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 13 16:01:29.109602 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 13 16:01:29.109616 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 13 16:01:29.109626 kernel: Freeing SMP alternatives memory: 32K
Feb 13 16:01:29.109635 kernel: pid_max: default: 32768 minimum: 301
Feb 13 16:01:29.109645 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 16:01:29.109656 kernel: landlock: Up and running.
Feb 13 16:01:29.109664 kernel: SELinux: Initializing.
Feb 13 16:01:29.109673 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 16:01:29.109683 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 16:01:29.109691 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 16:01:29.109701 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:01:29.109711 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:01:29.109723 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:01:29.109733 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 16:01:29.109741 kernel: signal: max sigframe size: 3632
Feb 13 16:01:29.109753 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 16:01:29.109762 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 16:01:29.109770 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 16:01:29.109781 kernel: smp: Bringing up secondary CPUs ...
Feb 13 16:01:29.109789 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 16:01:29.109798 kernel: .... node #0, CPUs: #1
Feb 13 16:01:29.109813 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 13 16:01:29.109823 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 16:01:29.109833 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 16:01:29.109841 kernel: smpboot: Max logical packages: 1
Feb 13 16:01:29.109852 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 13 16:01:29.109860 kernel: devtmpfs: initialized
Feb 13 16:01:29.109869 kernel: x86/mm: Memory block size: 128MB
Feb 13 16:01:29.109880 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 13 16:01:29.109891 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 16:01:29.109902 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 16:01:29.109910 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 16:01:29.109919 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 16:01:29.109929 kernel: audit: initializing netlink subsys (disabled)
Feb 13 16:01:29.109937 kernel: audit: type=2000 audit(1739462488.028:1): state=initialized audit_enabled=0 res=1
Feb 13 16:01:29.109948 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 16:01:29.109956 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 16:01:29.109965 kernel: cpuidle: using governor menu
Feb 13 16:01:29.109979 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 16:01:29.109987 kernel: dca service started, version 1.12.1
Feb 13 16:01:29.109997 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Feb 13 16:01:29.110007 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 16:01:29.110016 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 16:01:29.110027 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 16:01:29.110037 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 16:01:29.110047 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 16:01:29.110060 kernel: ACPI: Added _OSI(Module Device)
Feb 13 16:01:29.110074 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 16:01:29.110083 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 16:01:29.110093 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 16:01:29.110104 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 16:01:29.110112 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 16:01:29.110122 kernel: ACPI: Interpreter enabled
Feb 13 16:01:29.110131 kernel: ACPI: PM: (supports S0 S5)
Feb 13 16:01:29.110139 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 16:01:29.110151 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 16:01:29.110163 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 16:01:29.110175 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 13 16:01:29.110183 kernel: iommu: Default domain type: Translated
Feb 13 16:01:29.110192 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 16:01:29.110202 kernel: efivars: Registered efivars operations
Feb 13 16:01:29.110210 kernel: PCI: Using ACPI for IRQ routing
Feb 13 16:01:29.110220 kernel: PCI: System does not support PCI
Feb 13 16:01:29.110230 kernel: vgaarb: loaded
Feb 13 16:01:29.110238 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 13 16:01:29.110252 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 16:01:29.110260 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 16:01:29.110269 kernel: pnp: PnP ACPI init
Feb 13 16:01:29.110279 kernel: pnp: PnP ACPI: found 3 devices
Feb 13 16:01:29.110287 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 16:01:29.110299 kernel: NET: Registered PF_INET protocol family
Feb 13 16:01:29.110307 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 16:01:29.110316 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 16:01:29.110327 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 16:01:29.110339 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 16:01:29.110349 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 16:01:29.110357 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 16:01:29.110367 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 16:01:29.110377 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 16:01:29.110385 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 16:01:29.110397 kernel: NET: Registered PF_XDP protocol family
Feb 13 16:01:29.110405 kernel: PCI: CLS 0 bytes, default 64
Feb 13 16:01:29.110414 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 16:01:29.110427 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Feb 13 16:01:29.110436 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 16:01:29.110447 kernel: Initialise system trusted keyrings
Feb 13 16:01:29.110455 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 16:01:29.110464 kernel: Key type asymmetric registered
Feb 13 16:01:29.110474 kernel: Asymmetric key parser 'x509' registered
Feb 13 16:01:29.110486 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 16:01:29.110494 kernel: io scheduler mq-deadline registered
Feb 13 16:01:29.110505 kernel: io scheduler kyber registered
Feb 13 16:01:29.110531 kernel: io scheduler bfq registered
Feb 13 16:01:29.110541 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 16:01:29.110551 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 16:01:29.110559 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 16:01:29.110571 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 16:01:29.110579 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 16:01:29.110740 kernel: rtc_cmos 00:02: registered as rtc0
Feb 13 16:01:29.113605 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T16:01:28 UTC (1739462488)
Feb 13 16:01:29.113710 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 13 16:01:29.113723 kernel: intel_pstate: CPU model not supported
Feb 13 16:01:29.113732 kernel: efifb: probing for efifb
Feb 13 16:01:29.113742 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 16:01:29.113753 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 16:01:29.113761 kernel: efifb: scrolling: redraw
Feb 13 16:01:29.113770 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 16:01:29.113781 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 16:01:29.113789 kernel: fb0: EFI VGA frame buffer device
Feb 13 16:01:29.113804 kernel: pstore: Using crash dump compression: deflate
Feb 13 16:01:29.113813 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 16:01:29.113823 kernel: NET: Registered PF_INET6 protocol family
Feb 13 16:01:29.113833 kernel: Segment Routing with IPv6
Feb 13 16:01:29.113841 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 16:01:29.113852 kernel: NET: Registered PF_PACKET protocol family
Feb 13 16:01:29.113860 kernel: Key type dns_resolver registered
Feb 13 16:01:29.113871 kernel: IPI shorthand broadcast: enabled
Feb 13 16:01:29.113880 kernel: sched_clock: Marking stable (902003400, 47239900)->(1193787100, -244543800)
Feb 13 16:01:29.113895 kernel: registered taskstats version 1
Feb 13 16:01:29.113904 kernel: Loading compiled-in X.509 certificates
Feb 13 16:01:29.113913 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb'
Feb 13 16:01:29.113924 kernel: Key type .fscrypt registered
Feb 13 16:01:29.113932 kernel: Key type fscrypt-provisioning registered
Feb 13 16:01:29.113941 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 16:01:29.113952 kernel: ima: Allocated hash algorithm: sha1
Feb 13 16:01:29.113960 kernel: ima: No architecture policies found
Feb 13 16:01:29.113974 kernel: clk: Disabling unused clocks
Feb 13 16:01:29.113982 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 16:01:29.113993 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 16:01:29.114002 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 16:01:29.114014 kernel: Run /init as init process
Feb 13 16:01:29.114022 kernel: with arguments:
Feb 13 16:01:29.114031 kernel: /init
Feb 13 16:01:29.114041 kernel: with environment:
Feb 13 16:01:29.114049 kernel: HOME=/
Feb 13 16:01:29.114061 kernel: TERM=linux
Feb 13 16:01:29.114073 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 16:01:29.114086 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:01:29.114097 systemd[1]: Detected virtualization microsoft.
Feb 13 16:01:29.114108 systemd[1]: Detected architecture x86-64.
Feb 13 16:01:29.114117 systemd[1]: Running in initrd.
Feb 13 16:01:29.114125 systemd[1]: No hostname configured, using default hostname.
Feb 13 16:01:29.114137 systemd[1]: Hostname set to .
Feb 13 16:01:29.114149 systemd[1]: Initializing machine ID from random generator.
Feb 13 16:01:29.114161 systemd[1]: Queued start job for default target initrd.target.
Feb 13 16:01:29.114172 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:01:29.114182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:01:29.114195 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 16:01:29.114210 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:01:29.114233 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 16:01:29.114252 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 16:01:29.114282 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 16:01:29.114300 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 16:01:29.114322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:01:29.114339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:01:29.114355 systemd[1]: Reached target paths.target - Path Units.
Feb 13 16:01:29.114371 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:01:29.114388 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:01:29.114412 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 16:01:29.114430 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:01:29.114448 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:01:29.114469 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 16:01:29.114488 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 16:01:29.114505 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:01:29.114560 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:01:29.114576 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:01:29.114592 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 16:01:29.114616 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 16:01:29.114633 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 16:01:29.114650 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 16:01:29.114667 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 16:01:29.114684 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 16:01:29.114706 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 16:01:29.114756 systemd-journald[177]: Collecting audit messages is disabled.
Feb 13 16:01:29.114800 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:01:29.114818 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 16:01:29.114837 systemd-journald[177]: Journal started
Feb 13 16:01:29.114875 systemd-journald[177]: Runtime Journal (/run/log/journal/e65247a9225c466b931ba76540bded66) is 8.0M, max 158.8M, 150.8M free.
Feb 13 16:01:29.128390 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 16:01:29.129854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:01:29.137530 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 16:01:29.140063 systemd-modules-load[178]: Inserted module 'overlay'
Feb 13 16:01:29.142148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:01:29.159681 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:01:29.167636 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 16:01:29.182706 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 16:01:29.193247 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:01:29.198660 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 16:01:29.200643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 16:01:29.221388 kernel: Bridge firewalling registered
Feb 13 16:01:29.219894 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:01:29.227480 systemd-modules-load[178]: Inserted module 'br_netfilter'
Feb 13 16:01:29.230565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:01:29.234945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:01:29.243887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 16:01:29.257045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:01:29.265015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:01:29.275711 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 16:01:29.282680 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 16:01:29.302541 dracut-cmdline[210]: dracut-dracut-053 Feb 13 16:01:29.307251 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 16:01:29.343928 systemd-resolved[212]: Positive Trust Anchors: Feb 13 16:01:29.343945 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:01:29.344002 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:01:29.354721 systemd-resolved[212]: Defaulting to hostname 'linux'. Feb 13 16:01:29.381268 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:01:29.384470 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:01:29.399534 kernel: SCSI subsystem initialized Feb 13 16:01:29.411531 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 16:01:29.422536 kernel: iscsi: registered transport (tcp) Feb 13 16:01:29.444272 kernel: iscsi: registered transport (qla4xxx) Feb 13 16:01:29.444366 kernel: QLogic iSCSI HBA Driver Feb 13 16:01:29.480877 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 16:01:29.487877 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 16:01:29.519784 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 16:01:29.519915 kernel: device-mapper: uevent: version 1.0.3 Feb 13 16:01:29.523369 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 16:01:29.564541 kernel: raid6: avx512x4 gen() 25582 MB/s Feb 13 16:01:29.584528 kernel: raid6: avx512x2 gen() 25403 MB/s Feb 13 16:01:29.604527 kernel: raid6: avx512x1 gen() 25684 MB/s Feb 13 16:01:29.623535 kernel: raid6: avx2x4 gen() 21101 MB/s Feb 13 16:01:29.643525 kernel: raid6: avx2x2 gen() 22940 MB/s Feb 13 16:01:29.664205 kernel: raid6: avx2x1 gen() 20686 MB/s Feb 13 16:01:29.664254 kernel: raid6: using algorithm avx512x1 gen() 25684 MB/s Feb 13 16:01:29.686145 kernel: raid6: .... xor() 26252 MB/s, rmw enabled Feb 13 16:01:29.686186 kernel: raid6: using avx512x2 recovery algorithm Feb 13 16:01:29.709542 kernel: xor: automatically using best checksumming function avx Feb 13 16:01:29.853567 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 16:01:29.863275 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:01:29.873694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:01:29.902283 systemd-udevd[395]: Using default interface naming scheme 'v255'. Feb 13 16:01:29.906751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:01:29.922686 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Feb 13 16:01:29.937349 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Feb 13 16:01:29.967852 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:01:29.976653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:01:30.019436 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:01:30.038724 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 16:01:30.071001 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 16:01:30.075162 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:01:30.084102 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:01:30.089944 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:01:30.104741 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 16:01:30.124549 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 16:01:30.135767 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:01:30.150032 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 16:01:30.150112 kernel: AES CTR mode by8 optimization enabled Feb 13 16:01:30.151537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:01:30.154756 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:01:30.164985 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:01:30.171477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:01:30.171662 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:01:30.178312 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 16:01:30.196570 kernel: hv_vmbus: Vmbus version:5.2 Feb 13 16:01:30.192215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:01:30.205782 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:01:30.214668 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 16:01:30.205900 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:01:30.230639 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 16:01:30.224967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:01:30.254736 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 16:01:30.254799 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 16:01:30.256688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:01:30.273807 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 16:01:30.277560 kernel: scsi host1: storvsc_host_t Feb 13 16:01:30.280308 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 16:01:30.287664 kernel: scsi host0: storvsc_host_t Feb 13 16:01:30.287923 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 16:01:30.297284 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 16:01:30.297385 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 16:01:30.304997 kernel: PTP clock support registered Feb 13 16:01:30.305059 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 16:01:30.311707 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 16:01:30.311767 kernel: hv_vmbus: registering driver hv_utils Feb 13 16:01:30.327562 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 16:01:30.328343 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:01:30.339537 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 16:01:30.339589 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 16:01:30.339605 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 16:01:31.401284 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 16:01:31.397696 systemd-resolved[212]: Clock change detected. Flushing caches. 
Feb 13 16:01:31.410555 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 16:01:31.426838 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 16:01:31.428629 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 16:01:31.428656 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 16:01:31.442506 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 16:01:31.458883 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 16:01:31.459102 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 16:01:31.459285 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 16:01:31.459449 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 16:01:31.459722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 16:01:31.459746 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 16:01:31.528551 kernel: hv_netvsc 7c1e5276-79f5-7c1e-5276-79f57c1e5276 eth0: VF slot 1 added Feb 13 16:01:31.536556 kernel: hv_vmbus: registering driver hv_pci Feb 13 16:01:31.542215 kernel: hv_pci f9a8341e-e31b-40ad-b08c-b9115ea4ad96: PCI VMBus probing: Using version 0x10004 Feb 13 16:01:31.590774 kernel: hv_pci f9a8341e-e31b-40ad-b08c-b9115ea4ad96: PCI host bridge to bus e31b:00 Feb 13 16:01:31.591367 kernel: pci_bus e31b:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 13 16:01:31.591583 kernel: pci_bus e31b:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 16:01:31.591749 kernel: pci e31b:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 13 16:01:31.591948 kernel: pci e31b:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 16:01:31.592136 kernel: pci e31b:00:02.0: enabling Extended Tags Feb 13 16:01:31.592309 kernel: pci e31b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e31b:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 13 16:01:31.592476 kernel: pci_bus e31b:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 16:01:31.593136 kernel: pci e31b:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 16:01:31.755500 kernel: mlx5_core e31b:00:02.0: enabling device (0000 -> 0002) Feb 13 16:01:32.030185 kernel: mlx5_core e31b:00:02.0: firmware version: 14.30.5000 Feb 13 16:01:32.030439 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (447) Feb 13 16:01:32.030462 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (444) Feb 13 16:01:32.030482 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 16:01:32.030503 kernel: hv_netvsc 7c1e5276-79f5-7c1e-5276-79f57c1e5276 eth0: VF registering: eth1 Feb 13 16:01:32.030719 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 16:01:32.030741 kernel: mlx5_core e31b:00:02.0 eth1: joined to eth0 Feb 13 16:01:32.030932 kernel: mlx5_core e31b:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 16:01:31.881358 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 16:01:31.910802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 16:01:31.933100 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 16:01:31.979376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 16:01:32.048858 kernel: mlx5_core e31b:00:02.0 enP58139s1: renamed from eth1 Feb 13 16:01:31.982899 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 16:01:31.990692 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 16:01:33.014289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 16:01:33.017596 disk-uuid[600]: The operation has completed successfully.
Feb 13 16:01:33.088348 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 16:01:33.088472 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 16:01:33.120705 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 16:01:33.127817 sh[687]: Success Feb 13 16:01:33.154722 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 16:01:33.311916 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 16:01:33.324663 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 16:01:33.329714 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 16:01:33.344551 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a Feb 13 16:01:33.344607 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:01:33.350432 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 16:01:33.353268 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 16:01:33.355810 kernel: BTRFS info (device dm-0): using free space tree Feb 13 16:01:33.551862 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 16:01:33.557359 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 16:01:33.566693 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 16:01:33.575689 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 16:01:33.604308 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 16:01:33.604392 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:01:33.604413 kernel: BTRFS info (device sda6): using free space tree Feb 13 16:01:33.619560 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 16:01:33.635455 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 16:01:33.634933 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 16:01:33.645316 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 16:01:33.657788 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 16:01:33.667245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:01:33.672691 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:01:33.706883 systemd-networkd[871]: lo: Link UP Feb 13 16:01:33.706893 systemd-networkd[871]: lo: Gained carrier Feb 13 16:01:33.709108 systemd-networkd[871]: Enumeration completed Feb 13 16:01:33.709210 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:01:33.726948 systemd[1]: Reached target network.target - Network. Feb 13 16:01:33.727187 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:01:33.727191 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 16:01:33.795552 kernel: mlx5_core e31b:00:02.0 enP58139s1: Link up Feb 13 16:01:33.833625 kernel: hv_netvsc 7c1e5276-79f5-7c1e-5276-79f57c1e5276 eth0: Data path switched to VF: enP58139s1 Feb 13 16:01:33.834253 systemd-networkd[871]: enP58139s1: Link UP Feb 13 16:01:33.834401 systemd-networkd[871]: eth0: Link UP Feb 13 16:01:33.834594 systemd-networkd[871]: eth0: Gained carrier Feb 13 16:01:33.834611 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:01:33.846740 systemd-networkd[871]: enP58139s1: Gained carrier Feb 13 16:01:33.871585 systemd-networkd[871]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 16:01:34.387804 ignition[862]: Ignition 2.20.0 Feb 13 16:01:34.387818 ignition[862]: Stage: fetch-offline Feb 13 16:01:34.390842 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:01:34.387870 ignition[862]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:34.387882 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:34.388008 ignition[862]: parsed url from cmdline: "" Feb 13 16:01:34.388013 ignition[862]: no config URL provided Feb 13 16:01:34.388019 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 16:01:34.388031 ignition[862]: no config at "/usr/lib/ignition/user.ign" Feb 13 16:01:34.388039 ignition[862]: failed to fetch config: resource requires networking Feb 13 16:01:34.388423 ignition[862]: Ignition finished successfully Feb 13 16:01:34.424756 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 16:01:34.439077 ignition[879]: Ignition 2.20.0 Feb 13 16:01:34.439089 ignition[879]: Stage: fetch Feb 13 16:01:34.439301 ignition[879]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:34.439316 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:34.439421 ignition[879]: parsed url from cmdline: "" Feb 13 16:01:34.439425 ignition[879]: no config URL provided Feb 13 16:01:34.439429 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 16:01:34.439436 ignition[879]: no config at "/usr/lib/ignition/user.ign" Feb 13 16:01:34.439460 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 16:01:34.521348 ignition[879]: GET result: OK Feb 13 16:01:34.521646 ignition[879]: config has been read from IMDS userdata Feb 13 16:01:34.521667 ignition[879]: parsing config with SHA512: 51636d9f85ec32e8dc3babcc5492919b7fe3e37ffccab6db85ba97d7415018a53ac8987f63a17fe4b929d786242d432e3e792b46c3ee8f137c813b7aa65e90f4 Feb 13 16:01:34.525879 unknown[879]: fetched base config from "system" Feb 13 16:01:34.526443 ignition[879]: fetch: fetch complete Feb 13 16:01:34.525891 unknown[879]: fetched base config from "system" Feb 13 16:01:34.526451 ignition[879]: fetch: fetch passed Feb 13 16:01:34.525898 unknown[879]: fetched user config from "azure" Feb 13 16:01:34.526512 ignition[879]: Ignition finished successfully Feb 13 16:01:34.538467 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 16:01:34.545731 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 16:01:34.559842 ignition[885]: Ignition 2.20.0 Feb 13 16:01:34.559854 ignition[885]: Stage: kargs Feb 13 16:01:34.561907 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Feb 13 16:01:34.560065 ignition[885]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:34.560079 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:34.560749 ignition[885]: kargs: kargs passed Feb 13 16:01:34.560791 ignition[885]: Ignition finished successfully Feb 13 16:01:34.574739 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 16:01:34.588900 ignition[891]: Ignition 2.20.0 Feb 13 16:01:34.588910 ignition[891]: Stage: disks Feb 13 16:01:34.590605 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 16:01:34.589120 ignition[891]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:34.594295 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 16:01:34.589133 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:34.598036 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 16:01:34.589817 ignition[891]: disks: disks passed Feb 13 16:01:34.589861 ignition[891]: Ignition finished successfully Feb 13 16:01:34.616828 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:01:34.619516 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:01:34.624548 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:01:34.641717 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 16:01:34.686065 systemd-fsck[899]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 16:01:34.690987 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 16:01:34.701698 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 16:01:34.796557 kernel: EXT4-fs (sda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none. Feb 13 16:01:34.796779 systemd[1]: Mounted sysroot.mount - /sysroot. 
Feb 13 16:01:34.801588 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 16:01:34.840642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:01:34.847127 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 16:01:34.856950 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 16:01:34.860680 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (910) Feb 13 16:01:34.866565 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 16:01:34.866757 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 16:01:34.879108 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:01:34.879145 kernel: BTRFS info (device sda6): using free space tree Feb 13 16:01:34.879163 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 16:01:34.868375 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:01:34.888839 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 16:01:34.894906 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 16:01:34.903748 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 16:01:35.427264 coreos-metadata[912]: Feb 13 16:01:35.427 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 16:01:35.434762 coreos-metadata[912]: Feb 13 16:01:35.434 INFO Fetch successful Feb 13 16:01:35.437360 coreos-metadata[912]: Feb 13 16:01:35.437 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 16:01:35.445874 coreos-metadata[912]: Feb 13 16:01:35.445 INFO Fetch successful Feb 13 16:01:35.458844 coreos-metadata[912]: Feb 13 16:01:35.458 INFO wrote hostname ci-4186.1.1-a-f44757c054 to /sysroot/etc/hostname Feb 13 16:01:35.460973 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 16:01:35.478052 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 16:01:35.504567 initrd-setup-root[947]: cut: /sysroot/etc/group: No such file or directory Feb 13 16:01:35.510030 initrd-setup-root[954]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 16:01:35.514930 initrd-setup-root[961]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 16:01:35.650753 systemd-networkd[871]: eth0: Gained IPv6LL Feb 13 16:01:35.778789 systemd-networkd[871]: enP58139s1: Gained IPv6LL Feb 13 16:01:36.120768 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 16:01:36.130749 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 16:01:36.137719 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 16:01:36.147365 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 16:01:36.145869 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Feb 13 16:01:36.175388 ignition[1029]: INFO : Ignition 2.20.0 Feb 13 16:01:36.178880 ignition[1029]: INFO : Stage: mount Feb 13 16:01:36.178880 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:36.178880 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:36.178880 ignition[1029]: INFO : mount: mount passed Feb 13 16:01:36.178880 ignition[1029]: INFO : Ignition finished successfully Feb 13 16:01:36.180227 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 16:01:36.186520 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 16:01:36.201586 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 16:01:36.210394 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:01:36.235561 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1041) Feb 13 16:01:36.235622 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 16:01:36.239544 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:01:36.244017 kernel: BTRFS info (device sda6): using free space tree Feb 13 16:01:36.249547 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 16:01:36.251076 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 16:01:36.273693 ignition[1057]: INFO : Ignition 2.20.0 Feb 13 16:01:36.273693 ignition[1057]: INFO : Stage: files Feb 13 16:01:36.278000 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:36.278000 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:36.278000 ignition[1057]: DEBUG : files: compiled without relabeling support, skipping Feb 13 16:01:36.278000 ignition[1057]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 16:01:36.278000 ignition[1057]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 16:01:36.308839 ignition[1057]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 16:01:36.312435 ignition[1057]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 16:01:36.312435 ignition[1057]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 16:01:36.309360 unknown[1057]: wrote ssh authorized keys file for user: core Feb 13 16:01:36.329106 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 16:01:36.911047 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 16:01:37.229789 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:01:37.235882 ignition[1057]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:01:37.235882 ignition[1057]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:01:37.235882 ignition[1057]: INFO : files: files passed Feb 13 16:01:37.235882 ignition[1057]: INFO : Ignition finished successfully Feb 13 16:01:37.232240 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 16:01:37.257951 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 16:01:37.266112 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 16:01:37.269512 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 16:01:37.269646 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 16:01:37.293495 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:01:37.293495 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:01:37.302435 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:01:37.303284 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:01:37.310381 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 16:01:37.326718 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 16:01:37.350636 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 16:01:37.350775 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 16:01:37.370383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 16:01:37.375706 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 16:01:37.381075 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 16:01:37.382727 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 16:01:37.401504 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:01:37.410756 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 16:01:37.423467 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 16:01:37.423614 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 16:01:37.433467 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:01:37.436623 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Feb 13 16:01:37.442626 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 16:01:37.447898 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 16:01:37.447967 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:01:37.454338 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 16:01:37.462325 systemd[1]: Stopped target basic.target - Basic System. Feb 13 16:01:37.478345 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 16:01:37.485996 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:01:37.494441 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 16:01:37.497589 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 16:01:37.500307 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:01:37.501249 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 16:01:37.502194 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 16:01:37.502632 systemd[1]: Stopped target swap.target - Swaps. Feb 13 16:01:37.503069 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 16:01:37.503147 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:01:37.503995 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:01:37.504383 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:01:37.504822 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 16:01:37.521930 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:01:37.525250 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 16:01:37.525327 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Feb 13 16:01:37.531375 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 16:01:37.531421 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:01:37.537064 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 16:01:37.543847 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 16:01:37.544819 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 16:01:37.544866 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 16:01:37.597860 ignition[1112]: INFO : Ignition 2.20.0 Feb 13 16:01:37.597860 ignition[1112]: INFO : Stage: umount Feb 13 16:01:37.597860 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:37.597860 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:37.597860 ignition[1112]: INFO : umount: umount passed Feb 13 16:01:37.597860 ignition[1112]: INFO : Ignition finished successfully Feb 13 16:01:37.571024 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 16:01:37.585669 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 16:01:37.588812 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 16:01:37.588877 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:01:37.597926 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 16:01:37.597996 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:01:37.601894 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 16:01:37.601992 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 16:01:37.610812 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 16:01:37.610935 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Feb 13 16:01:37.623509 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 16:01:37.623587 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 16:01:37.628673 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 16:01:37.628728 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 16:01:37.633757 systemd[1]: Stopped target network.target - Network. Feb 13 16:01:37.636041 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 16:01:37.639108 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:01:37.642091 systemd[1]: Stopped target paths.target - Path Units. Feb 13 16:01:37.646629 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 16:01:37.654877 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:01:37.658147 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 16:01:37.660484 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 16:01:37.665387 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 16:01:37.665442 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:01:37.668186 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 16:01:37.668230 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:01:37.675699 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 16:01:37.675763 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 16:01:37.680748 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 16:01:37.680803 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 16:01:37.686301 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 16:01:37.691342 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Feb 13 16:01:37.714715 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 16:01:37.715705 systemd-networkd[871]: eth0: DHCPv6 lease lost Feb 13 16:01:37.716732 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 16:01:37.716900 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 16:01:37.722107 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 16:01:37.724691 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 16:01:37.730318 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 16:01:37.730393 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:01:37.741628 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 16:01:37.744555 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 16:01:37.744615 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:01:37.748120 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:01:37.748172 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:01:37.753701 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 16:01:37.753751 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 16:01:37.759184 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 16:01:37.759240 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:01:37.845073 kernel: hv_netvsc 7c1e5276-79f5-7c1e-5276-79f57c1e5276 eth0: Data path switched from VF: enP58139s1 Feb 13 16:01:37.766138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:01:37.772019 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 16:01:37.772115 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Feb 13 16:01:37.786760 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 16:01:37.786858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 16:01:37.808910 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 16:01:37.834201 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:01:37.845783 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 16:01:37.845884 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 16:01:37.851515 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 16:01:37.853937 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:01:37.856789 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 16:01:37.856841 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:01:37.887637 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 16:01:37.887731 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 16:01:37.891345 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:01:37.891406 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:01:37.913731 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 16:01:37.916545 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 16:01:37.916619 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:01:37.926165 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 16:01:37.926232 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:01:37.929545 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Feb 13 16:01:37.929597 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:01:37.936084 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:01:37.936134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:01:37.943690 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 16:01:37.943794 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 16:01:37.958347 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 16:01:37.958446 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 16:01:37.965075 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 16:01:37.980765 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 16:01:37.990860 systemd[1]: Switching root. Feb 13 16:01:38.055792 systemd-journald[177]: Journal stopped 
Feb 13 16:01:29.107773 kernel: secureboot: Secure boot disabled Feb 13 16:01:29.107787 kernel: SMBIOS 3.1.0 present. 
Feb 13 16:01:29.107800 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Feb 13 16:01:29.107818 kernel: Hypervisor detected: Microsoft Hyper-V Feb 13 16:01:29.107831 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 13 16:01:29.107845 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Feb 13 16:01:29.107857 kernel: Hyper-V: Nested features: 0x1e0101 Feb 13 16:01:29.107875 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 13 16:01:29.107889 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 13 16:01:29.107904 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 16:01:29.107919 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 16:01:29.107934 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 13 16:01:29.107951 kernel: tsc: Detected 2593.906 MHz processor Feb 13 16:01:29.107966 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 16:01:29.107981 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 16:01:29.107997 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 13 16:01:29.108016 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 16:01:29.108030 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 16:01:29.108046 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 13 16:01:29.108060 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 13 16:01:29.108075 kernel: Using GB pages for direct mapping Feb 13 16:01:29.108091 kernel: ACPI: Early table checksum verification disabled Feb 13 16:01:29.108106 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 13 16:01:29.108129 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 
00000001) Feb 13 16:01:29.108148 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 16:01:29.108162 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 13 16:01:29.108174 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 13 16:01:29.108340 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 16:01:29.108350 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 16:01:29.108360 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 16:01:29.108373 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 16:01:29.108384 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 16:01:29.108392 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 16:01:29.108400 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 16:01:29.108411 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 13 16:01:29.108418 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 13 16:01:29.108427 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 13 16:01:29.108437 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 13 16:01:29.108444 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 13 16:01:29.108458 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 13 16:01:29.108466 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 13 16:01:29.108474 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 13 16:01:29.108484 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 13 16:01:29.108492 kernel: ACPI: 
Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 13 16:01:29.108501 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 16:01:29.108520 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 16:01:29.108532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 13 16:01:29.108540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 13 16:01:29.108553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 13 16:01:29.108561 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 13 16:01:29.108569 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 13 16:01:29.108580 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 13 16:01:29.108588 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 13 16:01:29.108596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 13 16:01:29.108606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 13 16:01:29.108614 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 13 16:01:29.108627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 13 16:01:29.108636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 13 16:01:29.108643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 13 16:01:29.108654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 13 16:01:29.108662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 13 16:01:29.108670 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 13 16:01:29.108682 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 13 16:01:29.108692 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 13 
16:01:29.108701 kernel: Zone ranges: Feb 13 16:01:29.108713 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 16:01:29.108724 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 16:01:29.108733 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 16:01:29.108743 kernel: Movable zone start for each node Feb 13 16:01:29.108753 kernel: Early memory node ranges Feb 13 16:01:29.108762 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 13 16:01:29.108771 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 13 16:01:29.108779 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 13 16:01:29.108790 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 16:01:29.108801 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 13 16:01:29.108810 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 16:01:29.108819 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 16:01:29.108827 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 13 16:01:29.108839 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 13 16:01:29.108847 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 13 16:01:29.108855 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 13 16:01:29.108866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 16:01:29.108873 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 16:01:29.108887 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 13 16:01:29.108895 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 16:01:29.108903 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 13 16:01:29.108914 kernel: Booting paravirtualized kernel on Hyper-V Feb 13 16:01:29.108922 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 16:01:29.108931 kernel: setup_percpu: NR_CPUS:512 
nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 16:01:29.108941 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 16:01:29.108948 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 16:01:29.108957 kernel: pcpu-alloc: [0] 0 1 Feb 13 16:01:29.108969 kernel: Hyper-V: PV spinlocks enabled Feb 13 16:01:29.108977 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 16:01:29.108990 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 16:01:29.108998 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 16:01:29.109008 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 16:01:29.109016 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 16:01:29.109024 kernel: Fallback order for Node 0: 0 Feb 13 16:01:29.109035 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 13 16:01:29.109046 kernel: Policy zone: Normal Feb 13 16:01:29.109066 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 16:01:29.109074 kernel: software IO TLB: area num 2. 
Feb 13 16:01:29.109089 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 312164K reserved, 0K cma-reserved) Feb 13 16:01:29.109097 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 16:01:29.109107 kernel: ftrace: allocating 37890 entries in 149 pages Feb 13 16:01:29.109116 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 16:01:29.109124 kernel: Dynamic Preempt: voluntary Feb 13 16:01:29.109136 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 16:01:29.109145 kernel: rcu: RCU event tracing is enabled. Feb 13 16:01:29.109156 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 16:01:29.109170 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 16:01:29.109181 kernel: Rude variant of Tasks RCU enabled. Feb 13 16:01:29.109190 kernel: Tracing variant of Tasks RCU enabled. Feb 13 16:01:29.109200 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 16:01:29.109211 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 16:01:29.109219 kernel: Using NULL legacy PIC Feb 13 16:01:29.109232 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 13 16:01:29.109241 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 16:01:29.109249 kernel: Console: colour dummy device 80x25 Feb 13 16:01:29.109260 kernel: printk: console [tty1] enabled Feb 13 16:01:29.109269 kernel: printk: console [ttyS0] enabled Feb 13 16:01:29.109278 kernel: printk: bootconsole [earlyser0] disabled Feb 13 16:01:29.109288 kernel: ACPI: Core revision 20230628 Feb 13 16:01:29.109296 kernel: Failed to register legacy timer interrupt Feb 13 16:01:29.109308 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 16:01:29.109319 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 16:01:29.109329 kernel: Hyper-V: Using IPI hypercalls Feb 13 16:01:29.109338 kernel: APIC: send_IPI() replaced with hv_send_ipi() Feb 13 16:01:29.109346 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Feb 13 16:01:29.109358 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Feb 13 16:01:29.109366 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Feb 13 16:01:29.109375 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Feb 13 16:01:29.109385 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Feb 13 16:01:29.109393 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Feb 13 16:01:29.109408 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 16:01:29.109416 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 16:01:29.109425 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 16:01:29.109435 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 16:01:29.109443 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 16:01:29.109453 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 16:01:29.109462 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Feb 13 16:01:29.109470 kernel: RETBleed: Vulnerable Feb 13 16:01:29.109481 kernel: Speculative Store Bypass: Vulnerable Feb 13 16:01:29.109489 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 16:01:29.109501 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 16:01:29.109517 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 16:01:29.109528 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 16:01:29.109537 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 16:01:29.109545 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 16:01:29.109556 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 16:01:29.109564 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 16:01:29.109572 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 16:01:29.109583 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 13 16:01:29.109592 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 13 16:01:29.109602 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 13 16:01:29.109616 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 13 16:01:29.109626 kernel: Freeing SMP alternatives memory: 32K Feb 13 16:01:29.109635 kernel: pid_max: default: 32768 minimum: 301 Feb 13 16:01:29.109645 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 16:01:29.109656 kernel: landlock: Up and running. Feb 13 16:01:29.109664 kernel: SELinux: Initializing. 
Feb 13 16:01:29.109673 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 16:01:29.109683 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 16:01:29.109691 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 16:01:29.109701 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:01:29.109711 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:01:29.109723 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:01:29.109733 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 16:01:29.109741 kernel: signal: max sigframe size: 3632 Feb 13 16:01:29.109753 kernel: rcu: Hierarchical SRCU implementation. Feb 13 16:01:29.109762 kernel: rcu: Max phase no-delay instances is 400. Feb 13 16:01:29.109770 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 16:01:29.109781 kernel: smp: Bringing up secondary CPUs ... Feb 13 16:01:29.109789 kernel: smpboot: x86: Booting SMP configuration: Feb 13 16:01:29.109798 kernel: .... node #0, CPUs: #1 Feb 13 16:01:29.109813 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 13 16:01:29.109823 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 13 16:01:29.109833 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 16:01:29.109841 kernel: smpboot: Max logical packages: 1 Feb 13 16:01:29.109852 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 13 16:01:29.109860 kernel: devtmpfs: initialized Feb 13 16:01:29.109869 kernel: x86/mm: Memory block size: 128MB Feb 13 16:01:29.109880 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 13 16:01:29.109891 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 16:01:29.109902 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 16:01:29.109910 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 16:01:29.109919 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 16:01:29.109929 kernel: audit: initializing netlink subsys (disabled) Feb 13 16:01:29.109937 kernel: audit: type=2000 audit(1739462488.028:1): state=initialized audit_enabled=0 res=1 Feb 13 16:01:29.109948 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 16:01:29.109956 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 16:01:29.109965 kernel: cpuidle: using governor menu Feb 13 16:01:29.109979 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 16:01:29.109987 kernel: dca service started, version 1.12.1 Feb 13 16:01:29.109997 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Feb 13 16:01:29.110007 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 16:01:29.110016 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 16:01:29.110027 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 16:01:29.110037 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 16:01:29.110047 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 16:01:29.110060 kernel: ACPI: Added _OSI(Module Device) Feb 13 16:01:29.110074 kernel: ACPI: Added _OSI(Processor Device) Feb 13 16:01:29.110083 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 16:01:29.110093 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 16:01:29.110104 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 16:01:29.110112 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 16:01:29.110122 kernel: ACPI: Interpreter enabled Feb 13 16:01:29.110131 kernel: ACPI: PM: (supports S0 S5) Feb 13 16:01:29.110139 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 16:01:29.110151 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 16:01:29.110163 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 16:01:29.110175 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 13 16:01:29.110183 kernel: iommu: Default domain type: Translated Feb 13 16:01:29.110192 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 16:01:29.110202 kernel: efivars: Registered efivars operations Feb 13 16:01:29.110210 kernel: PCI: Using ACPI for IRQ routing Feb 13 16:01:29.110220 kernel: PCI: System does not support PCI Feb 13 16:01:29.110230 kernel: vgaarb: loaded Feb 13 16:01:29.110238 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 13 16:01:29.110252 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 16:01:29.110260 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 16:01:29.110269 kernel: pnp: PnP ACPI init Feb 13 16:01:29.110279 
kernel: pnp: PnP ACPI: found 3 devices Feb 13 16:01:29.110287 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 16:01:29.110299 kernel: NET: Registered PF_INET protocol family Feb 13 16:01:29.110307 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 16:01:29.110316 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 16:01:29.110327 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 16:01:29.110339 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 16:01:29.110349 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 16:01:29.110357 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 16:01:29.110367 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 16:01:29.110377 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 16:01:29.110385 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 16:01:29.110397 kernel: NET: Registered PF_XDP protocol family Feb 13 16:01:29.110405 kernel: PCI: CLS 0 bytes, default 64 Feb 13 16:01:29.110414 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 16:01:29.110427 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Feb 13 16:01:29.110436 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 16:01:29.110447 kernel: Initialise system trusted keyrings Feb 13 16:01:29.110455 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 16:01:29.110464 kernel: Key type asymmetric registered Feb 13 16:01:29.110474 kernel: Asymmetric key parser 'x509' registered Feb 13 16:01:29.110486 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 16:01:29.110494 kernel: io scheduler mq-deadline 
registered Feb 13 16:01:29.110505 kernel: io scheduler kyber registered Feb 13 16:01:29.110531 kernel: io scheduler bfq registered Feb 13 16:01:29.110541 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 16:01:29.110551 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 16:01:29.110559 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 16:01:29.110571 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 16:01:29.110579 kernel: i8042: PNP: No PS/2 controller found. Feb 13 16:01:29.110740 kernel: rtc_cmos 00:02: registered as rtc0 Feb 13 16:01:29.113605 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T16:01:28 UTC (1739462488) Feb 13 16:01:29.113710 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 13 16:01:29.113723 kernel: intel_pstate: CPU model not supported Feb 13 16:01:29.113732 kernel: efifb: probing for efifb Feb 13 16:01:29.113742 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 16:01:29.113753 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 16:01:29.113761 kernel: efifb: scrolling: redraw Feb 13 16:01:29.113770 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 16:01:29.113781 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 16:01:29.113789 kernel: fb0: EFI VGA frame buffer device Feb 13 16:01:29.113804 kernel: pstore: Using crash dump compression: deflate Feb 13 16:01:29.113813 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 16:01:29.113823 kernel: NET: Registered PF_INET6 protocol family Feb 13 16:01:29.113833 kernel: Segment Routing with IPv6 Feb 13 16:01:29.113841 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 16:01:29.113852 kernel: NET: Registered PF_PACKET protocol family Feb 13 16:01:29.113860 kernel: Key type dns_resolver registered Feb 13 16:01:29.113871 kernel: IPI shorthand broadcast: enabled Feb 13 16:01:29.113880 kernel: 
sched_clock: Marking stable (902003400, 47239900)->(1193787100, -244543800) Feb 13 16:01:29.113895 kernel: registered taskstats version 1 Feb 13 16:01:29.113904 kernel: Loading compiled-in X.509 certificates Feb 13 16:01:29.113913 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb' Feb 13 16:01:29.113924 kernel: Key type .fscrypt registered Feb 13 16:01:29.113932 kernel: Key type fscrypt-provisioning registered Feb 13 16:01:29.113941 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 16:01:29.113952 kernel: ima: Allocated hash algorithm: sha1 Feb 13 16:01:29.113960 kernel: ima: No architecture policies found Feb 13 16:01:29.113974 kernel: clk: Disabling unused clocks Feb 13 16:01:29.113982 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 16:01:29.113993 kernel: Write protecting the kernel read-only data: 38912k Feb 13 16:01:29.114002 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 16:01:29.114014 kernel: Run /init as init process Feb 13 16:01:29.114022 kernel: with arguments: Feb 13 16:01:29.114031 kernel: /init Feb 13 16:01:29.114041 kernel: with environment: Feb 13 16:01:29.114049 kernel: HOME=/ Feb 13 16:01:29.114061 kernel: TERM=linux Feb 13 16:01:29.114073 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 16:01:29.114086 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:01:29.114097 systemd[1]: Detected virtualization microsoft. Feb 13 16:01:29.114108 systemd[1]: Detected architecture x86-64. Feb 13 16:01:29.114117 systemd[1]: Running in initrd. Feb 13 16:01:29.114125 systemd[1]: No hostname configured, using default hostname. 
Feb 13 16:01:29.114137 systemd[1]: Hostname set to . Feb 13 16:01:29.114149 systemd[1]: Initializing machine ID from random generator. Feb 13 16:01:29.114161 systemd[1]: Queued start job for default target initrd.target. Feb 13 16:01:29.114172 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:01:29.114182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:01:29.114195 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 16:01:29.114210 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:01:29.114233 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 16:01:29.114252 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 16:01:29.114282 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 16:01:29.114300 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 16:01:29.114322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:01:29.114339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:01:29.114355 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:01:29.114371 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:01:29.114388 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:01:29.114412 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:01:29.114430 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:01:29.114448 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Feb 13 16:01:29.114469 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 16:01:29.114488 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 16:01:29.114505 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:01:29.114560 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:01:29.114576 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:01:29.114592 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:01:29.114616 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 16:01:29.114633 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:01:29.114650 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 16:01:29.114667 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 16:01:29.114684 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:01:29.114706 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:01:29.114756 systemd-journald[177]: Collecting audit messages is disabled. Feb 13 16:01:29.114800 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:01:29.114818 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 16:01:29.114837 systemd-journald[177]: Journal started Feb 13 16:01:29.114875 systemd-journald[177]: Runtime Journal (/run/log/journal/e65247a9225c466b931ba76540bded66) is 8.0M, max 158.8M, 150.8M free. Feb 13 16:01:29.128390 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:01:29.129854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:01:29.137530 systemd[1]: Finished systemd-fsck-usr.service. 
Feb 13 16:01:29.140063 systemd-modules-load[178]: Inserted module 'overlay' Feb 13 16:01:29.142148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:01:29.159681 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:01:29.167636 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:01:29.182706 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:01:29.193247 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:01:29.198660 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:01:29.200643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 16:01:29.221388 kernel: Bridge firewalling registered Feb 13 16:01:29.219894 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:01:29.227480 systemd-modules-load[178]: Inserted module 'br_netfilter' Feb 13 16:01:29.230565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:01:29.234945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:01:29.243887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:01:29.257045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:01:29.265015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:01:29.275711 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 16:01:29.282680 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 16:01:29.302541 dracut-cmdline[210]: dracut-dracut-053 Feb 13 16:01:29.307251 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 16:01:29.343928 systemd-resolved[212]: Positive Trust Anchors: Feb 13 16:01:29.343945 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:01:29.344002 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:01:29.354721 systemd-resolved[212]: Defaulting to hostname 'linux'. Feb 13 16:01:29.381268 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:01:29.384470 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:01:29.399534 kernel: SCSI subsystem initialized Feb 13 16:01:29.411531 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 16:01:29.422536 kernel: iscsi: registered transport (tcp) Feb 13 16:01:29.444272 kernel: iscsi: registered transport (qla4xxx) Feb 13 16:01:29.444366 kernel: QLogic iSCSI HBA Driver Feb 13 16:01:29.480877 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 16:01:29.487877 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 16:01:29.519784 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 16:01:29.519915 kernel: device-mapper: uevent: version 1.0.3 Feb 13 16:01:29.523369 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 16:01:29.564541 kernel: raid6: avx512x4 gen() 25582 MB/s Feb 13 16:01:29.584528 kernel: raid6: avx512x2 gen() 25403 MB/s Feb 13 16:01:29.604527 kernel: raid6: avx512x1 gen() 25684 MB/s Feb 13 16:01:29.623535 kernel: raid6: avx2x4 gen() 21101 MB/s Feb 13 16:01:29.643525 kernel: raid6: avx2x2 gen() 22940 MB/s Feb 13 16:01:29.664205 kernel: raid6: avx2x1 gen() 20686 MB/s Feb 13 16:01:29.664254 kernel: raid6: using algorithm avx512x1 gen() 25684 MB/s Feb 13 16:01:29.686145 kernel: raid6: .... xor() 26252 MB/s, rmw enabled Feb 13 16:01:29.686186 kernel: raid6: using avx512x2 recovery algorithm Feb 13 16:01:29.709542 kernel: xor: automatically using best checksumming function avx Feb 13 16:01:29.853567 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 16:01:29.863275 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:01:29.873694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:01:29.902283 systemd-udevd[395]: Using default interface naming scheme 'v255'. Feb 13 16:01:29.906751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:01:29.922686 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Feb 13 16:01:29.937349 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Feb 13 16:01:29.967852 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:01:29.976653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:01:30.019436 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:01:30.038724 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 16:01:30.071001 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 16:01:30.075162 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:01:30.084102 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:01:30.089944 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:01:30.104741 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 16:01:30.124549 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 16:01:30.135767 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:01:30.150032 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 16:01:30.150112 kernel: AES CTR mode by8 optimization enabled Feb 13 16:01:30.151537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:01:30.154756 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:01:30.164985 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:01:30.171477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:01:30.171662 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:01:30.178312 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 16:01:30.196570 kernel: hv_vmbus: Vmbus version:5.2 Feb 13 16:01:30.192215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:01:30.205782 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:01:30.214668 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 16:01:30.205900 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:01:30.230639 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 16:01:30.224967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:01:30.254736 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 16:01:30.254799 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 16:01:30.256688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:01:30.273807 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 16:01:30.277560 kernel: scsi host1: storvsc_host_t Feb 13 16:01:30.280308 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 16:01:30.287664 kernel: scsi host0: storvsc_host_t Feb 13 16:01:30.287923 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 16:01:30.297284 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 16:01:30.297385 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 16:01:30.304997 kernel: PTP clock support registered Feb 13 16:01:30.305059 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 16:01:30.311707 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 16:01:30.311767 kernel: hv_vmbus: registering driver hv_utils Feb 13 16:01:30.327562 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 16:01:30.328343 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:01:30.339537 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 16:01:30.339589 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 16:01:30.339605 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 16:01:31.401284 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 16:01:31.397696 systemd-resolved[212]: Clock change detected. Flushing caches. 
Feb 13 16:01:31.410555 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 16:01:31.426838 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 16:01:31.428629 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 16:01:31.428656 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 16:01:31.442506 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 16:01:31.458883 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 16:01:31.459102 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 16:01:31.459285 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 16:01:31.459449 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 16:01:31.459722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 16:01:31.459746 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 16:01:31.528551 kernel: hv_netvsc 7c1e5276-79f5-7c1e-5276-79f57c1e5276 eth0: VF slot 1 added Feb 13 16:01:31.536556 kernel: hv_vmbus: registering driver hv_pci Feb 13 16:01:31.542215 kernel: hv_pci f9a8341e-e31b-40ad-b08c-b9115ea4ad96: PCI VMBus probing: Using version 0x10004 Feb 13 16:01:31.590774 kernel: hv_pci f9a8341e-e31b-40ad-b08c-b9115ea4ad96: PCI host bridge to bus e31b:00 Feb 13 16:01:31.591367 kernel: pci_bus e31b:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 13 16:01:31.591583 kernel: pci_bus e31b:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 16:01:31.591749 kernel: pci e31b:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 13 16:01:31.591948 kernel: pci e31b:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 16:01:31.592136 kernel: pci e31b:00:02.0: enabling Extended Tags Feb 13 16:01:31.592309 kernel: pci e31b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e31b:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 13 16:01:31.592476 kernel: pci_bus e31b:00: 
busn_res: [bus 00-ff] end is updated to 00 Feb 13 16:01:31.593136 kernel: pci e31b:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 16:01:31.755500 kernel: mlx5_core e31b:00:02.0: enabling device (0000 -> 0002) Feb 13 16:01:32.030185 kernel: mlx5_core e31b:00:02.0: firmware version: 14.30.5000 Feb 13 16:01:32.030439 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (447) Feb 13 16:01:32.030462 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (444) Feb 13 16:01:32.030482 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 16:01:32.030503 kernel: hv_netvsc 7c1e5276-79f5-7c1e-5276-79f57c1e5276 eth0: VF registering: eth1 Feb 13 16:01:32.030719 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 16:01:32.030741 kernel: mlx5_core e31b:00:02.0 eth1: joined to eth0 Feb 13 16:01:32.030932 kernel: mlx5_core e31b:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 16:01:31.881358 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 16:01:31.910802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 16:01:31.933100 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 16:01:31.979376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 16:01:32.048858 kernel: mlx5_core e31b:00:02.0 enP58139s1: renamed from eth1 Feb 13 16:01:31.982899 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 16:01:31.990692 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 16:01:33.014289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 16:01:33.017596 disk-uuid[600]: The operation has completed successfully. 
Feb 13 16:01:33.088348 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 16:01:33.088472 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 16:01:33.120705 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 16:01:33.127817 sh[687]: Success Feb 13 16:01:33.154722 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 16:01:33.311916 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 16:01:33.324663 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 16:01:33.329714 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 16:01:33.344551 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a Feb 13 16:01:33.344607 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:01:33.350432 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 16:01:33.353268 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 16:01:33.355810 kernel: BTRFS info (device dm-0): using free space tree Feb 13 16:01:33.551862 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 16:01:33.557359 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 16:01:33.566693 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 16:01:33.575689 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 16:01:33.604308 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 16:01:33.604392 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:01:33.604413 kernel: BTRFS info (device sda6): using free space tree Feb 13 16:01:33.619560 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 16:01:33.635455 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 16:01:33.634933 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 16:01:33.645316 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 16:01:33.657788 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 16:01:33.667245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:01:33.672691 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:01:33.706883 systemd-networkd[871]: lo: Link UP Feb 13 16:01:33.706893 systemd-networkd[871]: lo: Gained carrier Feb 13 16:01:33.709108 systemd-networkd[871]: Enumeration completed Feb 13 16:01:33.709210 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:01:33.726948 systemd[1]: Reached target network.target - Network. Feb 13 16:01:33.727187 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:01:33.727191 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 16:01:33.795552 kernel: mlx5_core e31b:00:02.0 enP58139s1: Link up Feb 13 16:01:33.833625 kernel: hv_netvsc 7c1e5276-79f5-7c1e-5276-79f57c1e5276 eth0: Data path switched to VF: enP58139s1 Feb 13 16:01:33.834253 systemd-networkd[871]: enP58139s1: Link UP Feb 13 16:01:33.834401 systemd-networkd[871]: eth0: Link UP Feb 13 16:01:33.834594 systemd-networkd[871]: eth0: Gained carrier Feb 13 16:01:33.834611 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:01:33.846740 systemd-networkd[871]: enP58139s1: Gained carrier Feb 13 16:01:33.871585 systemd-networkd[871]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 16:01:34.387804 ignition[862]: Ignition 2.20.0 Feb 13 16:01:34.387818 ignition[862]: Stage: fetch-offline Feb 13 16:01:34.390842 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:01:34.387870 ignition[862]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:34.387882 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:34.388008 ignition[862]: parsed url from cmdline: "" Feb 13 16:01:34.388013 ignition[862]: no config URL provided Feb 13 16:01:34.388019 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 16:01:34.388031 ignition[862]: no config at "/usr/lib/ignition/user.ign" Feb 13 16:01:34.388039 ignition[862]: failed to fetch config: resource requires networking Feb 13 16:01:34.388423 ignition[862]: Ignition finished successfully Feb 13 16:01:34.424756 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 16:01:34.439077 ignition[879]: Ignition 2.20.0 Feb 13 16:01:34.439089 ignition[879]: Stage: fetch Feb 13 16:01:34.439301 ignition[879]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:34.439316 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:34.439421 ignition[879]: parsed url from cmdline: "" Feb 13 16:01:34.439425 ignition[879]: no config URL provided Feb 13 16:01:34.439429 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 16:01:34.439436 ignition[879]: no config at "/usr/lib/ignition/user.ign" Feb 13 16:01:34.439460 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 16:01:34.521348 ignition[879]: GET result: OK Feb 13 16:01:34.521646 ignition[879]: config has been read from IMDS userdata Feb 13 16:01:34.521667 ignition[879]: parsing config with SHA512: 51636d9f85ec32e8dc3babcc5492919b7fe3e37ffccab6db85ba97d7415018a53ac8987f63a17fe4b929d786242d432e3e792b46c3ee8f137c813b7aa65e90f4 Feb 13 16:01:34.525879 unknown[879]: fetched base config from "system" Feb 13 16:01:34.526443 ignition[879]: fetch: fetch complete Feb 13 16:01:34.525891 unknown[879]: fetched base config from "system" Feb 13 16:01:34.526451 ignition[879]: fetch: fetch passed Feb 13 16:01:34.525898 unknown[879]: fetched user config from "azure" Feb 13 16:01:34.526512 ignition[879]: Ignition finished successfully Feb 13 16:01:34.538467 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 16:01:34.545731 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 16:01:34.559842 ignition[885]: Ignition 2.20.0 Feb 13 16:01:34.559854 ignition[885]: Stage: kargs Feb 13 16:01:34.561907 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Feb 13 16:01:34.560065 ignition[885]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:34.560079 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:34.560749 ignition[885]: kargs: kargs passed Feb 13 16:01:34.560791 ignition[885]: Ignition finished successfully Feb 13 16:01:34.574739 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 16:01:34.588900 ignition[891]: Ignition 2.20.0 Feb 13 16:01:34.588910 ignition[891]: Stage: disks Feb 13 16:01:34.590605 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 16:01:34.589120 ignition[891]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:01:34.594295 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 16:01:34.589133 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 16:01:34.598036 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 16:01:34.589817 ignition[891]: disks: disks passed Feb 13 16:01:34.589861 ignition[891]: Ignition finished successfully Feb 13 16:01:34.616828 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:01:34.619516 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:01:34.624548 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:01:34.641717 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 16:01:34.686065 systemd-fsck[899]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 16:01:34.690987 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 16:01:34.701698 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 16:01:34.796557 kernel: EXT4-fs (sda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none. Feb 13 16:01:34.796779 systemd[1]: Mounted sysroot.mount - /sysroot. 
Feb 13 16:01:34.801588 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 16:01:34.840642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:01:34.847127 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 16:01:34.856950 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 16:01:34.860680 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (910) Feb 13 16:01:34.866565 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 16:01:34.866757 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 16:01:34.879108 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:01:34.879145 kernel: BTRFS info (device sda6): using free space tree Feb 13 16:01:34.879163 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 16:01:34.868375 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:01:34.888839 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 16:01:34.894906 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 16:01:34.903748 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 16:01:35.427264 coreos-metadata[912]: Feb 13 16:01:35.427 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 16:01:35.434762 coreos-metadata[912]: Feb 13 16:01:35.434 INFO Fetch successful Feb 13 16:01:35.437360 coreos-metadata[912]: Feb 13 16:01:35.437 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 16:01:35.445874 coreos-metadata[912]: Feb 13 16:01:35.445 INFO Fetch successful Feb 13 16:01:35.458844 coreos-metadata[912]: Feb 13 16:01:35.458 INFO wrote hostname ci-4186.1.1-a-f44757c054 to /sysroot/etc/hostname Feb 13 16:01:35.460973 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 16:01:35.478052 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 16:01:35.504567 initrd-setup-root[947]: cut: /sysroot/etc/group: No such file or directory Feb 13 16:01:35.510030 initrd-setup-root[954]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 16:01:35.514930 initrd-setup-root[961]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 16:01:35.650753 systemd-networkd[871]: eth0: Gained IPv6LL Feb 13 16:01:35.778789 systemd-networkd[871]: enP58139s1: Gained IPv6LL Feb 13 16:01:36.120768 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 16:01:36.130749 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 16:01:36.137719 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 16:01:36.147365 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 16:01:36.145869 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Feb 13 16:01:36.175388 ignition[1029]: INFO : Ignition 2.20.0
Feb 13 16:01:36.178880 ignition[1029]: INFO : Stage: mount
Feb 13 16:01:36.178880 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:01:36.178880 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 16:01:36.178880 ignition[1029]: INFO : mount: mount passed
Feb 13 16:01:36.178880 ignition[1029]: INFO : Ignition finished successfully
Feb 13 16:01:36.180227 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 16:01:36.186520 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 16:01:36.201586 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 16:01:36.210394 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:01:36.235561 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1041)
Feb 13 16:01:36.235622 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 16:01:36.239544 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 16:01:36.244017 kernel: BTRFS info (device sda6): using free space tree
Feb 13 16:01:36.249547 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 16:01:36.251076 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:01:36.273693 ignition[1057]: INFO : Ignition 2.20.0
Feb 13 16:01:36.273693 ignition[1057]: INFO : Stage: files
Feb 13 16:01:36.278000 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:01:36.278000 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 16:01:36.278000 ignition[1057]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 16:01:36.278000 ignition[1057]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 16:01:36.278000 ignition[1057]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 16:01:36.308839 ignition[1057]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 16:01:36.312435 ignition[1057]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 16:01:36.312435 ignition[1057]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 16:01:36.309360 unknown[1057]: wrote ssh authorized keys file for user: core
Feb 13 16:01:36.329106 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 16:01:36.338458 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 16:01:36.911047 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 16:01:37.229789 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 16:01:37.235882 ignition[1057]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:01:37.235882 ignition[1057]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:01:37.235882 ignition[1057]: INFO : files: files passed
Feb 13 16:01:37.235882 ignition[1057]: INFO : Ignition finished successfully
Feb 13 16:01:37.232240 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 16:01:37.257951 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 16:01:37.266112 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 16:01:37.269512 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 16:01:37.269646 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 16:01:37.293495 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:01:37.293495 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:01:37.302435 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:01:37.303284 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 16:01:37.310381 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 16:01:37.326718 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 16:01:37.350636 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 16:01:37.350775 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 16:01:37.370383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 16:01:37.375706 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 16:01:37.381075 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 16:01:37.382727 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 16:01:37.401504 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 16:01:37.410756 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 16:01:37.423467 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 16:01:37.423614 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 16:01:37.433467 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 16:01:37.436623 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:01:37.442626 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 16:01:37.447898 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 16:01:37.447967 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 16:01:37.454338 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 16:01:37.462325 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 16:01:37.478345 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 16:01:37.485996 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 16:01:37.494441 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 16:01:37.497589 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 16:01:37.500307 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 16:01:37.501249 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 16:01:37.502194 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 16:01:37.502632 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 16:01:37.503069 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 16:01:37.503147 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 16:01:37.503995 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:01:37.504383 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:01:37.504822 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 16:01:37.521930 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:01:37.525250 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 16:01:37.525327 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 16:01:37.531375 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 16:01:37.531421 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 16:01:37.537064 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 16:01:37.543847 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 16:01:37.544819 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 16:01:37.544866 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 16:01:37.597860 ignition[1112]: INFO : Ignition 2.20.0
Feb 13 16:01:37.597860 ignition[1112]: INFO : Stage: umount
Feb 13 16:01:37.597860 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:01:37.597860 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 16:01:37.597860 ignition[1112]: INFO : umount: umount passed
Feb 13 16:01:37.597860 ignition[1112]: INFO : Ignition finished successfully
Feb 13 16:01:37.571024 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 16:01:37.585669 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 16:01:37.588812 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 16:01:37.588877 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:01:37.597926 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 16:01:37.597996 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 16:01:37.601894 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 16:01:37.601992 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 16:01:37.610812 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 16:01:37.610935 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 16:01:37.623509 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 16:01:37.623587 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 16:01:37.628673 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 16:01:37.628728 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 16:01:37.633757 systemd[1]: Stopped target network.target - Network.
Feb 13 16:01:37.636041 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 16:01:37.639108 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 16:01:37.642091 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 16:01:37.646629 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 16:01:37.654877 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:01:37.658147 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 16:01:37.660484 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 16:01:37.665387 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 16:01:37.665442 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:01:37.668186 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 16:01:37.668230 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:01:37.675699 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 16:01:37.675763 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 16:01:37.680748 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 16:01:37.680803 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 16:01:37.686301 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 16:01:37.691342 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 16:01:37.714715 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 16:01:37.715705 systemd-networkd[871]: eth0: DHCPv6 lease lost
Feb 13 16:01:37.716732 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 16:01:37.716900 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 16:01:37.722107 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 16:01:37.724691 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 16:01:37.730318 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 16:01:37.730393 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:01:37.741628 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 16:01:37.744555 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 16:01:37.744615 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 16:01:37.748120 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 16:01:37.748172 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:01:37.753701 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 16:01:37.753751 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:01:37.759184 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 16:01:37.759240 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:01:37.845073 kernel: hv_netvsc 7c1e5276-79f5-7c1e-5276-79f57c1e5276 eth0: Data path switched from VF: enP58139s1
Feb 13 16:01:37.766138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:01:37.772019 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 16:01:37.772115 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 16:01:37.786760 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 16:01:37.786858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 16:01:37.808910 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 16:01:37.834201 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 16:01:37.845783 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 16:01:37.845884 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:01:37.851515 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 16:01:37.853937 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:01:37.856789 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 16:01:37.856841 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 16:01:37.887637 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 16:01:37.887731 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 16:01:37.891345 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 16:01:37.891406 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:01:37.913731 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 16:01:37.916545 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 16:01:37.916619 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:01:37.926165 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 16:01:37.926232 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:01:37.929545 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 16:01:37.929597 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:01:37.936084 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 16:01:37.936134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:01:37.943690 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 16:01:37.943794 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 16:01:37.958347 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 16:01:37.958446 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 16:01:37.965075 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 16:01:37.980765 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 16:01:37.990860 systemd[1]: Switching root.
Feb 13 16:01:38.055792 systemd-journald[177]: Journal stopped
Feb 13 16:01:41.862211 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Feb 13 16:01:41.862267 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 16:01:41.862290 kernel: SELinux: policy capability open_perms=1
Feb 13 16:01:41.862308 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 16:01:41.862325 kernel: SELinux: policy capability always_check_network=0
Feb 13 16:01:41.862343 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 16:01:41.862363 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 16:01:41.862391 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 16:01:41.862406 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 16:01:41.862422 kernel: audit: type=1403 audit(1739462499.301:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 16:01:41.862437 systemd[1]: Successfully loaded SELinux policy in 87.107ms.
Feb 13 16:01:41.862451 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.263ms.
Feb 13 16:01:41.862467 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:01:41.862483 systemd[1]: Detected virtualization microsoft.
Feb 13 16:01:41.862502 systemd[1]: Detected architecture x86-64.
Feb 13 16:01:41.862517 systemd[1]: Detected first boot.
Feb 13 16:01:41.862547 systemd[1]: Hostname set to .
Feb 13 16:01:41.862563 systemd[1]: Initializing machine ID from random generator.
Feb 13 16:01:41.862577 zram_generator::config[1154]: No configuration found.
Feb 13 16:01:41.862597 systemd[1]: Populated /etc with preset unit settings.
Feb 13 16:01:41.862613 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 16:01:41.862628 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 16:01:41.862645 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 16:01:41.862660 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 16:01:41.866079 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 16:01:41.866103 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 16:01:41.866121 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 16:01:41.866134 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 16:01:41.866146 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 16:01:41.866157 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 16:01:41.866172 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 16:01:41.866183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:01:41.866195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:01:41.866206 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 16:01:41.866221 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 16:01:41.866232 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 16:01:41.866242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:01:41.866252 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 16:01:41.866262 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:01:41.866275 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 16:01:41.866289 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 16:01:41.866301 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 16:01:41.866315 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 16:01:41.866328 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:01:41.866338 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 16:01:41.866351 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:01:41.866361 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:01:41.866374 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 16:01:41.866384 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 16:01:41.866397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:01:41.866410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:01:41.866427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:01:41.866439 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 16:01:41.866451 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 16:01:41.866466 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 16:01:41.866480 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 16:01:41.866492 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 16:01:41.866505 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 16:01:41.866516 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 16:01:41.866577 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 16:01:41.866590 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 16:01:41.866604 systemd[1]: Reached target machines.target - Containers.
Feb 13 16:01:41.866618 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 16:01:41.866631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 16:01:41.866642 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 16:01:41.866655 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 16:01:41.866665 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 16:01:41.866679 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 16:01:41.866689 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 16:01:41.866702 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 16:01:41.866713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 16:01:41.866729 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 16:01:41.866740 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 16:01:41.866753 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 16:01:41.866764 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 16:01:41.866777 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 16:01:41.866787 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 16:01:41.866800 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 16:01:41.866811 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 16:01:41.866826 kernel: fuse: init (API version 7.39)
Feb 13 16:01:41.866836 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 16:01:41.866873 systemd-journald[1253]: Collecting audit messages is disabled.
Feb 13 16:01:41.866899 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 16:01:41.866916 systemd-journald[1253]: Journal started
Feb 13 16:01:41.866943 systemd-journald[1253]: Runtime Journal (/run/log/journal/44ec6028e0584f818f24b3c9dc38bb1c) is 8.0M, max 158.8M, 150.8M free.
Feb 13 16:01:41.129328 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 16:01:41.309298 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 16:01:41.309717 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 16:01:41.877389 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 16:01:41.877428 systemd[1]: Stopped verity-setup.service.
Feb 13 16:01:41.877445 kernel: loop: module loaded
Feb 13 16:01:41.887549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 16:01:41.898005 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 16:01:41.900188 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 16:01:41.903076 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 16:01:41.906157 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 16:01:41.908972 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 16:01:41.912040 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 16:01:41.915636 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 16:01:41.919045 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 16:01:41.925245 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:01:41.931276 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 16:01:41.932596 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 16:01:41.936210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 16:01:41.936376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 16:01:41.940215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 16:01:41.940750 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 16:01:41.949914 kernel: ACPI: bus type drm_connector registered
Feb 13 16:01:41.945103 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 16:01:41.945356 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 16:01:41.950983 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 16:01:41.951257 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 16:01:41.955259 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 16:01:41.955574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 16:01:41.959157 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:01:41.962995 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 16:01:41.967091 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 16:01:41.988189 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 16:01:42.000984 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 16:01:42.014953 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 16:01:42.018449 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 16:01:42.018492 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 16:01:42.022434 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 16:01:42.030689 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 16:01:42.034812 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 16:01:42.037885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 16:01:42.039728 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 16:01:42.058749 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 16:01:42.062033 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 16:01:42.067692 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 16:01:42.070788 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 16:01:42.076684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 16:01:42.077310 systemd-journald[1253]: Time spent on flushing to /var/log/journal/44ec6028e0584f818f24b3c9dc38bb1c is 61.512ms for 938 entries.
Feb 13 16:01:42.077310 systemd-journald[1253]: System Journal (/var/log/journal/44ec6028e0584f818f24b3c9dc38bb1c) is 8.0M, max 2.6G, 2.6G free.
Feb 13 16:01:42.190706 systemd-journald[1253]: Received client request to flush runtime journal.
Feb 13 16:01:42.190762 kernel: loop0: detected capacity change from 0 to 141000
Feb 13 16:01:42.094761 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 16:01:42.102736 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 16:01:42.109629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:01:42.113369 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 16:01:42.116936 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 16:01:42.120656 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 16:01:42.131351 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 16:01:42.146659 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 16:01:42.155748 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 16:01:42.168784 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 16:01:42.197124 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 16:01:42.212360 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:01:42.216956 udevadm[1301]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 16:01:42.228698 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Feb 13 16:01:42.228725 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Feb 13 16:01:42.236360 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:01:42.244775 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 16:01:42.253685 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 16:01:42.254575 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 16:01:42.457714 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 16:01:42.470806 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:01:42.496048 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. Feb 13 16:01:42.496072 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. Feb 13 16:01:42.500657 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 16:01:42.517560 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 16:01:42.536556 kernel: loop1: detected capacity change from 0 to 211296 Feb 13 16:01:42.577582 kernel: loop2: detected capacity change from 0 to 28304 Feb 13 16:01:42.876566 kernel: loop3: detected capacity change from 0 to 138184 Feb 13 16:01:43.279560 kernel: loop4: detected capacity change from 0 to 141000 Feb 13 16:01:43.305697 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 16:01:43.318846 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:01:43.324696 kernel: loop5: detected capacity change from 0 to 211296 Feb 13 16:01:43.335157 kernel: loop6: detected capacity change from 0 to 28304 Feb 13 16:01:43.343712 kernel: loop7: detected capacity change from 0 to 138184 Feb 13 16:01:43.357142 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Feb 13 16:01:43.358825 (sd-merge)[1319]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 16:01:43.359359 (sd-merge)[1319]: Merged extensions into '/usr'. Feb 13 16:01:43.363727 systemd[1]: Reloading requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 16:01:43.363747 systemd[1]: Reloading... Feb 13 16:01:43.438668 zram_generator::config[1346]: No configuration found. Feb 13 16:01:43.613436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:01:43.766365 systemd[1]: Reloading finished in 402 ms. Feb 13 16:01:43.804414 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:01:43.813631 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Feb 13 16:01:43.834201 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 16:01:43.861887 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 16:01:43.862009 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 16:01:43.862039 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 16:01:43.862064 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 16:01:43.862090 kernel: hv_vmbus: registering driver hv_balloon Feb 13 16:01:43.847448 systemd[1]: Starting ensure-sysext.service... Feb 13 16:01:43.867381 kernel: Console: switching to colour dummy device 80x25 Feb 13 16:01:43.868579 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 16:01:43.868915 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:01:43.888570 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 16:01:43.897887 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:01:43.993915 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 16:01:43.995399 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 16:01:44.004215 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 16:01:44.007422 systemd-tmpfiles[1443]: ACLs are not supported, ignoring. Feb 13 16:01:44.007854 systemd-tmpfiles[1443]: ACLs are not supported, ignoring. Feb 13 16:01:44.030185 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:01:44.030207 systemd-tmpfiles[1443]: Skipping /boot Feb 13 16:01:44.033848 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Feb 13 16:01:44.059484 systemd[1]: Reloading requested from client PID 1439 ('systemctl') (unit ensure-sysext.service)... Feb 13 16:01:44.059504 systemd[1]: Reloading... Feb 13 16:01:44.089856 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:01:44.090230 systemd-tmpfiles[1443]: Skipping /boot Feb 13 16:01:44.200555 zram_generator::config[1485]: No configuration found. Feb 13 16:01:44.208574 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1404) Feb 13 16:01:44.496553 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Feb 13 16:01:44.562986 systemd-networkd[1441]: lo: Link UP Feb 13 16:01:44.564642 systemd-networkd[1441]: lo: Gained carrier Feb 13 16:01:44.572442 systemd-networkd[1441]: Enumeration completed Feb 13 16:01:44.573222 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:01:44.573355 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:01:44.589114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:01:44.625554 kernel: mlx5_core e31b:00:02.0 enP58139s1: Link up Feb 13 16:01:44.646744 kernel: hv_netvsc 7c1e5276-79f5-7c1e-5276-79f57c1e5276 eth0: Data path switched to VF: enP58139s1 Feb 13 16:01:44.647172 systemd-networkd[1441]: enP58139s1: Link UP Feb 13 16:01:44.647350 systemd-networkd[1441]: eth0: Link UP Feb 13 16:01:44.647355 systemd-networkd[1441]: eth0: Gained carrier Feb 13 16:01:44.647381 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 16:01:44.652897 systemd-networkd[1441]: enP58139s1: Gained carrier Feb 13 16:01:44.680506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 16:01:44.684701 systemd[1]: Reloading finished in 624 ms. Feb 13 16:01:44.693637 systemd-networkd[1441]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 16:01:44.711459 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 16:01:44.715462 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:01:44.724324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:01:44.764982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:01:44.769863 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 16:01:44.808909 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 16:01:44.812265 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:01:44.813732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:01:44.820638 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:01:44.827825 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:01:44.834816 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:01:44.838667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:01:44.847183 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 16:01:44.851839 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Feb 13 16:01:44.857643 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 16:01:44.876861 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:01:44.880200 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 16:01:44.891848 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 16:01:44.903910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:01:44.907228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:01:44.912272 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 16:01:44.916342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:01:44.916965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:01:44.922445 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:01:44.922654 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:01:44.928001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:01:44.928642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:01:44.932869 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:01:44.933072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:01:44.936714 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 16:01:44.943078 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 16:01:44.949188 systemd[1]: Finished ensure-sysext.service. Feb 13 16:01:44.970245 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Feb 13 16:01:44.974520 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:01:44.974826 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:01:44.976981 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 16:01:44.997870 augenrules[1644]: No rules Feb 13 16:01:44.998684 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:01:44.998937 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 16:01:45.077151 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:01:45.083176 systemd-resolved[1614]: Positive Trust Anchors: Feb 13 16:01:45.083206 systemd-resolved[1614]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:01:45.083270 systemd-resolved[1614]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:01:45.119270 systemd-resolved[1614]: Using system hostname 'ci-4186.1.1-a-f44757c054'. Feb 13 16:01:45.122594 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:01:45.126969 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 16:01:45.131710 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 16:01:45.132971 systemd[1]: Reached target network.target - Network. Feb 13 16:01:45.133249 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:01:45.143872 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 16:01:45.154886 lvm[1651]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:01:45.170005 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 16:01:45.172050 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:01:45.179938 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 16:01:45.212538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:01:45.954867 systemd-networkd[1441]: eth0: Gained IPv6LL Feb 13 16:01:45.955703 systemd-networkd[1441]: enP58139s1: Gained IPv6LL Feb 13 16:01:45.959117 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 16:01:45.962930 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 16:01:47.049823 ldconfig[1285]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 16:01:47.062421 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 16:01:47.069904 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 16:01:47.093283 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 16:01:47.096683 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:01:47.099870 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Feb 13 16:01:47.103241 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 16:01:47.106819 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 16:01:47.114822 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 16:01:47.118013 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 16:01:47.121499 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 16:01:47.121571 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:01:47.123980 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:01:47.127369 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 16:01:47.131804 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 16:01:47.143577 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 16:01:47.147123 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 16:01:47.150024 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:01:47.152553 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:01:47.155182 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:01:47.155213 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:01:47.163638 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 16:01:47.167741 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 16:01:47.175733 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 16:01:47.185723 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Feb 13 16:01:47.190742 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 16:01:47.202732 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 16:01:47.209804 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 16:01:47.209865 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 16:01:47.216683 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 16:01:47.220950 jq[1667]: false Feb 13 16:01:47.223965 KVP[1672]: KVP starting; pid is:1672 Feb 13 16:01:47.225413 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 16:01:47.233678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:01:47.236575 (chronyd)[1663]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 16:01:47.242785 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 16:01:47.254088 KVP[1672]: KVP LIC Version: 3.1 Feb 13 16:01:47.255426 kernel: hv_utils: KVP IC version 4.0 Feb 13 16:01:47.257690 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 16:01:47.260606 chronyd[1678]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 16:01:47.268803 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 16:01:47.277919 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 16:01:47.277966 chronyd[1678]: Timezone right/UTC failed leap second check, ignoring Feb 13 16:01:47.281728 chronyd[1678]: Loaded seccomp filter (level 2) Feb 13 16:01:47.285699 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 16:01:47.289190 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 16:01:47.289988 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 16:01:47.302904 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 16:01:47.308769 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 16:01:47.315520 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 16:01:47.328941 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 16:01:47.329300 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 16:01:47.338441 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 16:01:47.339180 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 16:01:47.348454 extend-filesystems[1671]: Found loop4 Feb 13 16:01:47.377764 extend-filesystems[1671]: Found loop5 Feb 13 16:01:47.377764 extend-filesystems[1671]: Found loop6 Feb 13 16:01:47.377764 extend-filesystems[1671]: Found loop7 Feb 13 16:01:47.377764 extend-filesystems[1671]: Found sda Feb 13 16:01:47.377764 extend-filesystems[1671]: Found sda1 Feb 13 16:01:47.377764 extend-filesystems[1671]: Found sda2 Feb 13 16:01:47.377764 extend-filesystems[1671]: Found sda3 Feb 13 16:01:47.377764 extend-filesystems[1671]: Found usr Feb 13 16:01:47.377764 extend-filesystems[1671]: Found sda4 Feb 13 16:01:47.377764 extend-filesystems[1671]: Found sda6 Feb 13 16:01:47.377764 extend-filesystems[1671]: Found sda7 Feb 13 16:01:47.377764 extend-filesystems[1671]: Found sda9 Feb 13 16:01:47.377764 extend-filesystems[1671]: Checking size of /dev/sda9 Feb 13 16:01:47.483682 update_engine[1684]: I20250213 16:01:47.376453 1684 main.cc:92] Flatcar Update Engine starting Feb 13 16:01:47.483682 update_engine[1684]: I20250213 16:01:47.393385 1684 update_check_scheduler.cc:74] Next update check in 2m26s Feb 13 16:01:47.366255 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 16:01:47.366079 dbus-daemon[1666]: [system] SELinux support is enabled Feb 13 16:01:47.498576 extend-filesystems[1671]: Old size kept for /dev/sda9 Feb 13 16:01:47.498576 extend-filesystems[1671]: Found sr0 Feb 13 16:01:47.384711 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 16:01:47.514619 jq[1686]: true Feb 13 16:01:47.384752 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Feb 13 16:01:47.393952 (ntainerd)[1702]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 16:01:47.522911 jq[1710]: true Feb 13 16:01:47.400854 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 16:01:47.400883 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 16:01:47.406161 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 16:01:47.406415 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 16:01:47.417624 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 16:01:47.446728 systemd[1]: Started update-engine.service - Update Engine. Feb 13 16:01:47.450805 systemd-logind[1683]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 16:01:47.453882 systemd-logind[1683]: New seat seat0. Feb 13 16:01:47.465958 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 16:01:47.466198 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 16:01:47.486654 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 16:01:47.504786 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 16:01:47.546591 coreos-metadata[1665]: Feb 13 16:01:47.544 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 16:01:47.550026 coreos-metadata[1665]: Feb 13 16:01:47.549 INFO Fetch successful Feb 13 16:01:47.550026 coreos-metadata[1665]: Feb 13 16:01:47.549 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 16:01:47.558438 coreos-metadata[1665]: Feb 13 16:01:47.558 INFO Fetch successful Feb 13 16:01:47.560391 coreos-metadata[1665]: Feb 13 16:01:47.560 INFO Fetching http://168.63.129.16/machine/20f3921c-d802-4611-8080-7db5dda41607/c3bd63da%2D22d7%2D4f0c%2D8d8c%2D2892eaed279a.%5Fci%2D4186.1.1%2Da%2Df44757c054?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 16:01:47.562833 coreos-metadata[1665]: Feb 13 16:01:47.562 INFO Fetch successful Feb 13 16:01:47.563283 coreos-metadata[1665]: Feb 13 16:01:47.563 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 16:01:47.574639 coreos-metadata[1665]: Feb 13 16:01:47.574 INFO Fetch successful Feb 13 16:01:47.632710 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1406) Feb 13 16:01:47.657706 bash[1744]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:01:47.660742 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 16:01:47.677220 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 16:01:47.698974 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 16:01:47.701426 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Feb 13 16:01:47.849827 locksmithd[1718]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 16:01:47.922038 sshd_keygen[1708]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 16:01:47.952277 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 16:01:47.962912 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 16:01:47.972591 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 16:01:47.991283 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 16:01:47.991617 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 16:01:48.013931 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 16:01:48.018017 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 16:01:48.035612 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 16:01:48.045990 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 16:01:48.057588 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 16:01:48.060836 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 16:01:48.658817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:01:48.666807 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:01:48.715195 containerd[1702]: time="2025-02-13T16:01:48.714672000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 16:01:48.752559 containerd[1702]: time="2025-02-13T16:01:48.752269300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:01:48.754798 containerd[1702]: time="2025-02-13T16:01:48.754752500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.754914100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.754945500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.755165800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.755194600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.755287100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.755307700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.755572200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.755595400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.755615300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.755628800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.755752300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756408 containerd[1702]: time="2025-02-13T16:01:48.756045900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756859 containerd[1702]: time="2025-02-13T16:01:48.756217400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:01:48.756859 containerd[1702]: time="2025-02-13T16:01:48.756237100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 16:01:48.756859 containerd[1702]: time="2025-02-13T16:01:48.756330400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 16:01:48.756859 containerd[1702]: time="2025-02-13T16:01:48.756381400Z" level=info msg="metadata content store policy set" policy=shared Feb 13 16:01:48.768253 containerd[1702]: time="2025-02-13T16:01:48.768210100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 16:01:48.768464 containerd[1702]: time="2025-02-13T16:01:48.768432100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 16:01:48.768759 containerd[1702]: time="2025-02-13T16:01:48.768727800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 16:01:48.768830 containerd[1702]: time="2025-02-13T16:01:48.768808100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 16:01:48.768876 containerd[1702]: time="2025-02-13T16:01:48.768841400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 16:01:48.769040 containerd[1702]: time="2025-02-13T16:01:48.769016200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769389100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769546400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769570200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769591600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769611400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769630400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769647300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769666700Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769689000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769707900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769726100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769750900Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769779900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.769947 containerd[1702]: time="2025-02-13T16:01:48.769798200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.769816800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.769837000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.769852700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.769871200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.769901400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.769921800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.769968000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.769989100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.770012500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.770029000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.770045500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.770065300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.770094400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.770112100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770462 containerd[1702]: time="2025-02-13T16:01:48.770136500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 16:01:48.770937 containerd[1702]: time="2025-02-13T16:01:48.770213300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 16:01:48.770937 containerd[1702]: time="2025-02-13T16:01:48.770237300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 16:01:48.770937 containerd[1702]: time="2025-02-13T16:01:48.770252900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 16:01:48.770937 containerd[1702]: time="2025-02-13T16:01:48.770342400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 16:01:48.770937 containerd[1702]: time="2025-02-13T16:01:48.770357400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 16:01:48.770937 containerd[1702]: time="2025-02-13T16:01:48.770380600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 16:01:48.770937 containerd[1702]: time="2025-02-13T16:01:48.770395200Z" level=info msg="NRI interface is disabled by configuration." Feb 13 16:01:48.770937 containerd[1702]: time="2025-02-13T16:01:48.770409600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 16:01:48.771213 containerd[1702]: time="2025-02-13T16:01:48.770864400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 16:01:48.771213 containerd[1702]: time="2025-02-13T16:01:48.770939600Z" level=info msg="Connect containerd service" Feb 13 16:01:48.771213 containerd[1702]: time="2025-02-13T16:01:48.770982500Z" level=info msg="using legacy CRI server" Feb 13 16:01:48.771213 containerd[1702]: time="2025-02-13T16:01:48.770994400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 16:01:48.771213 containerd[1702]: time="2025-02-13T16:01:48.771199100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 16:01:48.773786 containerd[1702]: time="2025-02-13T16:01:48.772184200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:01:48.773786 containerd[1702]: time="2025-02-13T16:01:48.772315500Z" level=info msg="Start subscribing containerd event" Feb 13 16:01:48.773786 containerd[1702]: time="2025-02-13T16:01:48.772381900Z" level=info msg="Start recovering state" Feb 13 16:01:48.773786 containerd[1702]: time="2025-02-13T16:01:48.772457800Z" level=info msg="Start event monitor" Feb 13 16:01:48.773786 containerd[1702]: time="2025-02-13T16:01:48.772474200Z" level=info msg="Start 
snapshots syncer" Feb 13 16:01:48.773786 containerd[1702]: time="2025-02-13T16:01:48.772486100Z" level=info msg="Start cni network conf syncer for default" Feb 13 16:01:48.773786 containerd[1702]: time="2025-02-13T16:01:48.772497200Z" level=info msg="Start streaming server" Feb 13 16:01:48.773786 containerd[1702]: time="2025-02-13T16:01:48.773038100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 16:01:48.773786 containerd[1702]: time="2025-02-13T16:01:48.773104500Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 16:01:48.773375 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 16:01:48.778622 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 16:01:48.781652 containerd[1702]: time="2025-02-13T16:01:48.781627000Z" level=info msg="containerd successfully booted in 0.068304s" Feb 13 16:01:48.783243 systemd[1]: Startup finished in 726ms (firmware) + 21.927s (loader) + 1.052s (kernel) + 9.450s (initrd) + 9.566s (userspace) = 42.722s. Feb 13 16:01:48.815446 agetty[1818]: failed to open credentials directory Feb 13 16:01:48.816225 agetty[1819]: failed to open credentials directory Feb 13 16:01:49.002044 login[1818]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Feb 13 16:01:49.005019 login[1819]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 16:01:49.017490 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 16:01:49.025851 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 16:01:49.034390 systemd-logind[1683]: New session 2 of user core. Feb 13 16:01:49.042515 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 16:01:49.050127 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 16:01:49.148972 (systemd)[1844]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 16:01:49.369782 systemd[1844]: Queued start job for default target default.target. Feb 13 16:01:49.374906 systemd[1844]: Created slice app.slice - User Application Slice. Feb 13 16:01:49.374939 systemd[1844]: Reached target paths.target - Paths. Feb 13 16:01:49.374959 systemd[1844]: Reached target timers.target - Timers. Feb 13 16:01:49.378082 systemd[1844]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 16:01:49.400403 systemd[1844]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 16:01:49.400600 systemd[1844]: Reached target sockets.target - Sockets. Feb 13 16:01:49.400621 systemd[1844]: Reached target basic.target - Basic System. Feb 13 16:01:49.400751 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 16:01:49.401103 systemd[1844]: Reached target default.target - Main User Target. Feb 13 16:01:49.401152 systemd[1844]: Startup finished in 241ms. Feb 13 16:01:49.407075 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 16:01:49.617855 kubelet[1829]: E0213 16:01:49.617771 1829 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:01:49.621031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:01:49.621246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:01:49.621745 systemd[1]: kubelet.service: Consumed 1.018s CPU time. 
Feb 13 16:01:49.796432 waagent[1815]: 2025-02-13T16:01:49.796315Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.798035Z INFO Daemon Daemon OS: flatcar 4186.1.1 Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.799130Z INFO Daemon Daemon Python: 3.11.10 Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.800378Z INFO Daemon Daemon Run daemon Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.801405Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.1' Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.802200Z INFO Daemon Daemon Using waagent for provisioning Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.803215Z INFO Daemon Daemon Activate resource disk Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.804142Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.809883Z INFO Daemon Daemon Found device: None Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.810037Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.811026Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.811886Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 16:01:49.833666 waagent[1815]: 2025-02-13T16:01:49.812855Z INFO Daemon Daemon Running default provisioning handler Feb 13 16:01:49.837340 waagent[1815]: 2025-02-13T16:01:49.837257Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Feb 13 16:01:49.844249 waagent[1815]: 2025-02-13T16:01:49.844188Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 16:01:49.853199 waagent[1815]: 2025-02-13T16:01:49.845245Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 16:01:49.853199 waagent[1815]: 2025-02-13T16:01:49.846197Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 16:01:49.948247 waagent[1815]: 2025-02-13T16:01:49.945452Z INFO Daemon Daemon Successfully mounted dvd Feb 13 16:01:49.971509 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 16:01:49.973612 waagent[1815]: 2025-02-13T16:01:49.973501Z INFO Daemon Daemon Detect protocol endpoint Feb 13 16:01:49.988575 waagent[1815]: 2025-02-13T16:01:49.975337Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 16:01:49.988575 waagent[1815]: 2025-02-13T16:01:49.977179Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 13 16:01:49.988575 waagent[1815]: 2025-02-13T16:01:49.978034Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 16:01:49.988575 waagent[1815]: 2025-02-13T16:01:49.979056Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 16:01:49.988575 waagent[1815]: 2025-02-13T16:01:49.979806Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 16:01:50.003988 login[1818]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 16:01:50.009037 systemd-logind[1683]: New session 1 of user core. Feb 13 16:01:50.011373 waagent[1815]: 2025-02-13T16:01:50.011321Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 16:01:50.015697 waagent[1815]: 2025-02-13T16:01:50.015496Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 16:01:50.020040 waagent[1815]: 2025-02-13T16:01:50.016987Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 16:01:50.020304 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 16:01:50.129382 waagent[1815]: 2025-02-13T16:01:50.129259Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 16:01:50.138696 waagent[1815]: 2025-02-13T16:01:50.130725Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 16:01:50.138696 waagent[1815]: 2025-02-13T16:01:50.135018Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 16:01:50.151379 waagent[1815]: 2025-02-13T16:01:50.151318Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 16:01:50.165174 waagent[1815]: 2025-02-13T16:01:50.152988Z INFO Daemon Feb 13 16:01:50.165174 waagent[1815]: 2025-02-13T16:01:50.154797Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: fd3eebae-7e1c-4369-86f2-9af01784f696 eTag: 13125538785935148255 source: Fabric] Feb 13 16:01:50.165174 waagent[1815]: 2025-02-13T16:01:50.156405Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 16:01:50.165174 waagent[1815]: 2025-02-13T16:01:50.157513Z INFO Daemon Feb 13 16:01:50.165174 waagent[1815]: 2025-02-13T16:01:50.157924Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 16:01:50.165174 waagent[1815]: 2025-02-13T16:01:50.161953Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 16:01:50.315488 waagent[1815]: 2025-02-13T16:01:50.315324Z INFO Daemon Downloaded certificate {'thumbprint': '55B26F8D705B45D002ABDE334C303BE933049036', 'hasPrivateKey': True} Feb 13 16:01:50.325645 waagent[1815]: 2025-02-13T16:01:50.317041Z INFO Daemon Downloaded certificate {'thumbprint': '4587DBEDE3FD8401999DAE860B04BF39ECCDBAF2', 'hasPrivateKey': False} Feb 13 16:01:50.325645 waagent[1815]: 2025-02-13T16:01:50.318684Z INFO Daemon Fetch goal state completed Feb 13 16:01:50.350797 waagent[1815]: 2025-02-13T16:01:50.350673Z INFO Daemon Daemon Starting provisioning Feb 13 16:01:50.358287 waagent[1815]: 2025-02-13T16:01:50.352260Z INFO Daemon Daemon Handle ovf-env.xml. 
Feb 13 16:01:50.358287 waagent[1815]: 2025-02-13T16:01:50.353105Z INFO Daemon Daemon Set hostname [ci-4186.1.1-a-f44757c054] Feb 13 16:01:50.377377 waagent[1815]: 2025-02-13T16:01:50.377272Z INFO Daemon Daemon Publish hostname [ci-4186.1.1-a-f44757c054] Feb 13 16:01:50.385916 waagent[1815]: 2025-02-13T16:01:50.379124Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 16:01:50.385916 waagent[1815]: 2025-02-13T16:01:50.380210Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 16:01:50.401889 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:01:50.401898 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:01:50.401958 systemd-networkd[1441]: eth0: DHCP lease lost Feb 13 16:01:50.403223 waagent[1815]: 2025-02-13T16:01:50.403139Z INFO Daemon Daemon Create user account if not exists Feb 13 16:01:50.420144 waagent[1815]: 2025-02-13T16:01:50.404696Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 16:01:50.420144 waagent[1815]: 2025-02-13T16:01:50.405070Z INFO Daemon Daemon Configure sudoer Feb 13 16:01:50.420144 waagent[1815]: 2025-02-13T16:01:50.405831Z INFO Daemon Daemon Configure sshd Feb 13 16:01:50.420144 waagent[1815]: 2025-02-13T16:01:50.406600Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 16:01:50.420144 waagent[1815]: 2025-02-13T16:01:50.407287Z INFO Daemon Daemon Deploy ssh public key. Feb 13 16:01:50.421625 systemd-networkd[1441]: eth0: DHCPv6 lease lost Feb 13 16:01:50.459577 systemd-networkd[1441]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 16:01:59.871753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 13 16:01:59.878769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:01:59.990867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:02:00.000888 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:02:00.621802 kubelet[1909]: E0213 16:02:00.621727 1909 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:02:00.626232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:02:00.626435 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:02:10.844924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 16:02:10.855026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:02:10.950506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 16:02:10.955513 (kubelet)[1926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:02:11.074974 chronyd[1678]: Selected source PHC0 Feb 13 16:02:11.515312 kubelet[1926]: E0213 16:02:11.515213 1926 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:02:11.518575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:02:11.518784 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:02:20.480804 waagent[1815]: 2025-02-13T16:02:20.480718Z INFO Daemon Daemon Provisioning complete Feb 13 16:02:20.494398 waagent[1815]: 2025-02-13T16:02:20.494323Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 16:02:20.501555 waagent[1815]: 2025-02-13T16:02:20.495620Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 13 16:02:20.501555 waagent[1815]: 2025-02-13T16:02:20.496055Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 16:02:20.625085 waagent[1934]: 2025-02-13T16:02:20.624966Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 16:02:20.625597 waagent[1934]: 2025-02-13T16:02:20.625155Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.1 Feb 13 16:02:20.625597 waagent[1934]: 2025-02-13T16:02:20.625239Z INFO ExtHandler ExtHandler Python: 3.11.10 Feb 13 16:02:20.679881 waagent[1934]: 2025-02-13T16:02:20.679756Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 16:02:20.680174 waagent[1934]: 2025-02-13T16:02:20.680109Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 16:02:20.680295 waagent[1934]: 2025-02-13T16:02:20.680241Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 16:02:20.688606 waagent[1934]: 2025-02-13T16:02:20.688525Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 16:02:20.699431 waagent[1934]: 2025-02-13T16:02:20.699367Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 16:02:20.699971 waagent[1934]: 2025-02-13T16:02:20.699915Z INFO ExtHandler Feb 13 16:02:20.700057 waagent[1934]: 2025-02-13T16:02:20.700014Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9680b5d2-fe03-49da-afc6-97356fd6e5eb eTag: 13125538785935148255 source: Fabric] Feb 13 16:02:20.700384 waagent[1934]: 2025-02-13T16:02:20.700332Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 13 16:02:20.700996 waagent[1934]: 2025-02-13T16:02:20.700939Z INFO ExtHandler Feb 13 16:02:20.701071 waagent[1934]: 2025-02-13T16:02:20.701031Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 16:02:20.704519 waagent[1934]: 2025-02-13T16:02:20.704474Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 16:02:20.771778 waagent[1934]: 2025-02-13T16:02:20.771620Z INFO ExtHandler Downloaded certificate {'thumbprint': '55B26F8D705B45D002ABDE334C303BE933049036', 'hasPrivateKey': True} Feb 13 16:02:20.772204 waagent[1934]: 2025-02-13T16:02:20.772144Z INFO ExtHandler Downloaded certificate {'thumbprint': '4587DBEDE3FD8401999DAE860B04BF39ECCDBAF2', 'hasPrivateKey': False} Feb 13 16:02:20.772754 waagent[1934]: 2025-02-13T16:02:20.772703Z INFO ExtHandler Fetch goal state completed Feb 13 16:02:20.788192 waagent[1934]: 2025-02-13T16:02:20.788118Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1934 Feb 13 16:02:20.788364 waagent[1934]: 2025-02-13T16:02:20.788309Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 16:02:20.789976 waagent[1934]: 2025-02-13T16:02:20.789914Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 16:02:20.790373 waagent[1934]: 2025-02-13T16:02:20.790325Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 16:02:20.835581 waagent[1934]: 2025-02-13T16:02:20.835498Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 16:02:20.835904 waagent[1934]: 2025-02-13T16:02:20.835837Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 16:02:20.844662 waagent[1934]: 2025-02-13T16:02:20.844581Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Feb 13 16:02:20.852266 systemd[1]: Reloading requested from client PID 1949 ('systemctl') (unit waagent.service)... Feb 13 16:02:20.852283 systemd[1]: Reloading... Feb 13 16:02:20.946567 zram_generator::config[1986]: No configuration found. Feb 13 16:02:21.064216 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:02:21.153062 systemd[1]: Reloading finished in 300 ms. Feb 13 16:02:21.186554 waagent[1934]: 2025-02-13T16:02:21.182216Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 16:02:21.191373 systemd[1]: Reloading requested from client PID 2040 ('systemctl') (unit waagent.service)... Feb 13 16:02:21.191390 systemd[1]: Reloading... Feb 13 16:02:21.270563 zram_generator::config[2070]: No configuration found. Feb 13 16:02:21.403382 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:02:21.496141 systemd[1]: Reloading finished in 304 ms. Feb 13 16:02:21.525064 waagent[1934]: 2025-02-13T16:02:21.520404Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 16:02:21.525064 waagent[1934]: 2025-02-13T16:02:21.520648Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 16:02:21.528082 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 16:02:21.539105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:02:21.733738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 16:02:21.744846 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:02:21.791873 kubelet[2145]: E0213 16:02:21.791752 2145 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:02:21.795157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:02:21.795364 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:02:22.411687 waagent[1934]: 2025-02-13T16:02:22.411584Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 16:02:22.412399 waagent[1934]: 2025-02-13T16:02:22.412336Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 16:02:22.413227 waagent[1934]: 2025-02-13T16:02:22.413163Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 16:02:22.413729 waagent[1934]: 2025-02-13T16:02:22.413676Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 13 16:02:22.413820 waagent[1934]: 2025-02-13T16:02:22.413766Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 16:02:22.414170 waagent[1934]: 2025-02-13T16:02:22.414119Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 16:02:22.414293 waagent[1934]: 2025-02-13T16:02:22.414218Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 16:02:22.414346 waagent[1934]: 2025-02-13T16:02:22.414270Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 16:02:22.414639 waagent[1934]: 2025-02-13T16:02:22.414587Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 16:02:22.414926 waagent[1934]: 2025-02-13T16:02:22.414877Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 13 16:02:22.414997 waagent[1934]: 2025-02-13T16:02:22.414950Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 16:02:22.415223 waagent[1934]: 2025-02-13T16:02:22.415155Z INFO EnvHandler ExtHandler Configure routes Feb 13 16:02:22.415484 waagent[1934]: 2025-02-13T16:02:22.415433Z INFO EnvHandler ExtHandler Gateway:None Feb 13 16:02:22.415630 waagent[1934]: 2025-02-13T16:02:22.415582Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 16:02:22.416007 waagent[1934]: 2025-02-13T16:02:22.415965Z INFO EnvHandler ExtHandler Routes:None Feb 13 16:02:22.416316 waagent[1934]: 2025-02-13T16:02:22.416261Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 16:02:22.419551 waagent[1934]: 2025-02-13T16:02:22.418049Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 13 16:02:22.419551 waagent[1934]: 2025-02-13T16:02:22.418298Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 13 16:02:22.419551 waagent[1934]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 13 16:02:22.419551 waagent[1934]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 13 16:02:22.419551 waagent[1934]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 13 16:02:22.419551 waagent[1934]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 13 16:02:22.419551 waagent[1934]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 13 16:02:22.419551 waagent[1934]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 13 16:02:22.426956 waagent[1934]: 2025-02-13T16:02:22.426912Z INFO ExtHandler ExtHandler
Feb 13 16:02:22.427810 waagent[1934]: 2025-02-13T16:02:22.427768Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 600e7390-60af-4e48-99fe-e209a31b8050 correlation f3e6cde3-a6b3-422e-bdd1-1b6a64705ef7 created: 2025-02-13T16:00:55.262680Z]
Feb 13 16:02:22.428392 waagent[1934]: 2025-02-13T16:02:22.428339Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 13 16:02:22.429828 waagent[1934]: 2025-02-13T16:02:22.429771Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Feb 13 16:02:22.468220 waagent[1934]: 2025-02-13T16:02:22.468147Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 37122D4C-CE14-4786-A824-327B0AE4B649;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Feb 13 16:02:22.493207 waagent[1934]: 2025-02-13T16:02:22.493125Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules:
Feb 13 16:02:22.493207 waagent[1934]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 16:02:22.493207 waagent[1934]: pkts bytes target prot opt in out source destination
Feb 13 16:02:22.493207 waagent[1934]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 13 16:02:22.493207 waagent[1934]: pkts bytes target prot opt in out source destination
Feb 13 16:02:22.493207 waagent[1934]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 16:02:22.493207 waagent[1934]: pkts bytes target prot opt in out source destination
Feb 13 16:02:22.493207 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 13 16:02:22.493207 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 13 16:02:22.493207 waagent[1934]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 13 16:02:22.497504 waagent[1934]: 2025-02-13T16:02:22.497435Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 13 16:02:22.497504 waagent[1934]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 16:02:22.497504 waagent[1934]: pkts bytes target prot opt in out source destination
Feb 13 16:02:22.497504 waagent[1934]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 13 16:02:22.497504 waagent[1934]: pkts bytes target prot opt in out source destination
Feb 13 16:02:22.497504 waagent[1934]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 16:02:22.497504 waagent[1934]: pkts bytes target prot opt in out source destination
Feb 13 16:02:22.497504 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 13 16:02:22.497504 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 13 16:02:22.497504 waagent[1934]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 13 16:02:22.497899 waagent[1934]: 2025-02-13T16:02:22.497809Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 13 16:02:22.498194 waagent[1934]: 2025-02-13T16:02:22.498141Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 13 16:02:22.498194 waagent[1934]: Executing ['ip', '-a', '-o', 'link']:
Feb 13 16:02:22.498194 waagent[1934]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 13 16:02:22.498194 waagent[1934]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:76:79:f5 brd ff:ff:ff:ff:ff:ff
Feb 13 16:02:22.498194 waagent[1934]: 3: enP58139s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:76:79:f5 brd ff:ff:ff:ff:ff:ff\ altname enP58139p0s2
Feb 13 16:02:22.498194 waagent[1934]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 13 16:02:22.498194 waagent[1934]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 13 16:02:22.498194 waagent[1934]: 2: eth0 inet 10.200.8.22/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 13 16:02:22.498194 waagent[1934]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 13 16:02:22.498194 waagent[1934]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Feb 13 16:02:22.498194 waagent[1934]: 2: eth0 inet6 fe80::7e1e:52ff:fe76:79f5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Feb 13 16:02:22.498194 waagent[1934]: 3: enP58139s1 inet6 fe80::7e1e:52ff:fe76:79f5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Feb 13 16:02:31.844791 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Feb 13 16:02:31.857814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 16:02:31.959624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 16:02:31.964376 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:02:32.010622 kubelet[2189]: E0213 16:02:32.010501 2189 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:02:32.013741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:02:32.013945 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:02:32.023556 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 13 16:02:32.738843 update_engine[1684]: I20250213 16:02:32.738700 1684 update_attempter.cc:509] Updating boot flags... Feb 13 16:02:32.810566 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2213) Feb 13 16:02:32.942246 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2217) Feb 13 16:02:33.073633 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2217) Feb 13 16:02:42.094858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 16:02:42.107782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:02:42.205154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 16:02:42.215857 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:02:42.261694 kubelet[2371]: E0213 16:02:42.261619 2371 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:02:42.264964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:02:42.265163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:02:52.345015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 16:02:52.350823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:02:52.707276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:02:52.717874 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:02:52.979189 kubelet[2387]: E0213 16:02:52.978976 2387 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:02:52.982115 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:02:52.982318 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:03:02.981403 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Feb 13 16:03:02.986850 systemd[1]: Started sshd@0-10.200.8.22:22-10.200.16.10:57136.service - OpenSSH per-connection server daemon (10.200.16.10:57136). Feb 13 16:03:02.987912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 16:03:02.989762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:03:03.639636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:03:03.644224 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:03:03.691852 kubelet[2407]: E0213 16:03:03.691794 2407 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:03:03.695098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:03:03.695325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:03:04.210948 sshd[2397]: Accepted publickey for core from 10.200.16.10 port 57136 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:04.212754 sshd-session[2397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:04.217841 systemd-logind[1683]: New session 3 of user core. Feb 13 16:03:04.224701 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:03:04.765859 systemd[1]: Started sshd@1-10.200.8.22:22-10.200.16.10:57148.service - OpenSSH per-connection server daemon (10.200.16.10:57148). 
Feb 13 16:03:05.390390 sshd[2418]: Accepted publickey for core from 10.200.16.10 port 57148 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:05.392077 sshd-session[2418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:05.396442 systemd-logind[1683]: New session 4 of user core. Feb 13 16:03:05.406697 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 16:03:05.836959 sshd[2420]: Connection closed by 10.200.16.10 port 57148 Feb 13 16:03:05.838205 sshd-session[2418]: pam_unix(sshd:session): session closed for user core Feb 13 16:03:05.842831 systemd[1]: sshd@1-10.200.8.22:22-10.200.16.10:57148.service: Deactivated successfully. Feb 13 16:03:05.844917 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:03:05.845650 systemd-logind[1683]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:03:05.846622 systemd-logind[1683]: Removed session 4. Feb 13 16:03:05.951865 systemd[1]: Started sshd@2-10.200.8.22:22-10.200.16.10:57162.service - OpenSSH per-connection server daemon (10.200.16.10:57162). Feb 13 16:03:06.585784 sshd[2425]: Accepted publickey for core from 10.200.16.10 port 57162 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:06.587563 sshd-session[2425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:06.591836 systemd-logind[1683]: New session 5 of user core. Feb 13 16:03:06.598690 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:03:07.027423 sshd[2427]: Connection closed by 10.200.16.10 port 57162 Feb 13 16:03:07.028461 sshd-session[2425]: pam_unix(sshd:session): session closed for user core Feb 13 16:03:07.033174 systemd[1]: sshd@2-10.200.8.22:22-10.200.16.10:57162.service: Deactivated successfully. Feb 13 16:03:07.035435 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:03:07.036190 systemd-logind[1683]: Session 5 logged out. 
Waiting for processes to exit. Feb 13 16:03:07.037134 systemd-logind[1683]: Removed session 5. Feb 13 16:03:07.143088 systemd[1]: Started sshd@3-10.200.8.22:22-10.200.16.10:57176.service - OpenSSH per-connection server daemon (10.200.16.10:57176). Feb 13 16:03:07.774018 sshd[2432]: Accepted publickey for core from 10.200.16.10 port 57176 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:07.775898 sshd-session[2432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:07.781674 systemd-logind[1683]: New session 6 of user core. Feb 13 16:03:07.788717 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 16:03:08.245613 sshd[2434]: Connection closed by 10.200.16.10 port 57176 Feb 13 16:03:08.246681 sshd-session[2432]: pam_unix(sshd:session): session closed for user core Feb 13 16:03:08.250096 systemd[1]: sshd@3-10.200.8.22:22-10.200.16.10:57176.service: Deactivated successfully. Feb 13 16:03:08.252628 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:03:08.254328 systemd-logind[1683]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:03:08.255278 systemd-logind[1683]: Removed session 6. Feb 13 16:03:08.360865 systemd[1]: Started sshd@4-10.200.8.22:22-10.200.16.10:57180.service - OpenSSH per-connection server daemon (10.200.16.10:57180). Feb 13 16:03:08.985011 sshd[2439]: Accepted publickey for core from 10.200.16.10 port 57180 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:08.986773 sshd-session[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:08.991068 systemd-logind[1683]: New session 7 of user core. Feb 13 16:03:09.000711 systemd[1]: Started session-7.scope - Session 7 of User core. 
Feb 13 16:03:09.497992 sudo[2442]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 16:03:09.498365 sudo[2442]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:03:09.537227 sudo[2442]: pam_unix(sudo:session): session closed for user root Feb 13 16:03:09.661480 sshd[2441]: Connection closed by 10.200.16.10 port 57180 Feb 13 16:03:09.662757 sshd-session[2439]: pam_unix(sshd:session): session closed for user core Feb 13 16:03:09.667801 systemd[1]: sshd@4-10.200.8.22:22-10.200.16.10:57180.service: Deactivated successfully. Feb 13 16:03:09.670187 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 16:03:09.671088 systemd-logind[1683]: Session 7 logged out. Waiting for processes to exit. Feb 13 16:03:09.672209 systemd-logind[1683]: Removed session 7. Feb 13 16:03:09.778123 systemd[1]: Started sshd@5-10.200.8.22:22-10.200.16.10:46032.service - OpenSSH per-connection server daemon (10.200.16.10:46032). Feb 13 16:03:10.405407 sshd[2447]: Accepted publickey for core from 10.200.16.10 port 46032 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:10.407383 sshd-session[2447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:10.411716 systemd-logind[1683]: New session 8 of user core. Feb 13 16:03:10.418718 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 16:03:10.750245 sudo[2451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 16:03:10.750711 sudo[2451]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:03:10.754278 sudo[2451]: pam_unix(sudo:session): session closed for user root Feb 13 16:03:10.759461 sudo[2450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 16:03:10.759846 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:03:10.775141 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 16:03:10.800958 augenrules[2473]: No rules Feb 13 16:03:10.802444 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:03:10.802726 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 16:03:10.804466 sudo[2450]: pam_unix(sudo:session): session closed for user root Feb 13 16:03:10.928295 sshd[2449]: Connection closed by 10.200.16.10 port 46032 Feb 13 16:03:10.929324 sshd-session[2447]: pam_unix(sshd:session): session closed for user core Feb 13 16:03:10.933678 systemd[1]: sshd@5-10.200.8.22:22-10.200.16.10:46032.service: Deactivated successfully. Feb 13 16:03:10.935727 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 16:03:10.936468 systemd-logind[1683]: Session 8 logged out. Waiting for processes to exit. Feb 13 16:03:10.937396 systemd-logind[1683]: Removed session 8. Feb 13 16:03:11.039685 systemd[1]: Started sshd@6-10.200.8.22:22-10.200.16.10:46048.service - OpenSSH per-connection server daemon (10.200.16.10:46048). 
Feb 13 16:03:11.668107 sshd[2481]: Accepted publickey for core from 10.200.16.10 port 46048 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:11.669947 sshd-session[2481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:11.675855 systemd-logind[1683]: New session 9 of user core. Feb 13 16:03:11.682702 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 16:03:12.013009 sudo[2484]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:03:12.013369 sudo[2484]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:03:13.166064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:03:13.170835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:03:13.201574 systemd[1]: Reloading requested from client PID 2524 ('systemctl') (unit session-9.scope)... Feb 13 16:03:13.201595 systemd[1]: Reloading... Feb 13 16:03:13.321601 zram_generator::config[2563]: No configuration found. Feb 13 16:03:13.459736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:03:13.549883 systemd[1]: Reloading finished in 347 ms. Feb 13 16:03:13.610991 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:03:13.611112 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 16:03:13.611460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:03:13.617898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:03:13.901205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 16:03:13.912885 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:03:13.958611 kubelet[2632]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:03:13.958611 kubelet[2632]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:03:13.958611 kubelet[2632]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:03:13.959139 kubelet[2632]: I0213 16:03:13.958661 2632 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:03:14.420876 kubelet[2632]: I0213 16:03:14.420824 2632 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 16:03:14.420876 kubelet[2632]: I0213 16:03:14.420860 2632 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:03:14.421225 kubelet[2632]: I0213 16:03:14.421200 2632 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 16:03:14.439192 kubelet[2632]: I0213 16:03:14.439157 2632 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:03:14.450666 kubelet[2632]: I0213 16:03:14.450639 2632 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:03:14.450940 kubelet[2632]: I0213 16:03:14.450921 2632 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:03:14.451117 kubelet[2632]: I0213 16:03:14.451098 2632 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:03:14.451802 kubelet[2632]: I0213 16:03:14.451777 2632 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:03:14.451870 kubelet[2632]: I0213 16:03:14.451807 2632 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:03:14.451969 kubelet[2632]: I0213 
16:03:14.451936 2632 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:03:14.452418 kubelet[2632]: I0213 16:03:14.452058 2632 kubelet.go:396] "Attempting to sync node with API server" Feb 13 16:03:14.452418 kubelet[2632]: I0213 16:03:14.452082 2632 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:03:14.452418 kubelet[2632]: I0213 16:03:14.452116 2632 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:03:14.452418 kubelet[2632]: I0213 16:03:14.452134 2632 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:03:14.454204 kubelet[2632]: E0213 16:03:14.453949 2632 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:14.454204 kubelet[2632]: E0213 16:03:14.454018 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:14.454661 kubelet[2632]: I0213 16:03:14.454621 2632 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 16:03:14.458436 kubelet[2632]: I0213 16:03:14.458275 2632 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:03:14.461572 kubelet[2632]: W0213 16:03:14.459612 2632 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 16:03:14.462038 kubelet[2632]: I0213 16:03:14.462022 2632 server.go:1256] "Started kubelet" Feb 13 16:03:14.463543 kubelet[2632]: W0213 16:03:14.463355 2632 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.200.8.22" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 16:03:14.463543 kubelet[2632]: E0213 16:03:14.463392 2632 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.22" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 16:03:14.463543 kubelet[2632]: W0213 16:03:14.463459 2632 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 16:03:14.463543 kubelet[2632]: E0213 16:03:14.463472 2632 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 16:03:14.463543 kubelet[2632]: I0213 16:03:14.463516 2632 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:03:14.466388 kubelet[2632]: I0213 16:03:14.466366 2632 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:03:14.467370 kubelet[2632]: I0213 16:03:14.466819 2632 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:03:14.468624 kubelet[2632]: I0213 16:03:14.468599 2632 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:03:14.472737 kubelet[2632]: E0213 16:03:14.472709 2632 event.go:346] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.8.22.1823d0037afc9abc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.8.22,UID:10.200.8.22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.8.22,},FirstTimestamp:2025-02-13 16:03:14.461997756 +0000 UTC m=+0.544518748,LastTimestamp:2025-02-13 16:03:14.461997756 +0000 UTC m=+0.544518748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.8.22,}" Feb 13 16:03:14.473307 kubelet[2632]: I0213 16:03:14.473286 2632 server.go:461] "Adding debug handlers to kubelet server" Feb 13 16:03:14.475315 kubelet[2632]: E0213 16:03:14.475294 2632 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:03:14.477500 kubelet[2632]: E0213 16:03:14.477432 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found" Feb 13 16:03:14.477500 kubelet[2632]: I0213 16:03:14.477468 2632 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:03:14.477623 kubelet[2632]: I0213 16:03:14.477579 2632 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 16:03:14.477667 kubelet[2632]: I0213 16:03:14.477627 2632 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 16:03:14.479313 kubelet[2632]: I0213 16:03:14.479292 2632 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:03:14.479417 kubelet[2632]: I0213 16:03:14.479392 2632 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:03:14.480480 kubelet[2632]: I0213 16:03:14.480459 2632 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:03:14.510085 kubelet[2632]: E0213 16:03:14.510018 2632 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.22\" not found" node="10.200.8.22" Feb 13 16:03:14.512219 kubelet[2632]: I0213 16:03:14.512184 2632 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:03:14.512219 kubelet[2632]: I0213 16:03:14.512217 2632 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:03:14.512357 kubelet[2632]: I0213 16:03:14.512236 2632 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:03:14.519540 kubelet[2632]: I0213 16:03:14.519478 2632 policy_none.go:49] "None policy: Start" Feb 13 16:03:14.520098 kubelet[2632]: I0213 16:03:14.520060 2632 memory_manager.go:170] "Starting memorymanager" policy="None" 
Feb 13 16:03:14.520098 kubelet[2632]: I0213 16:03:14.520089 2632 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 16:03:14.529742 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 16:03:14.540450 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 16:03:14.543489 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 16:03:14.550864 kubelet[2632]: I0213 16:03:14.550481 2632 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 16:03:14.550864 kubelet[2632]: I0213 16:03:14.550815 2632 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 16:03:14.551323 kubelet[2632]: I0213 16:03:14.551182 2632 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 16:03:14.553800 kubelet[2632]: E0213 16:03:14.553008 2632 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.22\" not found"
Feb 13 16:03:14.554672 kubelet[2632]: I0213 16:03:14.554035 2632 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 16:03:14.554753 kubelet[2632]: I0213 16:03:14.554701 2632 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 16:03:14.554753 kubelet[2632]: I0213 16:03:14.554730 2632 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 16:03:14.555177 kubelet[2632]: E0213 16:03:14.554835 2632 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 16:03:14.578460 kubelet[2632]: I0213 16:03:14.578440 2632 kubelet_node_status.go:73] "Attempting to register node" node="10.200.8.22"
Feb 13 16:03:14.583285 kubelet[2632]: I0213 16:03:14.583264 2632 kubelet_node_status.go:76] "Successfully registered node" node="10.200.8.22"
Feb 13 16:03:14.592328 kubelet[2632]: E0213 16:03:14.592304 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:14.692616 kubelet[2632]: E0213 16:03:14.692426 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:14.793106 kubelet[2632]: E0213 16:03:14.793033 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:14.893776 kubelet[2632]: E0213 16:03:14.893699 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:14.994849 kubelet[2632]: E0213 16:03:14.994687 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:15.095475 kubelet[2632]: E0213 16:03:15.095401 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:15.196255 kubelet[2632]: E0213 16:03:15.196184 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:15.297251 kubelet[2632]: E0213 16:03:15.297035 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:15.398194 kubelet[2632]: E0213 16:03:15.398134 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:15.423638 kubelet[2632]: I0213 16:03:15.423323 2632 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 16:03:15.423638 kubelet[2632]: W0213 16:03:15.423519 2632 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 16:03:15.423638 kubelet[2632]: W0213 16:03:15.423577 2632 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 16:03:15.423638 kubelet[2632]: W0213 16:03:15.423606 2632 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 16:03:15.424153 sudo[2484]: pam_unix(sudo:session): session closed for user root
Feb 13 16:03:15.454955 kubelet[2632]: E0213 16:03:15.454883 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:03:15.498828 kubelet[2632]: E0213 16:03:15.498759 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:15.524320 sshd[2483]: Connection closed by 10.200.16.10 port 46048
Feb 13 16:03:15.525223 sshd-session[2481]: pam_unix(sshd:session): session closed for user core
Feb 13 16:03:15.529932 systemd[1]: sshd@6-10.200.8.22:22-10.200.16.10:46048.service: Deactivated successfully.
Feb 13 16:03:15.532076 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 16:03:15.533155 systemd-logind[1683]: Session 9 logged out. Waiting for processes to exit.
Feb 13 16:03:15.534286 systemd-logind[1683]: Removed session 9.
Feb 13 16:03:15.599856 kubelet[2632]: E0213 16:03:15.599819 2632 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.22\" not found"
Feb 13 16:03:15.700812 kubelet[2632]: I0213 16:03:15.700758 2632 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 16:03:15.701259 containerd[1702]: time="2025-02-13T16:03:15.701196737Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 16:03:15.702028 kubelet[2632]: I0213 16:03:15.701519 2632 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 16:03:16.454759 kubelet[2632]: I0213 16:03:16.454699 2632 apiserver.go:52] "Watching apiserver"
Feb 13 16:03:16.455360 kubelet[2632]: E0213 16:03:16.454984 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:03:16.459968 kubelet[2632]: I0213 16:03:16.459927 2632 topology_manager.go:215] "Topology Admit Handler" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" podNamespace="calico-system" podName="csi-node-driver-7djmb"
Feb 13 16:03:16.462582 kubelet[2632]: I0213 16:03:16.460337 2632 topology_manager.go:215] "Topology Admit Handler" podUID="8cb2e15b-c20d-4633-8c94-ff37444f28ff" podNamespace="kube-system" podName="kube-proxy-59cps"
Feb 13 16:03:16.462582 kubelet[2632]: I0213 16:03:16.460476 2632 topology_manager.go:215] "Topology Admit Handler" podUID="1a3875f9-832c-40ba-9cb5-1d4df7fecb67" podNamespace="calico-system" podName="calico-node-gpgtm"
Feb 13 16:03:16.462582 kubelet[2632]: E0213 16:03:16.460596 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156"
Feb 13 16:03:16.476510 systemd[1]: Created slice kubepods-besteffort-pod1a3875f9_832c_40ba_9cb5_1d4df7fecb67.slice - libcontainer container kubepods-besteffort-pod1a3875f9_832c_40ba_9cb5_1d4df7fecb67.slice.
Feb 13 16:03:16.478116 kubelet[2632]: I0213 16:03:16.478094 2632 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 16:03:16.486458 kubelet[2632]: I0213 16:03:16.486427 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a471907f-254d-4cb5-b3e1-26f5a10be156-varrun\") pod \"csi-node-driver-7djmb\" (UID: \"a471907f-254d-4cb5-b3e1-26f5a10be156\") " pod="calico-system/csi-node-driver-7djmb"
Feb 13 16:03:16.486568 kubelet[2632]: I0213 16:03:16.486469 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a471907f-254d-4cb5-b3e1-26f5a10be156-kubelet-dir\") pod \"csi-node-driver-7djmb\" (UID: \"a471907f-254d-4cb5-b3e1-26f5a10be156\") " pod="calico-system/csi-node-driver-7djmb"
Feb 13 16:03:16.486568 kubelet[2632]: I0213 16:03:16.486498 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a471907f-254d-4cb5-b3e1-26f5a10be156-registration-dir\") pod \"csi-node-driver-7djmb\" (UID: \"a471907f-254d-4cb5-b3e1-26f5a10be156\") " pod="calico-system/csi-node-driver-7djmb"
Feb 13 16:03:16.486568 kubelet[2632]: I0213 16:03:16.486540 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd68v\" (UniqueName: \"kubernetes.io/projected/a471907f-254d-4cb5-b3e1-26f5a10be156-kube-api-access-cd68v\") pod \"csi-node-driver-7djmb\" (UID: \"a471907f-254d-4cb5-b3e1-26f5a10be156\") " pod="calico-system/csi-node-driver-7djmb"
Feb 13 16:03:16.486693 kubelet[2632]: I0213 16:03:16.486577 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cb2e15b-c20d-4633-8c94-ff37444f28ff-xtables-lock\") pod \"kube-proxy-59cps\" (UID: \"8cb2e15b-c20d-4633-8c94-ff37444f28ff\") " pod="kube-system/kube-proxy-59cps"
Feb 13 16:03:16.486693 kubelet[2632]: I0213 16:03:16.486606 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-xtables-lock\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.486693 kubelet[2632]: I0213 16:03:16.486637 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-flexvol-driver-host\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.486693 kubelet[2632]: I0213 16:03:16.486666 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56d9s\" (UniqueName: \"kubernetes.io/projected/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-kube-api-access-56d9s\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.486850 kubelet[2632]: I0213 16:03:16.486696 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8cb2e15b-c20d-4633-8c94-ff37444f28ff-kube-proxy\") pod \"kube-proxy-59cps\" (UID: \"8cb2e15b-c20d-4633-8c94-ff37444f28ff\") " pod="kube-system/kube-proxy-59cps"
Feb 13 16:03:16.486850 kubelet[2632]: I0213 16:03:16.486724 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cb2e15b-c20d-4633-8c94-ff37444f28ff-lib-modules\") pod \"kube-proxy-59cps\" (UID: \"8cb2e15b-c20d-4633-8c94-ff37444f28ff\") " pod="kube-system/kube-proxy-59cps"
Feb 13 16:03:16.486850 kubelet[2632]: I0213 16:03:16.486752 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-tigera-ca-bundle\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.486850 kubelet[2632]: I0213 16:03:16.486783 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-var-run-calico\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.486850 kubelet[2632]: I0213 16:03:16.486813 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-var-lib-calico\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.487038 kubelet[2632]: I0213 16:03:16.486842 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-cni-log-dir\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.487038 kubelet[2632]: I0213 16:03:16.486872 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a471907f-254d-4cb5-b3e1-26f5a10be156-socket-dir\") pod \"csi-node-driver-7djmb\" (UID: \"a471907f-254d-4cb5-b3e1-26f5a10be156\") " pod="calico-system/csi-node-driver-7djmb"
Feb 13 16:03:16.487038 kubelet[2632]: I0213 16:03:16.486902 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbbt\" (UniqueName: \"kubernetes.io/projected/8cb2e15b-c20d-4633-8c94-ff37444f28ff-kube-api-access-jrbbt\") pod \"kube-proxy-59cps\" (UID: \"8cb2e15b-c20d-4633-8c94-ff37444f28ff\") " pod="kube-system/kube-proxy-59cps"
Feb 13 16:03:16.487038 kubelet[2632]: I0213 16:03:16.486931 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-lib-modules\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.487038 kubelet[2632]: I0213 16:03:16.486961 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-node-certs\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.487219 kubelet[2632]: I0213 16:03:16.486991 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-cni-bin-dir\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.487219 kubelet[2632]: I0213 16:03:16.487019 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-policysync\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.487219 kubelet[2632]: I0213 16:03:16.487047 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1a3875f9-832c-40ba-9cb5-1d4df7fecb67-cni-net-dir\") pod \"calico-node-gpgtm\" (UID: \"1a3875f9-832c-40ba-9cb5-1d4df7fecb67\") " pod="calico-system/calico-node-gpgtm"
Feb 13 16:03:16.491018 systemd[1]: Created slice kubepods-besteffort-pod8cb2e15b_c20d_4633_8c94_ff37444f28ff.slice - libcontainer container kubepods-besteffort-pod8cb2e15b_c20d_4633_8c94_ff37444f28ff.slice.
Feb 13 16:03:16.589235 kubelet[2632]: E0213 16:03:16.589029 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.589235 kubelet[2632]: W0213 16:03:16.589079 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.589235 kubelet[2632]: E0213 16:03:16.589109 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.589661 kubelet[2632]: E0213 16:03:16.589442 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.589661 kubelet[2632]: W0213 16:03:16.589455 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.589661 kubelet[2632]: E0213 16:03:16.589502 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.589879 kubelet[2632]: E0213 16:03:16.589820 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.589879 kubelet[2632]: W0213 16:03:16.589834 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.590057 kubelet[2632]: E0213 16:03:16.589866 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.590165 kubelet[2632]: E0213 16:03:16.590125 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.590165 kubelet[2632]: W0213 16:03:16.590158 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.590276 kubelet[2632]: E0213 16:03:16.590187 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.590523 kubelet[2632]: E0213 16:03:16.590500 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.590523 kubelet[2632]: W0213 16:03:16.590519 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.590707 kubelet[2632]: E0213 16:03:16.590641 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.590887 kubelet[2632]: E0213 16:03:16.590867 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.590887 kubelet[2632]: W0213 16:03:16.590884 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.591075 kubelet[2632]: E0213 16:03:16.591003 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.591181 kubelet[2632]: E0213 16:03:16.591162 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.591181 kubelet[2632]: W0213 16:03:16.591177 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.591403 kubelet[2632]: E0213 16:03:16.591215 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.591576 kubelet[2632]: E0213 16:03:16.591417 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.591576 kubelet[2632]: W0213 16:03:16.591428 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.591576 kubelet[2632]: E0213 16:03:16.591470 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.591849 kubelet[2632]: E0213 16:03:16.591677 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.591849 kubelet[2632]: W0213 16:03:16.591688 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.591849 kubelet[2632]: E0213 16:03:16.591793 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.592090 kubelet[2632]: E0213 16:03:16.591956 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.592090 kubelet[2632]: W0213 16:03:16.591968 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.592090 kubelet[2632]: E0213 16:03:16.592073 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.592306 kubelet[2632]: E0213 16:03:16.592231 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.592306 kubelet[2632]: W0213 16:03:16.592242 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.592486 kubelet[2632]: E0213 16:03:16.592346 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.592575 kubelet[2632]: E0213 16:03:16.592500 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.592575 kubelet[2632]: W0213 16:03:16.592511 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.592707 kubelet[2632]: E0213 16:03:16.592646 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.592840 kubelet[2632]: E0213 16:03:16.592815 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.592931 kubelet[2632]: W0213 16:03:16.592832 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.593215 kubelet[2632]: E0213 16:03:16.593185 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.593215 kubelet[2632]: E0213 16:03:16.593194 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.593396 kubelet[2632]: W0213 16:03:16.593223 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.593396 kubelet[2632]: E0213 16:03:16.593286 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.593981 kubelet[2632]: E0213 16:03:16.593650 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.593981 kubelet[2632]: W0213 16:03:16.593663 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.593981 kubelet[2632]: E0213 16:03:16.593789 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.593981 kubelet[2632]: E0213 16:03:16.593960 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.593981 kubelet[2632]: W0213 16:03:16.593971 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.594229 kubelet[2632]: E0213 16:03:16.594060 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.594229 kubelet[2632]: E0213 16:03:16.594226 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.594323 kubelet[2632]: W0213 16:03:16.594236 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.594323 kubelet[2632]: E0213 16:03:16.594320 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.594487 kubelet[2632]: E0213 16:03:16.594473 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.594487 kubelet[2632]: W0213 16:03:16.594482 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.594608 kubelet[2632]: E0213 16:03:16.594573 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.594846 kubelet[2632]: E0213 16:03:16.594733 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.594846 kubelet[2632]: W0213 16:03:16.594746 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.594846 kubelet[2632]: E0213 16:03:16.594832 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.595049 kubelet[2632]: E0213 16:03:16.594979 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.595049 kubelet[2632]: W0213 16:03:16.594989 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.595155 kubelet[2632]: E0213 16:03:16.595075 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.595220 kubelet[2632]: E0213 16:03:16.595194 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.595220 kubelet[2632]: W0213 16:03:16.595203 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.595354 kubelet[2632]: E0213 16:03:16.595286 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.595443 kubelet[2632]: E0213 16:03:16.595430 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.595495 kubelet[2632]: W0213 16:03:16.595444 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.595643 kubelet[2632]: E0213 16:03:16.595548 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.595744 kubelet[2632]: E0213 16:03:16.595680 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.595744 kubelet[2632]: W0213 16:03:16.595690 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.595744 kubelet[2632]: E0213 16:03:16.595722 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.595915 kubelet[2632]: E0213 16:03:16.595882 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.595915 kubelet[2632]: W0213 16:03:16.595892 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.595992 kubelet[2632]: E0213 16:03:16.595979 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.596118 kubelet[2632]: E0213 16:03:16.596102 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.596118 kubelet[2632]: W0213 16:03:16.596114 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.596270 kubelet[2632]: E0213 16:03:16.596179 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.596373 kubelet[2632]: E0213 16:03:16.596315 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.596373 kubelet[2632]: W0213 16:03:16.596324 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.596373 kubelet[2632]: E0213 16:03:16.596348 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.596522 kubelet[2632]: E0213 16:03:16.596511 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.596522 kubelet[2632]: W0213 16:03:16.596519 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.597021 kubelet[2632]: E0213 16:03:16.596561 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.597021 kubelet[2632]: E0213 16:03:16.596706 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.597021 kubelet[2632]: W0213 16:03:16.596714 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.597021 kubelet[2632]: E0213 16:03:16.596891 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.597021 kubelet[2632]: W0213 16:03:16.596902 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.597694 kubelet[2632]: E0213 16:03:16.597675 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.597694 kubelet[2632]: W0213 16:03:16.597693 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.599045 kubelet[2632]: E0213 16:03:16.598684 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.599045 kubelet[2632]: W0213 16:03:16.598701 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.599045 kubelet[2632]: E0213 16:03:16.598883 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.599045 kubelet[2632]: W0213 16:03:16.598892 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.599317 kubelet[2632]: E0213 16:03:16.599067 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.599317 kubelet[2632]: W0213 16:03:16.599076 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.599317 kubelet[2632]: E0213 16:03:16.599281 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.599317 kubelet[2632]: W0213 16:03:16.599290 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.599489 kubelet[2632]: E0213 16:03:16.599471 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.599546 kubelet[2632]: W0213 16:03:16.599490 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.599994 kubelet[2632]: E0213 16:03:16.599632 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.599994 kubelet[2632]: E0213 16:03:16.599741 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.599994 kubelet[2632]: E0213 16:03:16.599774 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.599994 kubelet[2632]: E0213 16:03:16.599823 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:16.599994 kubelet[2632]: W0213 16:03:16.599832 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:16.599994 kubelet[2632]: E0213 16:03:16.599847 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.599994 kubelet[2632]: E0213 16:03:16.599867 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.599994 kubelet[2632]: E0213 16:03:16.599886 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:16.599994 kubelet[2632]: E0213 16:03:16.599937 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:16.600371 kubelet[2632]: E0213 16:03:16.600039 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:16.601881 kubelet[2632]: E0213 16:03:16.601735 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.601881 kubelet[2632]: W0213 16:03:16.601750 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.601881 kubelet[2632]: E0213 16:03:16.601768 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:16.606610 kubelet[2632]: E0213 16:03:16.606593 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.611558 kubelet[2632]: W0213 16:03:16.610318 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.611558 kubelet[2632]: E0213 16:03:16.606707 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:16.611558 kubelet[2632]: E0213 16:03:16.610437 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:16.611558 kubelet[2632]: E0213 16:03:16.610650 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.611558 kubelet[2632]: W0213 16:03:16.610662 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.611558 kubelet[2632]: E0213 16:03:16.610757 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:16.611558 kubelet[2632]: E0213 16:03:16.611265 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.611558 kubelet[2632]: W0213 16:03:16.611277 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.611558 kubelet[2632]: E0213 16:03:16.611523 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.611558 kubelet[2632]: W0213 16:03:16.611552 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.612395 kubelet[2632]: E0213 16:03:16.611638 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:16.612395 kubelet[2632]: E0213 16:03:16.611812 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:16.612395 kubelet[2632]: E0213 16:03:16.611907 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.612395 kubelet[2632]: W0213 16:03:16.611918 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.612395 kubelet[2632]: E0213 16:03:16.612026 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:16.612395 kubelet[2632]: E0213 16:03:16.612157 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.612395 kubelet[2632]: W0213 16:03:16.612167 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.612395 kubelet[2632]: E0213 16:03:16.612210 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:16.612746 kubelet[2632]: E0213 16:03:16.612435 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.612746 kubelet[2632]: W0213 16:03:16.612444 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.612746 kubelet[2632]: E0213 16:03:16.612496 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:16.612746 kubelet[2632]: E0213 16:03:16.612707 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.612746 kubelet[2632]: W0213 16:03:16.612716 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.612932 kubelet[2632]: E0213 16:03:16.612765 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:16.618546 kubelet[2632]: E0213 16:03:16.613429 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.618546 kubelet[2632]: W0213 16:03:16.613444 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.618546 kubelet[2632]: E0213 16:03:16.613658 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.618546 kubelet[2632]: W0213 16:03:16.613669 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.618546 kubelet[2632]: E0213 16:03:16.613757 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:16.618546 kubelet[2632]: E0213 16:03:16.613797 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:16.618546 kubelet[2632]: E0213 16:03:16.614680 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.618546 kubelet[2632]: W0213 16:03:16.614691 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.618546 kubelet[2632]: E0213 16:03:16.614785 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:16.618546 kubelet[2632]: E0213 16:03:16.616717 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.618945 kubelet[2632]: W0213 16:03:16.616732 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.618945 kubelet[2632]: E0213 16:03:16.616753 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:16.625754 kubelet[2632]: E0213 16:03:16.625734 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:16.625754 kubelet[2632]: W0213 16:03:16.625753 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:16.625889 kubelet[2632]: E0213 16:03:16.625868 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:16.790477 containerd[1702]: time="2025-02-13T16:03:16.790312587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gpgtm,Uid:1a3875f9-832c-40ba-9cb5-1d4df7fecb67,Namespace:calico-system,Attempt:0,}" Feb 13 16:03:16.795296 containerd[1702]: time="2025-02-13T16:03:16.794859005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59cps,Uid:8cb2e15b-c20d-4633-8c94-ff37444f28ff,Namespace:kube-system,Attempt:0,}" Feb 13 16:03:17.455568 kubelet[2632]: E0213 16:03:17.455510 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:17.556506 kubelet[2632]: E0213 16:03:17.556470 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:17.557109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3674788155.mount: Deactivated successfully. 
Feb 13 16:03:17.587683 containerd[1702]: time="2025-02-13T16:03:17.587631767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:03:17.593871 containerd[1702]: time="2025-02-13T16:03:17.593796727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 16:03:17.598223 containerd[1702]: time="2025-02-13T16:03:17.598179441Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:03:17.601498 containerd[1702]: time="2025-02-13T16:03:17.601462226Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:03:17.603884 containerd[1702]: time="2025-02-13T16:03:17.603838888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:03:17.607757 containerd[1702]: time="2025-02-13T16:03:17.607709088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:03:17.609144 containerd[1702]: time="2025-02-13T16:03:17.608567810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 818.11682ms" Feb 13 16:03:17.613682 containerd[1702]: 
time="2025-02-13T16:03:17.613648342Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 818.652534ms" Feb 13 16:03:18.288631 containerd[1702]: time="2025-02-13T16:03:18.288218439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:03:18.288631 containerd[1702]: time="2025-02-13T16:03:18.288371743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:03:18.288631 containerd[1702]: time="2025-02-13T16:03:18.288394343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:03:18.288631 containerd[1702]: time="2025-02-13T16:03:18.288491346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:03:18.291641 containerd[1702]: time="2025-02-13T16:03:18.287857430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:03:18.291641 containerd[1702]: time="2025-02-13T16:03:18.291126714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:03:18.291641 containerd[1702]: time="2025-02-13T16:03:18.291143515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:03:18.291641 containerd[1702]: time="2025-02-13T16:03:18.291235817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:03:18.456340 kubelet[2632]: E0213 16:03:18.456254 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:18.612797 systemd[1]: run-containerd-runc-k8s.io-1d94a7c08c7615412c8ffee4f7923aae9466c617a4ec5f4ab186c45699688283-runc.R18mA6.mount: Deactivated successfully. Feb 13 16:03:18.622879 systemd[1]: Started cri-containerd-1d94a7c08c7615412c8ffee4f7923aae9466c617a4ec5f4ab186c45699688283.scope - libcontainer container 1d94a7c08c7615412c8ffee4f7923aae9466c617a4ec5f4ab186c45699688283. Feb 13 16:03:18.625329 systemd[1]: Started cri-containerd-beff613bd2f893e022220cdc18d393cb11c73e09a71fd47834b3476b746c6a63.scope - libcontainer container beff613bd2f893e022220cdc18d393cb11c73e09a71fd47834b3476b746c6a63. Feb 13 16:03:18.662904 containerd[1702]: time="2025-02-13T16:03:18.662291341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59cps,Uid:8cb2e15b-c20d-4633-8c94-ff37444f28ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"beff613bd2f893e022220cdc18d393cb11c73e09a71fd47834b3476b746c6a63\"" Feb 13 16:03:18.667042 containerd[1702]: time="2025-02-13T16:03:18.666587453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gpgtm,Uid:1a3875f9-832c-40ba-9cb5-1d4df7fecb67,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d94a7c08c7615412c8ffee4f7923aae9466c617a4ec5f4ab186c45699688283\"" Feb 13 16:03:18.667477 containerd[1702]: time="2025-02-13T16:03:18.667449675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 16:03:19.456910 kubelet[2632]: E0213 16:03:19.456846 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:19.555429 kubelet[2632]: E0213 16:03:19.555390 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:19.843873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441867295.mount: Deactivated successfully. Feb 13 16:03:20.301578 containerd[1702]: time="2025-02-13T16:03:20.301395356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:20.306393 containerd[1702]: time="2025-02-13T16:03:20.306335284Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620600" Feb 13 16:03:20.310543 containerd[1702]: time="2025-02-13T16:03:20.310284587Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:20.318706 containerd[1702]: time="2025-02-13T16:03:20.318677604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:20.319431 containerd[1702]: time="2025-02-13T16:03:20.319253219Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.651765043s" Feb 13 16:03:20.319431 containerd[1702]: time="2025-02-13T16:03:20.319290320Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 16:03:20.320387 containerd[1702]: 
time="2025-02-13T16:03:20.319992338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 16:03:20.321522 containerd[1702]: time="2025-02-13T16:03:20.321491977Z" level=info msg="CreateContainer within sandbox \"beff613bd2f893e022220cdc18d393cb11c73e09a71fd47834b3476b746c6a63\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 16:03:20.373755 containerd[1702]: time="2025-02-13T16:03:20.373701732Z" level=info msg="CreateContainer within sandbox \"beff613bd2f893e022220cdc18d393cb11c73e09a71fd47834b3476b746c6a63\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc3ce36ef4e1db7dc8ab56a040ab2128687fe49a81c931659fe3dd50546f0781\"" Feb 13 16:03:20.374601 containerd[1702]: time="2025-02-13T16:03:20.374504852Z" level=info msg="StartContainer for \"cc3ce36ef4e1db7dc8ab56a040ab2128687fe49a81c931659fe3dd50546f0781\"" Feb 13 16:03:20.405724 systemd[1]: Started cri-containerd-cc3ce36ef4e1db7dc8ab56a040ab2128687fe49a81c931659fe3dd50546f0781.scope - libcontainer container cc3ce36ef4e1db7dc8ab56a040ab2128687fe49a81c931659fe3dd50546f0781. 
Feb 13 16:03:20.437725 containerd[1702]: time="2025-02-13T16:03:20.437551188Z" level=info msg="StartContainer for \"cc3ce36ef4e1db7dc8ab56a040ab2128687fe49a81c931659fe3dd50546f0781\" returns successfully" Feb 13 16:03:20.457985 kubelet[2632]: E0213 16:03:20.457916 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:20.586257 kubelet[2632]: I0213 16:03:20.586109 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-59cps" podStartSLOduration=4.9327284559999995 podStartE2EDuration="6.58606774s" podCreationTimestamp="2025-02-13 16:03:14 +0000 UTC" firstStartedPulling="2025-02-13 16:03:18.666525151 +0000 UTC m=+4.749046143" lastFinishedPulling="2025-02-13 16:03:20.319864435 +0000 UTC m=+6.402385427" observedRunningTime="2025-02-13 16:03:20.585507325 +0000 UTC m=+6.668028317" watchObservedRunningTime="2025-02-13 16:03:20.58606774 +0000 UTC m=+6.668588732" Feb 13 16:03:20.610759 kubelet[2632]: E0213 16:03:20.610723 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:20.610759 kubelet[2632]: W0213 16:03:20.610752 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:20.611069 kubelet[2632]: E0213 16:03:20.610782 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:20.611312 kubelet[2632]: E0213 16:03:20.611183 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:20.611312 kubelet[2632]: W0213 16:03:20.611199 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:20.611312 kubelet[2632]: E0213 16:03:20.611218 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:20.611814 kubelet[2632]: E0213 16:03:20.611625 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:20.611814 kubelet[2632]: W0213 16:03:20.611641 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:20.611814 kubelet[2632]: E0213 16:03:20.611657 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:20.612230 kubelet[2632]: E0213 16:03:20.612027 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:20.612230 kubelet[2632]: W0213 16:03:20.612041 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:20.612230 kubelet[2632]: E0213 16:03:20.612057 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:20.612591 kubelet[2632]: E0213 16:03:20.612433 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:20.612591 kubelet[2632]: W0213 16:03:20.612446 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:20.612591 kubelet[2632]: E0213 16:03:20.612469 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:20.613060 kubelet[2632]: E0213 16:03:20.612874 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:20.613060 kubelet[2632]: W0213 16:03:20.612897 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:20.613060 kubelet[2632]: E0213 16:03:20.612916 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:20.613458 kubelet[2632]: E0213 16:03:20.613270 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:20.613458 kubelet[2632]: W0213 16:03:20.613284 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:20.613458 kubelet[2632]: E0213 16:03:20.613301 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:20.613815 kubelet[2632]: E0213 16:03:20.613689 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:20.613815 kubelet[2632]: W0213 16:03:20.613704 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:20.613815 kubelet[2632]: E0213 16:03:20.613720 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:20.614269 kubelet[2632]: E0213 16:03:20.614077 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:20.614269 kubelet[2632]: W0213 16:03:20.614092 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:20.614269 kubelet[2632]: E0213 16:03:20.614109 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 16:03:20.625854 kubelet[2632]: E0213 16:03:20.625798 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:20.625854 kubelet[2632]: W0213 16:03:20.625813 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:20.625854 kubelet[2632]: E0213 16:03:20.625830 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:03:21.458137 kubelet[2632]: E0213 16:03:21.458089 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:03:21.556002 kubelet[2632]: E0213 16:03:21.555951 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156"
Feb 13 16:03:21.628371 kubelet[2632]: E0213 16:03:21.628325 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:03:21.628371 kubelet[2632]: W0213 16:03:21.628358 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:03:21.628716 kubelet[2632]: E0213 16:03:21.628392 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:03:21.733562 kubelet[2632]: E0213 16:03:21.733202 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:03:21.733562 kubelet[2632]: W0213 16:03:21.733212 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:03:21.733562 kubelet[2632]: E0213 16:03:21.733227 2632 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:03:21.871707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169539879.mount: Deactivated successfully. Feb 13 16:03:22.030020 containerd[1702]: time="2025-02-13T16:03:22.029843188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:22.034802 containerd[1702]: time="2025-02-13T16:03:22.034729415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 16:03:22.041106 containerd[1702]: time="2025-02-13T16:03:22.041072079Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:22.046502 containerd[1702]: time="2025-02-13T16:03:22.046447819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:22.047607 containerd[1702]: time="2025-02-13T16:03:22.047106536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with 
image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.727066696s" Feb 13 16:03:22.047607 containerd[1702]: time="2025-02-13T16:03:22.047145537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 16:03:22.049381 containerd[1702]: time="2025-02-13T16:03:22.049354494Z" level=info msg="CreateContainer within sandbox \"1d94a7c08c7615412c8ffee4f7923aae9466c617a4ec5f4ab186c45699688283\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 16:03:22.102680 containerd[1702]: time="2025-02-13T16:03:22.102626776Z" level=info msg="CreateContainer within sandbox \"1d94a7c08c7615412c8ffee4f7923aae9466c617a4ec5f4ab186c45699688283\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"32faaf9fa51dc608e85411f6024aaf2b6dcdb0a13ddf2d91967eb803ee55f79e\"" Feb 13 16:03:22.103476 containerd[1702]: time="2025-02-13T16:03:22.103270693Z" level=info msg="StartContainer for \"32faaf9fa51dc608e85411f6024aaf2b6dcdb0a13ddf2d91967eb803ee55f79e\"" Feb 13 16:03:22.139710 systemd[1]: Started cri-containerd-32faaf9fa51dc608e85411f6024aaf2b6dcdb0a13ddf2d91967eb803ee55f79e.scope - libcontainer container 32faaf9fa51dc608e85411f6024aaf2b6dcdb0a13ddf2d91967eb803ee55f79e. Feb 13 16:03:22.178844 systemd[1]: cri-containerd-32faaf9fa51dc608e85411f6024aaf2b6dcdb0a13ddf2d91967eb803ee55f79e.scope: Deactivated successfully. 
Feb 13 16:03:22.182681 containerd[1702]: time="2025-02-13T16:03:22.182603650Z" level=info msg="StartContainer for \"32faaf9fa51dc608e85411f6024aaf2b6dcdb0a13ddf2d91967eb803ee55f79e\" returns successfully" Feb 13 16:03:22.458370 kubelet[2632]: E0213 16:03:22.458213 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:22.836511 systemd[1]: run-containerd-runc-k8s.io-32faaf9fa51dc608e85411f6024aaf2b6dcdb0a13ddf2d91967eb803ee55f79e-runc.RtNyHW.mount: Deactivated successfully. Feb 13 16:03:22.836706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32faaf9fa51dc608e85411f6024aaf2b6dcdb0a13ddf2d91967eb803ee55f79e-rootfs.mount: Deactivated successfully. Feb 13 16:03:23.077034 containerd[1702]: time="2025-02-13T16:03:23.076948848Z" level=info msg="shim disconnected" id=32faaf9fa51dc608e85411f6024aaf2b6dcdb0a13ddf2d91967eb803ee55f79e namespace=k8s.io Feb 13 16:03:23.077034 containerd[1702]: time="2025-02-13T16:03:23.077023650Z" level=warning msg="cleaning up after shim disconnected" id=32faaf9fa51dc608e85411f6024aaf2b6dcdb0a13ddf2d91967eb803ee55f79e namespace=k8s.io Feb 13 16:03:23.077034 containerd[1702]: time="2025-02-13T16:03:23.077036450Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:03:23.458433 kubelet[2632]: E0213 16:03:23.458364 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:23.556075 kubelet[2632]: E0213 16:03:23.555979 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:23.582482 containerd[1702]: time="2025-02-13T16:03:23.582355765Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 16:03:24.459304 kubelet[2632]: E0213 16:03:24.459229 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:25.460234 kubelet[2632]: E0213 16:03:25.460172 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:25.555705 kubelet[2632]: E0213 16:03:25.555642 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:26.461002 kubelet[2632]: E0213 16:03:26.460956 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:27.461689 kubelet[2632]: E0213 16:03:27.461612 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:27.555862 kubelet[2632]: E0213 16:03:27.555439 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:27.594124 containerd[1702]: time="2025-02-13T16:03:27.594063522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:27.597853 containerd[1702]: time="2025-02-13T16:03:27.597788024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 16:03:27.601498 containerd[1702]: 
time="2025-02-13T16:03:27.601445323Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:27.605711 containerd[1702]: time="2025-02-13T16:03:27.605654238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:27.606518 containerd[1702]: time="2025-02-13T16:03:27.606382258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.023974792s" Feb 13 16:03:27.606518 containerd[1702]: time="2025-02-13T16:03:27.606418159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 16:03:27.608417 containerd[1702]: time="2025-02-13T16:03:27.608390313Z" level=info msg="CreateContainer within sandbox \"1d94a7c08c7615412c8ffee4f7923aae9466c617a4ec5f4ab186c45699688283\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 16:03:27.653757 containerd[1702]: time="2025-02-13T16:03:27.653696746Z" level=info msg="CreateContainer within sandbox \"1d94a7c08c7615412c8ffee4f7923aae9466c617a4ec5f4ab186c45699688283\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d558e7a1c29147c8b815c0ba1decbd1da5f884c38606b51c70de9bb6ec439c4e\"" Feb 13 16:03:27.654441 containerd[1702]: time="2025-02-13T16:03:27.654356864Z" level=info msg="StartContainer for \"d558e7a1c29147c8b815c0ba1decbd1da5f884c38606b51c70de9bb6ec439c4e\"" Feb 13 16:03:27.694713 systemd[1]: Started 
cri-containerd-d558e7a1c29147c8b815c0ba1decbd1da5f884c38606b51c70de9bb6ec439c4e.scope - libcontainer container d558e7a1c29147c8b815c0ba1decbd1da5f884c38606b51c70de9bb6ec439c4e. Feb 13 16:03:27.728908 containerd[1702]: time="2025-02-13T16:03:27.727418454Z" level=info msg="StartContainer for \"d558e7a1c29147c8b815c0ba1decbd1da5f884c38606b51c70de9bb6ec439c4e\" returns successfully" Feb 13 16:03:28.463423 kubelet[2632]: E0213 16:03:28.463364 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:29.125167 containerd[1702]: time="2025-02-13T16:03:29.125061019Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:03:29.128060 systemd[1]: cri-containerd-d558e7a1c29147c8b815c0ba1decbd1da5f884c38606b51c70de9bb6ec439c4e.scope: Deactivated successfully. Feb 13 16:03:29.148975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d558e7a1c29147c8b815c0ba1decbd1da5f884c38606b51c70de9bb6ec439c4e-rootfs.mount: Deactivated successfully. Feb 13 16:03:29.199843 kubelet[2632]: I0213 16:03:29.199809 2632 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 16:03:29.658973 kubelet[2632]: E0213 16:03:29.464014 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:29.561629 systemd[1]: Created slice kubepods-besteffort-poda471907f_254d_4cb5_b3e1_26f5a10be156.slice - libcontainer container kubepods-besteffort-poda471907f_254d_4cb5_b3e1_26f5a10be156.slice. 
Feb 13 16:03:29.662616 containerd[1702]: time="2025-02-13T16:03:29.661346924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:0,}" Feb 13 16:03:30.464230 kubelet[2632]: E0213 16:03:30.464166 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:30.734108 kubelet[2632]: I0213 16:03:30.733942 2632 topology_manager.go:215] "Topology Admit Handler" podUID="3fac037c-f117-4e6d-ad0c-8c56076999ef" podNamespace="default" podName="nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:30.741939 systemd[1]: Created slice kubepods-besteffort-pod3fac037c_f117_4e6d_ad0c_8c56076999ef.slice - libcontainer container kubepods-besteffort-pod3fac037c_f117_4e6d_ad0c_8c56076999ef.slice. Feb 13 16:03:30.770705 containerd[1702]: time="2025-02-13T16:03:30.770621729Z" level=info msg="shim disconnected" id=d558e7a1c29147c8b815c0ba1decbd1da5f884c38606b51c70de9bb6ec439c4e namespace=k8s.io Feb 13 16:03:30.770705 containerd[1702]: time="2025-02-13T16:03:30.770680431Z" level=warning msg="cleaning up after shim disconnected" id=d558e7a1c29147c8b815c0ba1decbd1da5f884c38606b51c70de9bb6ec439c4e namespace=k8s.io Feb 13 16:03:30.770705 containerd[1702]: time="2025-02-13T16:03:30.770693331Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:03:30.835305 containerd[1702]: time="2025-02-13T16:03:30.835246657Z" level=error msg="Failed to destroy network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:30.837555 containerd[1702]: time="2025-02-13T16:03:30.835654567Z" level=error msg="encountered an error cleaning up failed sandbox 
\"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:30.837555 containerd[1702]: time="2025-02-13T16:03:30.835743170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:30.837725 kubelet[2632]: E0213 16:03:30.836036 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:30.837725 kubelet[2632]: E0213 16:03:30.836119 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:30.837725 kubelet[2632]: E0213 16:03:30.836148 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:30.837872 kubelet[2632]: E0213 16:03:30.836221 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:30.838431 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573-shm.mount: Deactivated successfully. 
Feb 13 16:03:30.889298 kubelet[2632]: I0213 16:03:30.889237 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtzzc\" (UniqueName: \"kubernetes.io/projected/3fac037c-f117-4e6d-ad0c-8c56076999ef-kube-api-access-mtzzc\") pod \"nginx-deployment-6d5f899847-qrjqj\" (UID: \"3fac037c-f117-4e6d-ad0c-8c56076999ef\") " pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:31.045394 containerd[1702]: time="2025-02-13T16:03:31.045256747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:0,}" Feb 13 16:03:31.134831 containerd[1702]: time="2025-02-13T16:03:31.134772202Z" level=error msg="Failed to destroy network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.135174 containerd[1702]: time="2025-02-13T16:03:31.135136611Z" level=error msg="encountered an error cleaning up failed sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.135255 containerd[1702]: time="2025-02-13T16:03:31.135216113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.135544 kubelet[2632]: E0213 16:03:31.135502 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.135660 kubelet[2632]: E0213 16:03:31.135584 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:31.135660 kubelet[2632]: E0213 16:03:31.135612 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:31.135752 kubelet[2632]: E0213 16:03:31.135695 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qrjqj" podUID="3fac037c-f117-4e6d-ad0c-8c56076999ef" Feb 13 16:03:31.464828 kubelet[2632]: E0213 16:03:31.464775 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:31.598378 kubelet[2632]: I0213 16:03:31.597606 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573" Feb 13 16:03:31.598777 containerd[1702]: time="2025-02-13T16:03:31.598713070Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" Feb 13 16:03:31.599322 containerd[1702]: time="2025-02-13T16:03:31.599073079Z" level=info msg="Ensure that sandbox 19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573 in task-service has been cleanup successfully" Feb 13 16:03:31.599448 containerd[1702]: time="2025-02-13T16:03:31.599333086Z" level=info msg="TearDown network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" successfully" Feb 13 16:03:31.599448 containerd[1702]: time="2025-02-13T16:03:31.599415288Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" returns successfully" Feb 13 16:03:31.600419 containerd[1702]: time="2025-02-13T16:03:31.600066905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:1,}" Feb 13 16:03:31.601885 kubelet[2632]: I0213 16:03:31.601756 2632 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635" Feb 13 16:03:31.601983 containerd[1702]: time="2025-02-13T16:03:31.601852951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 16:03:31.602282 containerd[1702]: time="2025-02-13T16:03:31.602160358Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" Feb 13 16:03:31.602425 containerd[1702]: time="2025-02-13T16:03:31.602403565Z" level=info msg="Ensure that sandbox b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635 in task-service has been cleanup successfully" Feb 13 16:03:31.602705 containerd[1702]: time="2025-02-13T16:03:31.602593270Z" level=info msg="TearDown network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" successfully" Feb 13 16:03:31.602705 containerd[1702]: time="2025-02-13T16:03:31.602615370Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" returns successfully" Feb 13 16:03:31.603200 containerd[1702]: time="2025-02-13T16:03:31.603172384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:1,}" Feb 13 16:03:31.741799 containerd[1702]: time="2025-02-13T16:03:31.741551933Z" level=error msg="Failed to destroy network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.742457 containerd[1702]: time="2025-02-13T16:03:31.742301953Z" level=error msg="encountered an error cleaning up failed sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.742457 containerd[1702]: time="2025-02-13T16:03:31.742395855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.743397 kubelet[2632]: E0213 16:03:31.743333 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.743822 kubelet[2632]: E0213 16:03:31.743410 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:31.743822 kubelet[2632]: E0213 16:03:31.743434 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:31.743822 kubelet[2632]: E0213 16:03:31.743507 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:31.750191 containerd[1702]: time="2025-02-13T16:03:31.750153554Z" level=error msg="Failed to destroy network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.750486 containerd[1702]: time="2025-02-13T16:03:31.750454862Z" level=error msg="encountered an error cleaning up failed sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.750578 containerd[1702]: time="2025-02-13T16:03:31.750526464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:1,} failed, error" 
error="failed to setup network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.750839 kubelet[2632]: E0213 16:03:31.750815 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:31.750932 kubelet[2632]: E0213 16:03:31.750877 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:31.750932 kubelet[2632]: E0213 16:03:31.750918 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:31.751039 kubelet[2632]: E0213 16:03:31.751020 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qrjqj" podUID="3fac037c-f117-4e6d-ad0c-8c56076999ef" Feb 13 16:03:31.776859 systemd[1]: run-netns-cni\x2d2b45fb75\x2d7e71\x2db542\x2dc0b1\x2d86632e57276c.mount: Deactivated successfully. Feb 13 16:03:31.776974 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635-shm.mount: Deactivated successfully. Feb 13 16:03:31.777067 systemd[1]: run-netns-cni\x2d58607d66\x2d2a56\x2dbcce\x2d0235\x2d78806be64229.mount: Deactivated successfully. 
Feb 13 16:03:32.465991 kubelet[2632]: E0213 16:03:32.465936 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:32.605497 kubelet[2632]: I0213 16:03:32.605454 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e" Feb 13 16:03:32.607140 containerd[1702]: time="2025-02-13T16:03:32.606597719Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\"" Feb 13 16:03:32.607140 containerd[1702]: time="2025-02-13T16:03:32.606973028Z" level=info msg="Ensure that sandbox eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e in task-service has been cleanup successfully" Feb 13 16:03:32.609185 kubelet[2632]: I0213 16:03:32.608182 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca" Feb 13 16:03:32.609305 containerd[1702]: time="2025-02-13T16:03:32.608770174Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\"" Feb 13 16:03:32.609305 containerd[1702]: time="2025-02-13T16:03:32.609019781Z" level=info msg="Ensure that sandbox b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca in task-service has been cleanup successfully" Feb 13 16:03:32.609564 containerd[1702]: time="2025-02-13T16:03:32.609478392Z" level=info msg="TearDown network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" successfully" Feb 13 16:03:32.609735 containerd[1702]: time="2025-02-13T16:03:32.609666097Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" returns successfully" Feb 13 16:03:32.610209 containerd[1702]: time="2025-02-13T16:03:32.610101708Z" level=info msg="StopPodSandbox for 
\"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" Feb 13 16:03:32.610838 containerd[1702]: time="2025-02-13T16:03:32.610371315Z" level=info msg="TearDown network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" successfully" Feb 13 16:03:32.610838 containerd[1702]: time="2025-02-13T16:03:32.610411916Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" returns successfully" Feb 13 16:03:32.611259 containerd[1702]: time="2025-02-13T16:03:32.611127135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:2,}" Feb 13 16:03:32.612319 containerd[1702]: time="2025-02-13T16:03:32.612288765Z" level=info msg="TearDown network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" successfully" Feb 13 16:03:32.612319 containerd[1702]: time="2025-02-13T16:03:32.612313365Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" returns successfully" Feb 13 16:03:32.613157 containerd[1702]: time="2025-02-13T16:03:32.612630573Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" Feb 13 16:03:32.613157 containerd[1702]: time="2025-02-13T16:03:32.612727076Z" level=info msg="TearDown network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" successfully" Feb 13 16:03:32.613157 containerd[1702]: time="2025-02-13T16:03:32.612743476Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" returns successfully" Feb 13 16:03:32.613262 systemd[1]: run-netns-cni\x2d66589725\x2d9b9e\x2d7b09\x2d37f9\x2d383e24939229.mount: Deactivated successfully. Feb 13 16:03:32.613434 systemd[1]: run-netns-cni\x2db2b9000f\x2df61b\x2d2ba9\x2d9b76\x2df11142f7a3a3.mount: Deactivated successfully. 
Feb 13 16:03:32.614669 containerd[1702]: time="2025-02-13T16:03:32.614521122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:2,}" Feb 13 16:03:32.794281 containerd[1702]: time="2025-02-13T16:03:32.792069375Z" level=error msg="Failed to destroy network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:32.794281 containerd[1702]: time="2025-02-13T16:03:32.792474386Z" level=error msg="encountered an error cleaning up failed sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:32.794281 containerd[1702]: time="2025-02-13T16:03:32.792588389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:32.794643 kubelet[2632]: E0213 16:03:32.794541 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:32.794643 kubelet[2632]: E0213 16:03:32.794605 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:32.794643 kubelet[2632]: E0213 16:03:32.794641 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:32.796166 kubelet[2632]: E0213 16:03:32.795937 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:32.796449 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd-shm.mount: Deactivated successfully. Feb 13 16:03:32.816184 containerd[1702]: time="2025-02-13T16:03:32.816121892Z" level=error msg="Failed to destroy network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:32.816839 containerd[1702]: time="2025-02-13T16:03:32.816512902Z" level=error msg="encountered an error cleaning up failed sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:32.816839 containerd[1702]: time="2025-02-13T16:03:32.816604904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:32.819935 kubelet[2632]: E0213 16:03:32.818749 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
16:03:32.819935 kubelet[2632]: E0213 16:03:32.818828 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:32.819935 kubelet[2632]: E0213 16:03:32.818858 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:32.818912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7-shm.mount: Deactivated successfully. 
Feb 13 16:03:32.820206 kubelet[2632]: E0213 16:03:32.819261 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qrjqj" podUID="3fac037c-f117-4e6d-ad0c-8c56076999ef" Feb 13 16:03:33.466593 kubelet[2632]: E0213 16:03:33.466501 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:33.612064 kubelet[2632]: I0213 16:03:33.612011 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7" Feb 13 16:03:33.615508 containerd[1702]: time="2025-02-13T16:03:33.612966228Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\"" Feb 13 16:03:33.615508 containerd[1702]: time="2025-02-13T16:03:33.613425840Z" level=info msg="Ensure that sandbox c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7 in task-service has been cleanup successfully" Feb 13 16:03:33.615508 containerd[1702]: time="2025-02-13T16:03:33.614909678Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\"" Feb 13 16:03:33.615508 containerd[1702]: time="2025-02-13T16:03:33.615153884Z" level=info msg="Ensure that sandbox 85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd in task-service 
has been cleanup successfully" Feb 13 16:03:33.616147 kubelet[2632]: I0213 16:03:33.614262 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd" Feb 13 16:03:33.617668 containerd[1702]: time="2025-02-13T16:03:33.617621448Z" level=info msg="TearDown network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" successfully" Feb 13 16:03:33.617668 containerd[1702]: time="2025-02-13T16:03:33.617654648Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" returns successfully" Feb 13 16:03:33.617999 containerd[1702]: time="2025-02-13T16:03:33.617901455Z" level=info msg="TearDown network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" successfully" Feb 13 16:03:33.617999 containerd[1702]: time="2025-02-13T16:03:33.617943756Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" returns successfully" Feb 13 16:03:33.617999 containerd[1702]: time="2025-02-13T16:03:33.617930555Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\"" Feb 13 16:03:33.618195 containerd[1702]: time="2025-02-13T16:03:33.618061059Z" level=info msg="TearDown network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" successfully" Feb 13 16:03:33.618195 containerd[1702]: time="2025-02-13T16:03:33.618075459Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" returns successfully" Feb 13 16:03:33.619001 containerd[1702]: time="2025-02-13T16:03:33.618396567Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\"" Feb 13 16:03:33.619001 containerd[1702]: time="2025-02-13T16:03:33.618489270Z" level=info msg="TearDown network for sandbox 
\"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" successfully" Feb 13 16:03:33.619001 containerd[1702]: time="2025-02-13T16:03:33.618503770Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" returns successfully" Feb 13 16:03:33.619001 containerd[1702]: time="2025-02-13T16:03:33.618582772Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" Feb 13 16:03:33.619001 containerd[1702]: time="2025-02-13T16:03:33.618660474Z" level=info msg="TearDown network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" successfully" Feb 13 16:03:33.619001 containerd[1702]: time="2025-02-13T16:03:33.618674475Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" returns successfully" Feb 13 16:03:33.619785 containerd[1702]: time="2025-02-13T16:03:33.619764803Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" Feb 13 16:03:33.619814 systemd[1]: run-netns-cni\x2d3fa80b3f\x2d8dda\x2d78b3\x2d6789\x2db63434c6a27d.mount: Deactivated successfully. Feb 13 16:03:33.619991 systemd[1]: run-netns-cni\x2d9c74564e\x2d4c80\x2df546\x2d4e49\x2d594c83bb2b5d.mount: Deactivated successfully. 
Feb 13 16:03:33.620087 containerd[1702]: time="2025-02-13T16:03:33.620058910Z" level=info msg="TearDown network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" successfully" Feb 13 16:03:33.620087 containerd[1702]: time="2025-02-13T16:03:33.620076411Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" returns successfully" Feb 13 16:03:33.620620 containerd[1702]: time="2025-02-13T16:03:33.620388619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:3,}" Feb 13 16:03:33.620620 containerd[1702]: time="2025-02-13T16:03:33.620431120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:3,}" Feb 13 16:03:34.452710 kubelet[2632]: E0213 16:03:34.452641 2632 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:34.466915 kubelet[2632]: E0213 16:03:34.466862 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:34.878564 containerd[1702]: time="2025-02-13T16:03:34.877946070Z" level=error msg="Failed to destroy network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:34.879057 containerd[1702]: time="2025-02-13T16:03:34.878568186Z" level=error msg="encountered an error cleaning up failed sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:34.879057 containerd[1702]: time="2025-02-13T16:03:34.878647588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:34.879198 kubelet[2632]: E0213 16:03:34.878981 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:34.879198 kubelet[2632]: E0213 16:03:34.879046 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:34.879198 kubelet[2632]: E0213 16:03:34.879081 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:34.879350 kubelet[2632]: E0213 16:03:34.879151 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qrjqj" podUID="3fac037c-f117-4e6d-ad0c-8c56076999ef" Feb 13 16:03:34.893582 containerd[1702]: time="2025-02-13T16:03:34.893417367Z" level=error msg="Failed to destroy network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:34.894029 containerd[1702]: time="2025-02-13T16:03:34.893992882Z" level=error msg="encountered an error cleaning up failed sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:34.894215 containerd[1702]: time="2025-02-13T16:03:34.894191087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:3,} failed, error" 
error="failed to setup network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:34.894550 kubelet[2632]: E0213 16:03:34.894484 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:34.895837 kubelet[2632]: E0213 16:03:34.895672 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:34.895837 kubelet[2632]: E0213 16:03:34.895718 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:34.896271 kubelet[2632]: E0213 16:03:34.895972 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:35.468148 kubelet[2632]: E0213 16:03:35.468098 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:35.622087 kubelet[2632]: I0213 16:03:35.621407 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50" Feb 13 16:03:35.624045 containerd[1702]: time="2025-02-13T16:03:35.623515491Z" level=info msg="StopPodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\"" Feb 13 16:03:35.624194 containerd[1702]: time="2025-02-13T16:03:35.624048305Z" level=info msg="Ensure that sandbox 72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50 in task-service has been cleanup successfully" Feb 13 16:03:35.624728 containerd[1702]: time="2025-02-13T16:03:35.624697122Z" level=info msg="TearDown network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" successfully" Feb 13 16:03:35.624728 containerd[1702]: time="2025-02-13T16:03:35.624725422Z" level=info msg="StopPodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" returns successfully" Feb 13 16:03:35.625361 containerd[1702]: time="2025-02-13T16:03:35.624989429Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\"" Feb 13 16:03:35.625361 containerd[1702]: 
time="2025-02-13T16:03:35.625077231Z" level=info msg="TearDown network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" successfully" Feb 13 16:03:35.625361 containerd[1702]: time="2025-02-13T16:03:35.625091832Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" returns successfully" Feb 13 16:03:35.625361 containerd[1702]: time="2025-02-13T16:03:35.625330938Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\"" Feb 13 16:03:35.625689 containerd[1702]: time="2025-02-13T16:03:35.625411140Z" level=info msg="TearDown network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" successfully" Feb 13 16:03:35.625689 containerd[1702]: time="2025-02-13T16:03:35.625427140Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" returns successfully" Feb 13 16:03:35.627273 containerd[1702]: time="2025-02-13T16:03:35.626810176Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" Feb 13 16:03:35.627347 containerd[1702]: time="2025-02-13T16:03:35.627049882Z" level=info msg="TearDown network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" successfully" Feb 13 16:03:35.627347 containerd[1702]: time="2025-02-13T16:03:35.627287588Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" returns successfully" Feb 13 16:03:35.627802 containerd[1702]: time="2025-02-13T16:03:35.627774101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:4,}" Feb 13 16:03:35.632331 kubelet[2632]: I0213 16:03:35.631693 2632 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12" Feb 13 16:03:35.633337 containerd[1702]: time="2025-02-13T16:03:35.633313643Z" level=info msg="StopPodSandbox for \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\"" Feb 13 16:03:35.633660 containerd[1702]: time="2025-02-13T16:03:35.633637451Z" level=info msg="Ensure that sandbox 5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12 in task-service has been cleanup successfully" Feb 13 16:03:35.633868 containerd[1702]: time="2025-02-13T16:03:35.633851156Z" level=info msg="TearDown network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" successfully" Feb 13 16:03:35.633950 containerd[1702]: time="2025-02-13T16:03:35.633926058Z" level=info msg="StopPodSandbox for \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" returns successfully" Feb 13 16:03:35.638307 containerd[1702]: time="2025-02-13T16:03:35.638281270Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\"" Feb 13 16:03:35.638383 containerd[1702]: time="2025-02-13T16:03:35.638372272Z" level=info msg="TearDown network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" successfully" Feb 13 16:03:35.638434 containerd[1702]: time="2025-02-13T16:03:35.638387073Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" returns successfully" Feb 13 16:03:35.640049 containerd[1702]: time="2025-02-13T16:03:35.639655905Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\"" Feb 13 16:03:35.640049 containerd[1702]: time="2025-02-13T16:03:35.639745708Z" level=info msg="TearDown network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" successfully" Feb 13 16:03:35.640049 containerd[1702]: time="2025-02-13T16:03:35.639759208Z" level=info msg="StopPodSandbox 
for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" returns successfully" Feb 13 16:03:35.640049 containerd[1702]: time="2025-02-13T16:03:35.640014214Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" Feb 13 16:03:35.640254 containerd[1702]: time="2025-02-13T16:03:35.640096317Z" level=info msg="TearDown network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" successfully" Feb 13 16:03:35.640254 containerd[1702]: time="2025-02-13T16:03:35.640110217Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" returns successfully" Feb 13 16:03:35.641249 containerd[1702]: time="2025-02-13T16:03:35.641082842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:4,}" Feb 13 16:03:35.682973 systemd[1]: run-netns-cni\x2dd6bf82f5\x2d77ba\x2d1579\x2d018f\x2d203629430759.mount: Deactivated successfully. Feb 13 16:03:35.683476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12-shm.mount: Deactivated successfully. Feb 13 16:03:35.683582 systemd[1]: run-netns-cni\x2d3c58aed0\x2db69b\x2d4882\x2dee9c\x2d7089cc0d5357.mount: Deactivated successfully. Feb 13 16:03:35.683663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50-shm.mount: Deactivated successfully. 
Feb 13 16:03:35.800638 containerd[1702]: time="2025-02-13T16:03:35.799972617Z" level=error msg="Failed to destroy network for sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:35.805428 containerd[1702]: time="2025-02-13T16:03:35.805378655Z" level=error msg="encountered an error cleaning up failed sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:35.805587 containerd[1702]: time="2025-02-13T16:03:35.805469558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:35.806879 kubelet[2632]: E0213 16:03:35.806051 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:35.806879 kubelet[2632]: E0213 16:03:35.806137 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:35.806879 kubelet[2632]: E0213 16:03:35.806170 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:35.806091 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f-shm.mount: Deactivated successfully. 
Feb 13 16:03:35.807480 kubelet[2632]: E0213 16:03:35.806246 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qrjqj" podUID="3fac037c-f117-4e6d-ad0c-8c56076999ef" Feb 13 16:03:35.837337 containerd[1702]: time="2025-02-13T16:03:35.836989466Z" level=error msg="Failed to destroy network for sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:35.838096 containerd[1702]: time="2025-02-13T16:03:35.837907790Z" level=error msg="encountered an error cleaning up failed sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:35.838096 containerd[1702]: time="2025-02-13T16:03:35.837985092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:35.838877 kubelet[2632]: E0213 16:03:35.838404 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:35.838877 kubelet[2632]: E0213 16:03:35.838464 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:35.838877 kubelet[2632]: E0213 16:03:35.838490 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:35.839401 kubelet[2632]: E0213 16:03:35.839138 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:35.840610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2-shm.mount: Deactivated successfully. Feb 13 16:03:36.469448 kubelet[2632]: E0213 16:03:36.469389 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:36.637593 kubelet[2632]: I0213 16:03:36.637053 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f" Feb 13 16:03:36.638402 containerd[1702]: time="2025-02-13T16:03:36.638362418Z" level=info msg="StopPodSandbox for \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\"" Feb 13 16:03:36.639430 containerd[1702]: time="2025-02-13T16:03:36.638626725Z" level=info msg="Ensure that sandbox dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f in task-service has been cleanup successfully" Feb 13 16:03:36.639430 containerd[1702]: time="2025-02-13T16:03:36.639387545Z" level=info msg="TearDown network for sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\" successfully" Feb 13 16:03:36.639430 containerd[1702]: time="2025-02-13T16:03:36.639409245Z" level=info msg="StopPodSandbox for \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\" returns successfully" Feb 13 16:03:36.640083 containerd[1702]: time="2025-02-13T16:03:36.639842856Z" level=info 
msg="StopPodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\"" Feb 13 16:03:36.640083 containerd[1702]: time="2025-02-13T16:03:36.639935559Z" level=info msg="TearDown network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" successfully" Feb 13 16:03:36.640083 containerd[1702]: time="2025-02-13T16:03:36.639948459Z" level=info msg="StopPodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" returns successfully" Feb 13 16:03:36.640714 containerd[1702]: time="2025-02-13T16:03:36.640674578Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\"" Feb 13 16:03:36.641083 containerd[1702]: time="2025-02-13T16:03:36.641060788Z" level=info msg="TearDown network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" successfully" Feb 13 16:03:36.641382 containerd[1702]: time="2025-02-13T16:03:36.641303594Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" returns successfully" Feb 13 16:03:36.641812 containerd[1702]: time="2025-02-13T16:03:36.641585101Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\"" Feb 13 16:03:36.641812 containerd[1702]: time="2025-02-13T16:03:36.641668303Z" level=info msg="TearDown network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" successfully" Feb 13 16:03:36.641812 containerd[1702]: time="2025-02-13T16:03:36.641683104Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" returns successfully" Feb 13 16:03:36.642292 containerd[1702]: time="2025-02-13T16:03:36.642131715Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" Feb 13 16:03:36.642292 containerd[1702]: time="2025-02-13T16:03:36.642215917Z" level=info msg="TearDown network for 
sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" successfully" Feb 13 16:03:36.642292 containerd[1702]: time="2025-02-13T16:03:36.642231518Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" returns successfully" Feb 13 16:03:36.643812 containerd[1702]: time="2025-02-13T16:03:36.643781757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:5,}" Feb 13 16:03:36.647260 kubelet[2632]: I0213 16:03:36.647152 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2" Feb 13 16:03:36.647814 containerd[1702]: time="2025-02-13T16:03:36.647742059Z" level=info msg="StopPodSandbox for \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\"" Feb 13 16:03:36.648156 containerd[1702]: time="2025-02-13T16:03:36.648118969Z" level=info msg="Ensure that sandbox 076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2 in task-service has been cleanup successfully" Feb 13 16:03:36.648292 containerd[1702]: time="2025-02-13T16:03:36.648272473Z" level=info msg="TearDown network for sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\" successfully" Feb 13 16:03:36.648421 containerd[1702]: time="2025-02-13T16:03:36.648291473Z" level=info msg="StopPodSandbox for \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\" returns successfully" Feb 13 16:03:36.651967 containerd[1702]: time="2025-02-13T16:03:36.651849064Z" level=info msg="StopPodSandbox for \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\"" Feb 13 16:03:36.651967 containerd[1702]: time="2025-02-13T16:03:36.651935666Z" level=info msg="TearDown network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" successfully" Feb 13 16:03:36.651967 
containerd[1702]: time="2025-02-13T16:03:36.651949067Z" level=info msg="StopPodSandbox for \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" returns successfully" Feb 13 16:03:36.652482 containerd[1702]: time="2025-02-13T16:03:36.652341877Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\"" Feb 13 16:03:36.652482 containerd[1702]: time="2025-02-13T16:03:36.652430379Z" level=info msg="TearDown network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" successfully" Feb 13 16:03:36.652482 containerd[1702]: time="2025-02-13T16:03:36.652444279Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" returns successfully" Feb 13 16:03:36.653004 containerd[1702]: time="2025-02-13T16:03:36.652902591Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\"" Feb 13 16:03:36.653004 containerd[1702]: time="2025-02-13T16:03:36.652984693Z" level=info msg="TearDown network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" successfully" Feb 13 16:03:36.653004 containerd[1702]: time="2025-02-13T16:03:36.652997094Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" returns successfully" Feb 13 16:03:36.656677 containerd[1702]: time="2025-02-13T16:03:36.656650387Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" Feb 13 16:03:36.656766 containerd[1702]: time="2025-02-13T16:03:36.656736590Z" level=info msg="TearDown network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" successfully" Feb 13 16:03:36.656766 containerd[1702]: time="2025-02-13T16:03:36.656752990Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" returns successfully" Feb 13 16:03:36.657393 
containerd[1702]: time="2025-02-13T16:03:36.657274803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:5,}" Feb 13 16:03:36.680791 systemd[1]: run-netns-cni\x2da0726557\x2d26b2\x2d7f66\x2d828f\x2da3a4271c9274.mount: Deactivated successfully. Feb 13 16:03:36.680917 systemd[1]: run-netns-cni\x2daa6d8d2d\x2d0a86\x2d18ce\x2d5df5\x2d0ed04c4cfa52.mount: Deactivated successfully. Feb 13 16:03:36.874088 containerd[1702]: time="2025-02-13T16:03:36.874016162Z" level=error msg="Failed to destroy network for sandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:36.874808 containerd[1702]: time="2025-02-13T16:03:36.874762981Z" level=error msg="encountered an error cleaning up failed sandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:36.875040 containerd[1702]: time="2025-02-13T16:03:36.875010387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:36.875673 kubelet[2632]: E0213 16:03:36.875644 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:36.875892 kubelet[2632]: E0213 16:03:36.875871 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:36.876007 kubelet[2632]: E0213 16:03:36.875997 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:36.876927 kubelet[2632]: E0213 16:03:36.876889 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qrjqj" podUID="3fac037c-f117-4e6d-ad0c-8c56076999ef" Feb 13 16:03:36.878635 containerd[1702]: time="2025-02-13T16:03:36.878595079Z" level=error msg="Failed to destroy network for sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:36.878991 containerd[1702]: time="2025-02-13T16:03:36.878953789Z" level=error msg="encountered an error cleaning up failed sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:36.879971 containerd[1702]: time="2025-02-13T16:03:36.879576405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:36.880058 kubelet[2632]: E0213 16:03:36.879791 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:36.880058 kubelet[2632]: E0213 
16:03:36.879845 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:36.880058 kubelet[2632]: E0213 16:03:36.879875 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:36.880203 kubelet[2632]: E0213 16:03:36.879941 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:37.470976 kubelet[2632]: E0213 16:03:37.470836 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:37.656200 kubelet[2632]: I0213 16:03:37.655662 2632 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498" Feb 13 16:03:37.656931 containerd[1702]: time="2025-02-13T16:03:37.656699435Z" level=info msg="StopPodSandbox for \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\"" Feb 13 16:03:37.657405 containerd[1702]: time="2025-02-13T16:03:37.656958442Z" level=info msg="Ensure that sandbox 119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498 in task-service has been cleanup successfully" Feb 13 16:03:37.657405 containerd[1702]: time="2025-02-13T16:03:37.657229449Z" level=info msg="TearDown network for sandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\" successfully" Feb 13 16:03:37.657405 containerd[1702]: time="2025-02-13T16:03:37.657249649Z" level=info msg="StopPodSandbox for \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\" returns successfully" Feb 13 16:03:37.658775 containerd[1702]: time="2025-02-13T16:03:37.658587883Z" level=info msg="StopPodSandbox for \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\"" Feb 13 16:03:37.658775 containerd[1702]: time="2025-02-13T16:03:37.658678386Z" level=info msg="TearDown network for sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\" successfully" Feb 13 16:03:37.658775 containerd[1702]: time="2025-02-13T16:03:37.658693386Z" level=info msg="StopPodSandbox for \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\" returns successfully" Feb 13 16:03:37.659711 containerd[1702]: time="2025-02-13T16:03:37.659546808Z" level=info msg="StopPodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\"" Feb 13 16:03:37.659711 containerd[1702]: time="2025-02-13T16:03:37.659644110Z" level=info msg="TearDown network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" successfully" Feb 13 16:03:37.659711 
containerd[1702]: time="2025-02-13T16:03:37.659659111Z" level=info msg="StopPodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" returns successfully" Feb 13 16:03:37.660881 containerd[1702]: time="2025-02-13T16:03:37.660806740Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\"" Feb 13 16:03:37.661452 containerd[1702]: time="2025-02-13T16:03:37.661147449Z" level=info msg="TearDown network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" successfully" Feb 13 16:03:37.661452 containerd[1702]: time="2025-02-13T16:03:37.661167550Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" returns successfully" Feb 13 16:03:37.661662 containerd[1702]: time="2025-02-13T16:03:37.661561460Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\"" Feb 13 16:03:37.661662 containerd[1702]: time="2025-02-13T16:03:37.661656662Z" level=info msg="TearDown network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" successfully" Feb 13 16:03:37.661742 containerd[1702]: time="2025-02-13T16:03:37.661670362Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" returns successfully" Feb 13 16:03:37.662556 containerd[1702]: time="2025-02-13T16:03:37.662384881Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" Feb 13 16:03:37.662556 containerd[1702]: time="2025-02-13T16:03:37.662467383Z" level=info msg="TearDown network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" successfully" Feb 13 16:03:37.662556 containerd[1702]: time="2025-02-13T16:03:37.662480683Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" returns successfully" Feb 13 16:03:37.663123 
containerd[1702]: time="2025-02-13T16:03:37.663075498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:6,}" Feb 13 16:03:37.667893 kubelet[2632]: I0213 16:03:37.667678 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df" Feb 13 16:03:37.669028 containerd[1702]: time="2025-02-13T16:03:37.669001950Z" level=info msg="StopPodSandbox for \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\"" Feb 13 16:03:37.669367 containerd[1702]: time="2025-02-13T16:03:37.669227656Z" level=info msg="Ensure that sandbox 9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df in task-service has been cleanup successfully" Feb 13 16:03:37.669431 containerd[1702]: time="2025-02-13T16:03:37.669381060Z" level=info msg="TearDown network for sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\" successfully" Feb 13 16:03:37.669431 containerd[1702]: time="2025-02-13T16:03:37.669395761Z" level=info msg="StopPodSandbox for \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\" returns successfully" Feb 13 16:03:37.671106 containerd[1702]: time="2025-02-13T16:03:37.671078704Z" level=info msg="StopPodSandbox for \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\"" Feb 13 16:03:37.671436 containerd[1702]: time="2025-02-13T16:03:37.671188006Z" level=info msg="TearDown network for sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\" successfully" Feb 13 16:03:37.671436 containerd[1702]: time="2025-02-13T16:03:37.671206907Z" level=info msg="StopPodSandbox for \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\" returns successfully" Feb 13 16:03:37.671698 containerd[1702]: time="2025-02-13T16:03:37.671678719Z" level=info msg="StopPodSandbox for 
\"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\"" Feb 13 16:03:37.672035 containerd[1702]: time="2025-02-13T16:03:37.671759021Z" level=info msg="TearDown network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" successfully" Feb 13 16:03:37.672035 containerd[1702]: time="2025-02-13T16:03:37.671776822Z" level=info msg="StopPodSandbox for \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" returns successfully" Feb 13 16:03:37.672146 containerd[1702]: time="2025-02-13T16:03:37.672088930Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\"" Feb 13 16:03:37.672195 containerd[1702]: time="2025-02-13T16:03:37.672167432Z" level=info msg="TearDown network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" successfully" Feb 13 16:03:37.672195 containerd[1702]: time="2025-02-13T16:03:37.672181532Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" returns successfully" Feb 13 16:03:37.672743 containerd[1702]: time="2025-02-13T16:03:37.672558042Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\"" Feb 13 16:03:37.672743 containerd[1702]: time="2025-02-13T16:03:37.672643744Z" level=info msg="TearDown network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" successfully" Feb 13 16:03:37.672743 containerd[1702]: time="2025-02-13T16:03:37.672656344Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" returns successfully" Feb 13 16:03:37.673473 containerd[1702]: time="2025-02-13T16:03:37.673129956Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" Feb 13 16:03:37.673473 containerd[1702]: time="2025-02-13T16:03:37.673216659Z" level=info msg="TearDown network for sandbox 
\"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" successfully" Feb 13 16:03:37.673473 containerd[1702]: time="2025-02-13T16:03:37.673230559Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" returns successfully" Feb 13 16:03:37.674150 containerd[1702]: time="2025-02-13T16:03:37.673719071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:6,}" Feb 13 16:03:37.680771 systemd[1]: run-netns-cni\x2d565e2463\x2d1dce\x2d1b20\x2d5292\x2dfbefeb52f0e9.mount: Deactivated successfully. Feb 13 16:03:37.680900 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df-shm.mount: Deactivated successfully. Feb 13 16:03:37.680983 systemd[1]: run-netns-cni\x2d604993f4\x2ddd50\x2d99ed\x2d345b\x2d4627e29fc18d.mount: Deactivated successfully. Feb 13 16:03:37.681053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498-shm.mount: Deactivated successfully. 
Feb 13 16:03:37.873998 containerd[1702]: time="2025-02-13T16:03:37.873790902Z" level=error msg="Failed to destroy network for sandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:37.875552 containerd[1702]: time="2025-02-13T16:03:37.874464920Z" level=error msg="encountered an error cleaning up failed sandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:37.875552 containerd[1702]: time="2025-02-13T16:03:37.875285341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:37.876662 kubelet[2632]: E0213 16:03:37.876636 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:37.876932 kubelet[2632]: E0213 16:03:37.876918 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:37.877127 kubelet[2632]: E0213 16:03:37.877113 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qrjqj" Feb 13 16:03:37.877296 kubelet[2632]: E0213 16:03:37.877282 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qrjqj_default(3fac037c-f117-4e6d-ad0c-8c56076999ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qrjqj" podUID="3fac037c-f117-4e6d-ad0c-8c56076999ef" Feb 13 16:03:37.904696 containerd[1702]: time="2025-02-13T16:03:37.904635394Z" level=error msg="Failed to destroy network for sandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Feb 13 16:03:37.905239 containerd[1702]: time="2025-02-13T16:03:37.905195508Z" level=error msg="encountered an error cleaning up failed sandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:37.905429 containerd[1702]: time="2025-02-13T16:03:37.905400513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:37.905796 kubelet[2632]: E0213 16:03:37.905769 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:03:37.906081 kubelet[2632]: E0213 16:03:37.906061 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:37.906214 
kubelet[2632]: E0213 16:03:37.906202 2632 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7djmb" Feb 13 16:03:37.906704 kubelet[2632]: E0213 16:03:37.906675 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7djmb_calico-system(a471907f-254d-4cb5-b3e1-26f5a10be156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7djmb" podUID="a471907f-254d-4cb5-b3e1-26f5a10be156" Feb 13 16:03:38.255517 containerd[1702]: time="2025-02-13T16:03:38.254315962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:38.257927 containerd[1702]: time="2025-02-13T16:03:38.257862353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 16:03:38.261168 containerd[1702]: time="2025-02-13T16:03:38.261099536Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:38.265062 containerd[1702]: 
time="2025-02-13T16:03:38.265032336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:38.265739 containerd[1702]: time="2025-02-13T16:03:38.265573950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.663515594s" Feb 13 16:03:38.265739 containerd[1702]: time="2025-02-13T16:03:38.265612851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 16:03:38.272827 containerd[1702]: time="2025-02-13T16:03:38.272795335Z" level=info msg="CreateContainer within sandbox \"1d94a7c08c7615412c8ffee4f7923aae9466c617a4ec5f4ab186c45699688283\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 16:03:38.310866 containerd[1702]: time="2025-02-13T16:03:38.310812610Z" level=info msg="CreateContainer within sandbox \"1d94a7c08c7615412c8ffee4f7923aae9466c617a4ec5f4ab186c45699688283\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"15f66ef3d3a1f1fb9ca2e2691bc6ea855aa9f72c27a5cacbbacc6fe154a3dfb0\"" Feb 13 16:03:38.311559 containerd[1702]: time="2025-02-13T16:03:38.311452227Z" level=info msg="StartContainer for \"15f66ef3d3a1f1fb9ca2e2691bc6ea855aa9f72c27a5cacbbacc6fe154a3dfb0\"" Feb 13 16:03:38.339727 systemd[1]: Started cri-containerd-15f66ef3d3a1f1fb9ca2e2691bc6ea855aa9f72c27a5cacbbacc6fe154a3dfb0.scope - libcontainer container 15f66ef3d3a1f1fb9ca2e2691bc6ea855aa9f72c27a5cacbbacc6fe154a3dfb0. 
Feb 13 16:03:38.374041 containerd[1702]: time="2025-02-13T16:03:38.373936129Z" level=info msg="StartContainer for \"15f66ef3d3a1f1fb9ca2e2691bc6ea855aa9f72c27a5cacbbacc6fe154a3dfb0\" returns successfully" Feb 13 16:03:38.471404 kubelet[2632]: E0213 16:03:38.471297 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:38.610881 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 16:03:38.611026 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 16:03:38.673258 kubelet[2632]: I0213 16:03:38.673219 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401" Feb 13 16:03:38.674445 containerd[1702]: time="2025-02-13T16:03:38.674037626Z" level=info msg="StopPodSandbox for \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\"" Feb 13 16:03:38.674445 containerd[1702]: time="2025-02-13T16:03:38.674294432Z" level=info msg="Ensure that sandbox 9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401 in task-service has been cleanup successfully" Feb 13 16:03:38.675108 containerd[1702]: time="2025-02-13T16:03:38.675009251Z" level=info msg="TearDown network for sandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\" successfully" Feb 13 16:03:38.675108 containerd[1702]: time="2025-02-13T16:03:38.675046952Z" level=info msg="StopPodSandbox for \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\" returns successfully" Feb 13 16:03:38.675842 containerd[1702]: time="2025-02-13T16:03:38.675586566Z" level=info msg="StopPodSandbox for \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\"" Feb 13 16:03:38.675842 containerd[1702]: time="2025-02-13T16:03:38.675705769Z" level=info msg="TearDown network for sandbox 
\"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\" successfully" Feb 13 16:03:38.675842 containerd[1702]: time="2025-02-13T16:03:38.675721269Z" level=info msg="StopPodSandbox for \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\" returns successfully" Feb 13 16:03:38.676958 containerd[1702]: time="2025-02-13T16:03:38.676677794Z" level=info msg="StopPodSandbox for \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\"" Feb 13 16:03:38.679555 containerd[1702]: time="2025-02-13T16:03:38.677440913Z" level=info msg="TearDown network for sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\" successfully" Feb 13 16:03:38.679555 containerd[1702]: time="2025-02-13T16:03:38.677600817Z" level=info msg="StopPodSandbox for \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\" returns successfully" Feb 13 16:03:38.680373 containerd[1702]: time="2025-02-13T16:03:38.680023279Z" level=info msg="StopPodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\"" Feb 13 16:03:38.680373 containerd[1702]: time="2025-02-13T16:03:38.680130482Z" level=info msg="TearDown network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" successfully" Feb 13 16:03:38.680373 containerd[1702]: time="2025-02-13T16:03:38.680144282Z" level=info msg="StopPodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" returns successfully" Feb 13 16:03:38.680914 containerd[1702]: time="2025-02-13T16:03:38.680730297Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\"" Feb 13 16:03:38.680914 containerd[1702]: time="2025-02-13T16:03:38.680827800Z" level=info msg="TearDown network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" successfully" Feb 13 16:03:38.680914 containerd[1702]: time="2025-02-13T16:03:38.680843300Z" level=info msg="StopPodSandbox for 
\"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" returns successfully" Feb 13 16:03:38.682590 containerd[1702]: time="2025-02-13T16:03:38.681354714Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\"" Feb 13 16:03:38.682590 containerd[1702]: time="2025-02-13T16:03:38.682475842Z" level=info msg="TearDown network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" successfully" Feb 13 16:03:38.682590 containerd[1702]: time="2025-02-13T16:03:38.682495243Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" returns successfully" Feb 13 16:03:38.684557 kubelet[2632]: I0213 16:03:38.682786 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164" Feb 13 16:03:38.684647 containerd[1702]: time="2025-02-13T16:03:38.683241362Z" level=info msg="StopPodSandbox for \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\"" Feb 13 16:03:38.684647 containerd[1702]: time="2025-02-13T16:03:38.683442467Z" level=info msg="Ensure that sandbox 0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164 in task-service has been cleanup successfully" Feb 13 16:03:38.684647 containerd[1702]: time="2025-02-13T16:03:38.683773276Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" Feb 13 16:03:38.684647 containerd[1702]: time="2025-02-13T16:03:38.683855978Z" level=info msg="TearDown network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" successfully" Feb 13 16:03:38.684647 containerd[1702]: time="2025-02-13T16:03:38.683868978Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" returns successfully" Feb 13 16:03:38.684647 containerd[1702]: time="2025-02-13T16:03:38.684514795Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:7,}" Feb 13 16:03:38.685792 containerd[1702]: time="2025-02-13T16:03:38.685621323Z" level=info msg="TearDown network for sandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\" successfully" Feb 13 16:03:38.685792 containerd[1702]: time="2025-02-13T16:03:38.685646624Z" level=info msg="StopPodSandbox for \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\" returns successfully" Feb 13 16:03:38.686051 containerd[1702]: time="2025-02-13T16:03:38.686030033Z" level=info msg="StopPodSandbox for \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\"" Feb 13 16:03:38.686210 containerd[1702]: time="2025-02-13T16:03:38.686191738Z" level=info msg="TearDown network for sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\" successfully" Feb 13 16:03:38.686332 containerd[1702]: time="2025-02-13T16:03:38.686276740Z" level=info msg="StopPodSandbox for \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\" returns successfully" Feb 13 16:03:38.687288 containerd[1702]: time="2025-02-13T16:03:38.687229864Z" level=info msg="StopPodSandbox for \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\"" Feb 13 16:03:38.687390 containerd[1702]: time="2025-02-13T16:03:38.687371268Z" level=info msg="TearDown network for sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\" successfully" Feb 13 16:03:38.687448 containerd[1702]: time="2025-02-13T16:03:38.687391868Z" level=info msg="StopPodSandbox for \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\" returns successfully" Feb 13 16:03:38.687845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164-shm.mount: Deactivated successfully. 
Feb 13 16:03:38.689201 containerd[1702]: time="2025-02-13T16:03:38.687863880Z" level=info msg="StopPodSandbox for \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\"" Feb 13 16:03:38.689201 containerd[1702]: time="2025-02-13T16:03:38.688231390Z" level=info msg="TearDown network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" successfully" Feb 13 16:03:38.689201 containerd[1702]: time="2025-02-13T16:03:38.688250090Z" level=info msg="StopPodSandbox for \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" returns successfully" Feb 13 16:03:38.687970 systemd[1]: run-netns-cni\x2d2682ea9b\x2d0b9b\x2d1610\x2d6460\x2da1148926c0a1.mount: Deactivated successfully. Feb 13 16:03:38.688058 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401-shm.mount: Deactivated successfully. Feb 13 16:03:38.688145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4096556304.mount: Deactivated successfully. 
Feb 13 16:03:38.690651 containerd[1702]: time="2025-02-13T16:03:38.690205741Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\"" Feb 13 16:03:38.690651 containerd[1702]: time="2025-02-13T16:03:38.690297843Z" level=info msg="TearDown network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" successfully" Feb 13 16:03:38.690651 containerd[1702]: time="2025-02-13T16:03:38.690314743Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" returns successfully" Feb 13 16:03:38.692933 containerd[1702]: time="2025-02-13T16:03:38.692900910Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\"" Feb 13 16:03:38.693278 containerd[1702]: time="2025-02-13T16:03:38.693248619Z" level=info msg="TearDown network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" successfully" Feb 13 16:03:38.693341 containerd[1702]: time="2025-02-13T16:03:38.693316520Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" returns successfully" Feb 13 16:03:38.695391 systemd[1]: run-netns-cni\x2d6621d3ad\x2de020\x2df33e\x2d5d20\x2d2afb40413009.mount: Deactivated successfully. 
Feb 13 16:03:38.695681 containerd[1702]: time="2025-02-13T16:03:38.695525977Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" Feb 13 16:03:38.695776 containerd[1702]: time="2025-02-13T16:03:38.695757083Z" level=info msg="TearDown network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" successfully" Feb 13 16:03:38.695827 containerd[1702]: time="2025-02-13T16:03:38.695778083Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" returns successfully" Feb 13 16:03:38.697375 containerd[1702]: time="2025-02-13T16:03:38.697346624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:7,}" Feb 13 16:03:38.723594 kubelet[2632]: I0213 16:03:38.723428 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-gpgtm" podStartSLOduration=5.125275317 podStartE2EDuration="24.723363291s" podCreationTimestamp="2025-02-13 16:03:14 +0000 UTC" firstStartedPulling="2025-02-13 16:03:18.667964889 +0000 UTC m=+4.750485981" lastFinishedPulling="2025-02-13 16:03:38.266052963 +0000 UTC m=+24.348573955" observedRunningTime="2025-02-13 16:03:38.723206587 +0000 UTC m=+24.805727679" watchObservedRunningTime="2025-02-13 16:03:38.723363291 +0000 UTC m=+24.805884383" Feb 13 16:03:38.980826 systemd-networkd[1441]: calif5830dd4c5d: Link UP Feb 13 16:03:38.981099 systemd-networkd[1441]: calif5830dd4c5d: Gained carrier Feb 13 16:03:38.991206 systemd-networkd[1441]: cali2fac91a65e1: Link UP Feb 13 16:03:38.991419 systemd-networkd[1441]: cali2fac91a65e1: Gained carrier Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.816 [INFO][3673] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.831 [INFO][3673] cni-plugin/plugin.go 325: Calico 
CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0 nginx-deployment-6d5f899847- default 3fac037c-f117-4e6d-ad0c-8c56076999ef 1276 0 2025-02-13 16:03:30 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.22 nginx-deployment-6d5f899847-qrjqj eth0 default [] [] [kns.default ksa.default.default] calif5830dd4c5d [] []}} ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Namespace="default" Pod="nginx-deployment-6d5f899847-qrjqj" WorkloadEndpoint="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.832 [INFO][3673] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Namespace="default" Pod="nginx-deployment-6d5f899847-qrjqj" WorkloadEndpoint="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.884 [INFO][3699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" HandleID="k8s-pod-network.c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Workload="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.899 [INFO][3699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" HandleID="k8s-pod-network.c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Workload="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d2b20), Attrs:map[string]string{"namespace":"default", 
"node":"10.200.8.22", "pod":"nginx-deployment-6d5f899847-qrjqj", "timestamp":"2025-02-13 16:03:38.884259617 +0000 UTC"}, Hostname:"10.200.8.22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.899 [INFO][3699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.899 [INFO][3699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.899 [INFO][3699] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.22' Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.909 [INFO][3699] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" host="10.200.8.22" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.915 [INFO][3699] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.22" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.920 [INFO][3699] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="10.200.8.22" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.922 [INFO][3699] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="10.200.8.22" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.924 [INFO][3699] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="10.200.8.22" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.924 [INFO][3699] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" host="10.200.8.22" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 
16:03:38.926 [INFO][3699] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3 Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.933 [INFO][3699] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" host="10.200.8.22" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.950 [INFO][3699] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.65/26] block=192.168.83.64/26 handle="k8s-pod-network.c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" host="10.200.8.22" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.950 [INFO][3699] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.65/26] handle="k8s-pod-network.c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" host="10.200.8.22" Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.950 [INFO][3699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:03:39.012685 containerd[1702]: 2025-02-13 16:03:38.950 [INFO][3699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.65/26] IPv6=[] ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" HandleID="k8s-pod-network.c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Workload="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0" Feb 13 16:03:39.013691 containerd[1702]: 2025-02-13 16:03:38.953 [INFO][3673] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Namespace="default" Pod="nginx-deployment-6d5f899847-qrjqj" WorkloadEndpoint="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"3fac037c-f117-4e6d-ad0c-8c56076999ef", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.22", ContainerID:"", Pod:"nginx-deployment-6d5f899847-qrjqj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif5830dd4c5d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:03:39.013691 containerd[1702]: 2025-02-13 16:03:38.953 [INFO][3673] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.65/32] ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Namespace="default" Pod="nginx-deployment-6d5f899847-qrjqj" WorkloadEndpoint="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0" Feb 13 16:03:39.013691 containerd[1702]: 2025-02-13 16:03:38.953 [INFO][3673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5830dd4c5d ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Namespace="default" Pod="nginx-deployment-6d5f899847-qrjqj" WorkloadEndpoint="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0" Feb 13 16:03:39.013691 containerd[1702]: 2025-02-13 16:03:38.980 [INFO][3673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Namespace="default" Pod="nginx-deployment-6d5f899847-qrjqj" WorkloadEndpoint="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0" Feb 13 16:03:39.013691 containerd[1702]: 2025-02-13 16:03:38.982 [INFO][3673] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Namespace="default" Pod="nginx-deployment-6d5f899847-qrjqj" WorkloadEndpoint="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"3fac037c-f117-4e6d-ad0c-8c56076999ef", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 3, 30, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.22", ContainerID:"c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3", Pod:"nginx-deployment-6d5f899847-qrjqj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif5830dd4c5d", MAC:"96:4e:e1:b1:6e:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:03:39.013691 containerd[1702]: 2025-02-13 16:03:39.007 [INFO][3673] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3" Namespace="default" Pod="nginx-deployment-6d5f899847-qrjqj" WorkloadEndpoint="10.200.8.22-k8s-nginx--deployment--6d5f899847--qrjqj-eth0" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.815 [INFO][3682] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.830 [INFO][3682] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.22-k8s-csi--node--driver--7djmb-eth0 csi-node-driver- calico-system a471907f-254d-4cb5-b3e1-26f5a10be156 1200 0 2025-02-13 16:03:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.8.22 csi-node-driver-7djmb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2fac91a65e1 [] []}} ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Namespace="calico-system" Pod="csi-node-driver-7djmb" WorkloadEndpoint="10.200.8.22-k8s-csi--node--driver--7djmb-" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.830 [INFO][3682] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Namespace="calico-system" Pod="csi-node-driver-7djmb" WorkloadEndpoint="10.200.8.22-k8s-csi--node--driver--7djmb-eth0" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.884 [INFO][3695] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" HandleID="k8s-pod-network.2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Workload="10.200.8.22-k8s-csi--node--driver--7djmb-eth0" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.900 [INFO][3695] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" HandleID="k8s-pod-network.2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Workload="10.200.8.22-k8s-csi--node--driver--7djmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003198f0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.8.22", "pod":"csi-node-driver-7djmb", "timestamp":"2025-02-13 16:03:38.884006711 +0000 UTC"}, Hostname:"10.200.8.22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:03:39.014741 
containerd[1702]: 2025-02-13 16:03:38.900 [INFO][3695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.950 [INFO][3695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.951 [INFO][3695] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.22' Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.953 [INFO][3695] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" host="10.200.8.22" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.957 [INFO][3695] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.22" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.961 [INFO][3695] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="10.200.8.22" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.962 [INFO][3695] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="10.200.8.22" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.964 [INFO][3695] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="10.200.8.22" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.964 [INFO][3695] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" host="10.200.8.22" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.966 [INFO][3695] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.971 [INFO][3695] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 
handle="k8s-pod-network.2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" host="10.200.8.22" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.985 [INFO][3695] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.66/26] block=192.168.83.64/26 handle="k8s-pod-network.2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" host="10.200.8.22" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.985 [INFO][3695] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.66/26] handle="k8s-pod-network.2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" host="10.200.8.22" Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.985 [INFO][3695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:03:39.014741 containerd[1702]: 2025-02-13 16:03:38.985 [INFO][3695] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.66/26] IPv6=[] ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" HandleID="k8s-pod-network.2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Workload="10.200.8.22-k8s-csi--node--driver--7djmb-eth0" Feb 13 16:03:39.015738 containerd[1702]: 2025-02-13 16:03:38.987 [INFO][3682] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Namespace="calico-system" Pod="csi-node-driver-7djmb" WorkloadEndpoint="10.200.8.22-k8s-csi--node--driver--7djmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22-k8s-csi--node--driver--7djmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a471907f-254d-4cb5-b3e1-26f5a10be156", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 3, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.22", ContainerID:"", Pod:"csi-node-driver-7djmb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fac91a65e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:03:39.015738 containerd[1702]: 2025-02-13 16:03:38.988 [INFO][3682] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.66/32] ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Namespace="calico-system" Pod="csi-node-driver-7djmb" WorkloadEndpoint="10.200.8.22-k8s-csi--node--driver--7djmb-eth0" Feb 13 16:03:39.015738 containerd[1702]: 2025-02-13 16:03:38.988 [INFO][3682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fac91a65e1 ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Namespace="calico-system" Pod="csi-node-driver-7djmb" WorkloadEndpoint="10.200.8.22-k8s-csi--node--driver--7djmb-eth0" Feb 13 16:03:39.015738 containerd[1702]: 2025-02-13 16:03:38.990 [INFO][3682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Namespace="calico-system" Pod="csi-node-driver-7djmb" 
WorkloadEndpoint="10.200.8.22-k8s-csi--node--driver--7djmb-eth0" Feb 13 16:03:39.015738 containerd[1702]: 2025-02-13 16:03:38.992 [INFO][3682] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Namespace="calico-system" Pod="csi-node-driver-7djmb" WorkloadEndpoint="10.200.8.22-k8s-csi--node--driver--7djmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22-k8s-csi--node--driver--7djmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a471907f-254d-4cb5-b3e1-26f5a10be156", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 3, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.22", ContainerID:"2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db", Pod:"csi-node-driver-7djmb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fac91a65e1", MAC:"6e:75:b6:f4:9b:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:03:39.015738 
containerd[1702]: 2025-02-13 16:03:39.012 [INFO][3682] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db" Namespace="calico-system" Pod="csi-node-driver-7djmb" WorkloadEndpoint="10.200.8.22-k8s-csi--node--driver--7djmb-eth0" Feb 13 16:03:39.055828 containerd[1702]: time="2025-02-13T16:03:39.055470708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:03:39.056044 containerd[1702]: time="2025-02-13T16:03:39.055846118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:03:39.056044 containerd[1702]: time="2025-02-13T16:03:39.055914620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:03:39.056867 containerd[1702]: time="2025-02-13T16:03:39.056812043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:03:39.068062 containerd[1702]: time="2025-02-13T16:03:39.067940028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:03:39.069083 containerd[1702]: time="2025-02-13T16:03:39.068342038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:03:39.069083 containerd[1702]: time="2025-02-13T16:03:39.068383639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:03:39.069083 containerd[1702]: time="2025-02-13T16:03:39.068549344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:03:39.085029 systemd[1]: Started cri-containerd-c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3.scope - libcontainer container c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3. Feb 13 16:03:39.098753 systemd[1]: Started cri-containerd-2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db.scope - libcontainer container 2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db. Feb 13 16:03:39.131272 containerd[1702]: time="2025-02-13T16:03:39.130945744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7djmb,Uid:a471907f-254d-4cb5-b3e1-26f5a10be156,Namespace:calico-system,Attempt:7,} returns sandbox id \"2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db\"" Feb 13 16:03:39.137443 containerd[1702]: time="2025-02-13T16:03:39.137409110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 16:03:39.143515 containerd[1702]: time="2025-02-13T16:03:39.143480365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qrjqj,Uid:3fac037c-f117-4e6d-ad0c-8c56076999ef,Namespace:default,Attempt:7,} returns sandbox id \"c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3\"" Feb 13 16:03:39.472353 kubelet[2632]: E0213 16:03:39.472288 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:40.243568 kernel: bpftool[3926]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 16:03:40.472898 kubelet[2632]: E0213 16:03:40.472848 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:40.520635 systemd-networkd[1441]: vxlan.calico: Link UP Feb 13 16:03:40.520647 systemd-networkd[1441]: vxlan.calico: Gained carrier Feb 13 16:03:40.898655 systemd-networkd[1441]: calif5830dd4c5d: Gained IPv6LL Feb 
13 16:03:40.899294 systemd-networkd[1441]: cali2fac91a65e1: Gained IPv6LL Feb 13 16:03:40.944704 containerd[1702]: time="2025-02-13T16:03:40.944647611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:40.949773 containerd[1702]: time="2025-02-13T16:03:40.949721439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 16:03:40.955852 containerd[1702]: time="2025-02-13T16:03:40.955759792Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:40.966621 containerd[1702]: time="2025-02-13T16:03:40.966445162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:40.969439 containerd[1702]: time="2025-02-13T16:03:40.968205206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.830599092s" Feb 13 16:03:40.969439 containerd[1702]: time="2025-02-13T16:03:40.968258808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 16:03:40.969945 containerd[1702]: time="2025-02-13T16:03:40.969770946Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 16:03:40.970803 containerd[1702]: time="2025-02-13T16:03:40.970777571Z" level=info msg="CreateContainer within sandbox 
\"2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 16:03:41.036554 containerd[1702]: time="2025-02-13T16:03:41.034804890Z" level=info msg="CreateContainer within sandbox \"2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a4924b9b8f7cfcd2e874e5a81cc6cf3295188671b24cebefa4f6840b776acf29\"" Feb 13 16:03:41.036554 containerd[1702]: time="2025-02-13T16:03:41.035960319Z" level=info msg="StartContainer for \"a4924b9b8f7cfcd2e874e5a81cc6cf3295188671b24cebefa4f6840b776acf29\"" Feb 13 16:03:41.082989 systemd[1]: run-containerd-runc-k8s.io-a4924b9b8f7cfcd2e874e5a81cc6cf3295188671b24cebefa4f6840b776acf29-runc.WZQSp3.mount: Deactivated successfully. Feb 13 16:03:41.091670 systemd[1]: Started cri-containerd-a4924b9b8f7cfcd2e874e5a81cc6cf3295188671b24cebefa4f6840b776acf29.scope - libcontainer container a4924b9b8f7cfcd2e874e5a81cc6cf3295188671b24cebefa4f6840b776acf29. Feb 13 16:03:41.125133 containerd[1702]: time="2025-02-13T16:03:41.125091271Z" level=info msg="StartContainer for \"a4924b9b8f7cfcd2e874e5a81cc6cf3295188671b24cebefa4f6840b776acf29\" returns successfully" Feb 13 16:03:41.473776 kubelet[2632]: E0213 16:03:41.473720 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:42.114839 systemd-networkd[1441]: vxlan.calico: Gained IPv6LL Feb 13 16:03:42.474689 kubelet[2632]: E0213 16:03:42.474504 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:42.673374 kubelet[2632]: I0213 16:03:42.673243 2632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:03:42.694888 systemd[1]: run-containerd-runc-k8s.io-15f66ef3d3a1f1fb9ca2e2691bc6ea855aa9f72c27a5cacbbacc6fe154a3dfb0-runc.GQa55s.mount: Deactivated successfully. 
Feb 13 16:03:42.771999 systemd[1]: run-containerd-runc-k8s.io-15f66ef3d3a1f1fb9ca2e2691bc6ea855aa9f72c27a5cacbbacc6fe154a3dfb0-runc.RpeoTo.mount: Deactivated successfully. Feb 13 16:03:43.475051 kubelet[2632]: E0213 16:03:43.475004 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:43.941716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2678373933.mount: Deactivated successfully. Feb 13 16:03:44.476030 kubelet[2632]: E0213 16:03:44.475840 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:45.287802 containerd[1702]: time="2025-02-13T16:03:45.287738073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:45.290472 containerd[1702]: time="2025-02-13T16:03:45.290395340Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 16:03:45.294516 containerd[1702]: time="2025-02-13T16:03:45.294450642Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:45.299698 containerd[1702]: time="2025-02-13T16:03:45.299619973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:45.300709 containerd[1702]: time="2025-02-13T16:03:45.300524396Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 
4.330716149s" Feb 13 16:03:45.300709 containerd[1702]: time="2025-02-13T16:03:45.300592198Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 16:03:45.301978 containerd[1702]: time="2025-02-13T16:03:45.301934532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 16:03:45.302792 containerd[1702]: time="2025-02-13T16:03:45.302754852Z" level=info msg="CreateContainer within sandbox \"c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 16:03:45.348066 containerd[1702]: time="2025-02-13T16:03:45.348017596Z" level=info msg="CreateContainer within sandbox \"c55c41a0dc10fc290f9516ea1b7c9d8f259c46cd0e1978a64e891387591b80d3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"765381d1466939a7c42fb1c0550f385ba9806e9aa7a0b683c1c8d8e86513d1f4\"" Feb 13 16:03:45.348851 containerd[1702]: time="2025-02-13T16:03:45.348651812Z" level=info msg="StartContainer for \"765381d1466939a7c42fb1c0550f385ba9806e9aa7a0b683c1c8d8e86513d1f4\"" Feb 13 16:03:45.385688 systemd[1]: Started cri-containerd-765381d1466939a7c42fb1c0550f385ba9806e9aa7a0b683c1c8d8e86513d1f4.scope - libcontainer container 765381d1466939a7c42fb1c0550f385ba9806e9aa7a0b683c1c8d8e86513d1f4. 
Feb 13 16:03:45.417332 containerd[1702]: time="2025-02-13T16:03:45.417281247Z" level=info msg="StartContainer for \"765381d1466939a7c42fb1c0550f385ba9806e9aa7a0b683c1c8d8e86513d1f4\" returns successfully" Feb 13 16:03:45.476235 kubelet[2632]: E0213 16:03:45.476186 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:46.476730 kubelet[2632]: E0213 16:03:46.476653 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:46.851377 containerd[1702]: time="2025-02-13T16:03:46.851321889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:46.853694 containerd[1702]: time="2025-02-13T16:03:46.853622947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 16:03:46.856683 containerd[1702]: time="2025-02-13T16:03:46.856623323Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:46.861363 containerd[1702]: time="2025-02-13T16:03:46.861300641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:03:46.862122 containerd[1702]: time="2025-02-13T16:03:46.861961558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.559993825s" Feb 13 16:03:46.862122 containerd[1702]: time="2025-02-13T16:03:46.862004259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 16:03:46.863887 containerd[1702]: time="2025-02-13T16:03:46.863818705Z" level=info msg="CreateContainer within sandbox \"2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 16:03:46.904985 containerd[1702]: time="2025-02-13T16:03:46.904932344Z" level=info msg="CreateContainer within sandbox \"2b55b6393709d6ea5d129bd80f8efa2cd5583d259953bdb42b4f88f6c079b6db\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"80fa3b2a8d681562767b56c6f7516b605ea01cfbdc2fa00be50781ddba021336\"" Feb 13 16:03:46.905557 containerd[1702]: time="2025-02-13T16:03:46.905506858Z" level=info msg="StartContainer for \"80fa3b2a8d681562767b56c6f7516b605ea01cfbdc2fa00be50781ddba021336\"" Feb 13 16:03:46.940696 systemd[1]: Started cri-containerd-80fa3b2a8d681562767b56c6f7516b605ea01cfbdc2fa00be50781ddba021336.scope - libcontainer container 80fa3b2a8d681562767b56c6f7516b605ea01cfbdc2fa00be50781ddba021336. 
Feb 13 16:03:46.971876 containerd[1702]: time="2025-02-13T16:03:46.971299321Z" level=info msg="StartContainer for \"80fa3b2a8d681562767b56c6f7516b605ea01cfbdc2fa00be50781ddba021336\" returns successfully" Feb 13 16:03:47.477734 kubelet[2632]: E0213 16:03:47.477670 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:47.572406 kubelet[2632]: I0213 16:03:47.572353 2632 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 16:03:47.572406 kubelet[2632]: I0213 16:03:47.572401 2632 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 16:03:47.750066 kubelet[2632]: I0213 16:03:47.749928 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-7djmb" podStartSLOduration=26.024363206 podStartE2EDuration="33.749889679s" podCreationTimestamp="2025-02-13 16:03:14 +0000 UTC" firstStartedPulling="2025-02-13 16:03:39.136757393 +0000 UTC m=+25.219278385" lastFinishedPulling="2025-02-13 16:03:46.862283666 +0000 UTC m=+32.944804858" observedRunningTime="2025-02-13 16:03:47.749731475 +0000 UTC m=+33.832252567" watchObservedRunningTime="2025-02-13 16:03:47.749889679 +0000 UTC m=+33.832410671" Feb 13 16:03:47.750325 kubelet[2632]: I0213 16:03:47.750084 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-qrjqj" podStartSLOduration=11.593780472 podStartE2EDuration="17.750053583s" podCreationTimestamp="2025-02-13 16:03:30 +0000 UTC" firstStartedPulling="2025-02-13 16:03:39.144909502 +0000 UTC m=+25.227430494" lastFinishedPulling="2025-02-13 16:03:45.301182613 +0000 UTC m=+31.383703605" observedRunningTime="2025-02-13 16:03:45.734996276 +0000 UTC m=+31.817517268" 
watchObservedRunningTime="2025-02-13 16:03:47.750053583 +0000 UTC m=+33.832574675" Feb 13 16:03:48.478716 kubelet[2632]: E0213 16:03:48.478652 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:49.478963 kubelet[2632]: E0213 16:03:49.478896 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:50.479813 kubelet[2632]: E0213 16:03:50.479735 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:51.480506 kubelet[2632]: E0213 16:03:51.480437 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:52.481663 kubelet[2632]: E0213 16:03:52.481592 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:53.482211 kubelet[2632]: E0213 16:03:53.482135 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:54.452317 kubelet[2632]: E0213 16:03:54.452253 2632 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:54.482646 kubelet[2632]: E0213 16:03:54.482580 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:55.456638 kubelet[2632]: I0213 16:03:55.456583 2632 topology_manager.go:215] "Topology Admit Handler" podUID="5deec7c1-bab6-4079-af42-81f34ece2aef" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 16:03:55.464195 systemd[1]: Created slice kubepods-besteffort-pod5deec7c1_bab6_4079_af42_81f34ece2aef.slice - libcontainer container kubepods-besteffort-pod5deec7c1_bab6_4079_af42_81f34ece2aef.slice. 
Feb 13 16:03:55.483124 kubelet[2632]: E0213 16:03:55.483079 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:55.638798 kubelet[2632]: I0213 16:03:55.638710 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5deec7c1-bab6-4079-af42-81f34ece2aef-data\") pod \"nfs-server-provisioner-0\" (UID: \"5deec7c1-bab6-4079-af42-81f34ece2aef\") " pod="default/nfs-server-provisioner-0" Feb 13 16:03:55.638798 kubelet[2632]: I0213 16:03:55.638805 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75b7w\" (UniqueName: \"kubernetes.io/projected/5deec7c1-bab6-4079-af42-81f34ece2aef-kube-api-access-75b7w\") pod \"nfs-server-provisioner-0\" (UID: \"5deec7c1-bab6-4079-af42-81f34ece2aef\") " pod="default/nfs-server-provisioner-0" Feb 13 16:03:55.767026 containerd[1702]: time="2025-02-13T16:03:55.766900855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5deec7c1-bab6-4079-af42-81f34ece2aef,Namespace:default,Attempt:0,}" Feb 13 16:03:55.929599 systemd-networkd[1441]: cali60e51b789ff: Link UP Feb 13 16:03:55.929843 systemd-networkd[1441]: cali60e51b789ff: Gained carrier Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.860 [INFO][4253] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.22-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 5deec7c1-bab6-4079-af42-81f34ece2aef 1416 0 2025-02-13 16:03:55 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner 
release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.8.22 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.22-k8s-nfs--server--provisioner--0-" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.860 [INFO][4253] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.22-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.886 [INFO][4264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" HandleID="k8s-pod-network.b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Workload="10.200.8.22-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.897 [INFO][4264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" HandleID="k8s-pod-network.b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Workload="10.200.8.22-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba660), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.22", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 
16:03:55.886937643 +0000 UTC"}, Hostname:"10.200.8.22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.897 [INFO][4264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.897 [INFO][4264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.897 [INFO][4264] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.22' Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.899 [INFO][4264] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" host="10.200.8.22" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.902 [INFO][4264] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.22" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.906 [INFO][4264] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="10.200.8.22" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.907 [INFO][4264] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="10.200.8.22" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.909 [INFO][4264] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="10.200.8.22" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.909 [INFO][4264] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" host="10.200.8.22" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.910 [INFO][4264] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115 Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.918 [INFO][4264] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" host="10.200.8.22" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.924 [INFO][4264] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.67/26] block=192.168.83.64/26 handle="k8s-pod-network.b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" host="10.200.8.22" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.924 [INFO][4264] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.67/26] handle="k8s-pod-network.b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" host="10.200.8.22" Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.924 [INFO][4264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:03:55.945245 containerd[1702]: 2025-02-13 16:03:55.924 [INFO][4264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.67/26] IPv6=[] ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" HandleID="k8s-pod-network.b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Workload="10.200.8.22-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:03:55.946313 containerd[1702]: 2025-02-13 16:03:55.925 [INFO][4253] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.22-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5deec7c1-bab6-4079-af42-81f34ece2aef", ResourceVersion:"1416", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 3, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.22", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.83.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:03:55.946313 containerd[1702]: 2025-02-13 16:03:55.925 [INFO][4253] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.67/32] ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.22-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:03:55.946313 containerd[1702]: 2025-02-13 16:03:55.925 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.22-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:03:55.946313 containerd[1702]: 2025-02-13 16:03:55.929 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.22-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:03:55.946682 containerd[1702]: 2025-02-13 16:03:55.930 [INFO][4253] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.22-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5deec7c1-bab6-4079-af42-81f34ece2aef", ResourceVersion:"1416", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 3, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.22", ContainerID:"b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.83.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"92:3d:12:57:03:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:03:55.946682 containerd[1702]: 2025-02-13 16:03:55.943 [INFO][4253] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.22-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:03:55.979500 containerd[1702]: time="2025-02-13T16:03:55.979314265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:03:55.979500 containerd[1702]: time="2025-02-13T16:03:55.979366066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:03:55.979500 containerd[1702]: time="2025-02-13T16:03:55.979378866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:03:55.979500 containerd[1702]: time="2025-02-13T16:03:55.979469369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:03:56.009714 systemd[1]: Started cri-containerd-b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115.scope - libcontainer container b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115. Feb 13 16:03:56.052412 containerd[1702]: time="2025-02-13T16:03:56.052358722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5deec7c1-bab6-4079-af42-81f34ece2aef,Namespace:default,Attempt:0,} returns sandbox id \"b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115\"" Feb 13 16:03:56.054297 containerd[1702]: time="2025-02-13T16:03:56.054236167Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 16:03:56.484061 kubelet[2632]: E0213 16:03:56.483999 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:56.753358 systemd[1]: run-containerd-runc-k8s.io-b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115-runc.HLYGDZ.mount: Deactivated successfully. Feb 13 16:03:57.154857 systemd-networkd[1441]: cali60e51b789ff: Gained IPv6LL Feb 13 16:03:57.484525 kubelet[2632]: E0213 16:03:57.484362 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:58.484807 kubelet[2632]: E0213 16:03:58.484686 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:03:58.610257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297328258.mount: Deactivated successfully. 
Feb 13 16:03:59.486550 kubelet[2632]: E0213 16:03:59.486464 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:00.486933 kubelet[2632]: E0213 16:04:00.486881 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:01.488034 kubelet[2632]: E0213 16:04:01.487968 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:02.488983 kubelet[2632]: E0213 16:04:02.488914 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:03.489393 kubelet[2632]: E0213 16:04:03.489324 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:04.490484 kubelet[2632]: E0213 16:04:04.490414 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:05.491067 kubelet[2632]: E0213 16:04:05.490988 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:06.061084 containerd[1702]: time="2025-02-13T16:04:06.061008951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:04:06.066757 containerd[1702]: time="2025-02-13T16:04:06.066673393Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Feb 13 16:04:06.070542 containerd[1702]: time="2025-02-13T16:04:06.070491888Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:04:06.081645 
containerd[1702]: time="2025-02-13T16:04:06.081580465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:04:06.082838 containerd[1702]: time="2025-02-13T16:04:06.082680093Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 10.028409725s" Feb 13 16:04:06.082838 containerd[1702]: time="2025-02-13T16:04:06.082724394Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 16:04:06.084998 containerd[1702]: time="2025-02-13T16:04:06.084968050Z" level=info msg="CreateContainer within sandbox \"b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 16:04:06.131719 containerd[1702]: time="2025-02-13T16:04:06.131655817Z" level=info msg="CreateContainer within sandbox \"b288cfdef6c3da95cf2cc5b6d60f567af706b5eaf2e58e98a6c6f8c857da0115\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"952da7d6a7d83fabf1c6effa2a8dc2496b05c4f961ac494d4e7c41d68fe80144\"" Feb 13 16:04:06.132441 containerd[1702]: time="2025-02-13T16:04:06.132320034Z" level=info msg="StartContainer for \"952da7d6a7d83fabf1c6effa2a8dc2496b05c4f961ac494d4e7c41d68fe80144\"" Feb 13 16:04:06.167671 systemd[1]: Started cri-containerd-952da7d6a7d83fabf1c6effa2a8dc2496b05c4f961ac494d4e7c41d68fe80144.scope - libcontainer container 
952da7d6a7d83fabf1c6effa2a8dc2496b05c4f961ac494d4e7c41d68fe80144. Feb 13 16:04:06.197949 containerd[1702]: time="2025-02-13T16:04:06.197896973Z" level=info msg="StartContainer for \"952da7d6a7d83fabf1c6effa2a8dc2496b05c4f961ac494d4e7c41d68fe80144\" returns successfully" Feb 13 16:04:06.491809 kubelet[2632]: E0213 16:04:06.491740 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:06.799064 kubelet[2632]: I0213 16:04:06.798890 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.7694997510000001 podStartE2EDuration="11.798846599s" podCreationTimestamp="2025-02-13 16:03:55 +0000 UTC" firstStartedPulling="2025-02-13 16:03:56.053771756 +0000 UTC m=+42.136292748" lastFinishedPulling="2025-02-13 16:04:06.083118504 +0000 UTC m=+52.165639596" observedRunningTime="2025-02-13 16:04:06.798694295 +0000 UTC m=+52.881215387" watchObservedRunningTime="2025-02-13 16:04:06.798846599 +0000 UTC m=+52.881367691" Feb 13 16:04:07.492823 kubelet[2632]: E0213 16:04:07.492753 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:08.493154 kubelet[2632]: E0213 16:04:08.493088 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:09.493407 kubelet[2632]: E0213 16:04:09.493352 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:10.493645 kubelet[2632]: E0213 16:04:10.493583 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:11.494739 kubelet[2632]: E0213 16:04:11.494669 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:12.495319 
kubelet[2632]: E0213 16:04:12.495249 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:13.496050 kubelet[2632]: E0213 16:04:13.495982 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:13.722083 update_engine[1684]: I20250213 16:04:13.721973 1684 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 16:04:13.722083 update_engine[1684]: I20250213 16:04:13.722060 1684 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 16:04:13.722765 update_engine[1684]: I20250213 16:04:13.722342 1684 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 16:04:13.723100 update_engine[1684]: I20250213 16:04:13.723065 1684 omaha_request_params.cc:62] Current group set to beta Feb 13 16:04:13.723410 update_engine[1684]: I20250213 16:04:13.723225 1684 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 16:04:13.723410 update_engine[1684]: I20250213 16:04:13.723248 1684 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 16:04:13.723410 update_engine[1684]: I20250213 16:04:13.723271 1684 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 16:04:13.723410 update_engine[1684]: I20250213 16:04:13.723315 1684 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 16:04:13.723654 update_engine[1684]: I20250213 16:04:13.723428 1684 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 16:04:13.723654 update_engine[1684]: I20250213 16:04:13.723443 1684 omaha_request_action.cc:272] Request: Feb 13 16:04:13.723654 update_engine[1684]: Feb 13 16:04:13.723654 update_engine[1684]: Feb 13 16:04:13.723654 update_engine[1684]: Feb 13 16:04:13.723654 update_engine[1684]: Feb 13 16:04:13.723654 update_engine[1684]: Feb 13 16:04:13.723654 update_engine[1684]: Feb 13 16:04:13.723654 update_engine[1684]: Feb 13 16:04:13.723654 update_engine[1684]: Feb 13 16:04:13.723654 update_engine[1684]: I20250213 16:04:13.723455 1684 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 16:04:13.724124 locksmithd[1718]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 16:04:13.725423 update_engine[1684]: I20250213 16:04:13.725390 1684 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 16:04:13.725826 update_engine[1684]: I20250213 16:04:13.725795 1684 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 16:04:13.748424 update_engine[1684]: E20250213 16:04:13.748236 1684 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 16:04:13.748424 update_engine[1684]: I20250213 16:04:13.748382 1684 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 16:04:14.452571 kubelet[2632]: E0213 16:04:14.452483 2632 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:14.476953 containerd[1702]: time="2025-02-13T16:04:14.476892762Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" Feb 13 16:04:14.477945 containerd[1702]: time="2025-02-13T16:04:14.477049666Z" level=info msg="TearDown network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" successfully" Feb 13 16:04:14.477945 containerd[1702]: time="2025-02-13T16:04:14.477069567Z" level=info msg="StopPodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" returns successfully" Feb 13 16:04:14.477945 containerd[1702]: time="2025-02-13T16:04:14.477590580Z" level=info msg="RemovePodSandbox for \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" Feb 13 16:04:14.477945 containerd[1702]: time="2025-02-13T16:04:14.477630481Z" level=info msg="Forcibly stopping sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\"" Feb 13 16:04:14.477945 containerd[1702]: time="2025-02-13T16:04:14.477732783Z" level=info msg="TearDown network for sandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" successfully" Feb 13 16:04:14.488605 containerd[1702]: time="2025-02-13T16:04:14.488557851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.488758 containerd[1702]: time="2025-02-13T16:04:14.488618352Z" level=info msg="RemovePodSandbox \"19bd494302a608a551796bc4e652fe98dbcf1b62dae569edafae9c0952ab8573\" returns successfully" Feb 13 16:04:14.489056 containerd[1702]: time="2025-02-13T16:04:14.488985261Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\"" Feb 13 16:04:14.489179 containerd[1702]: time="2025-02-13T16:04:14.489095464Z" level=info msg="TearDown network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" successfully" Feb 13 16:04:14.489179 containerd[1702]: time="2025-02-13T16:04:14.489110264Z" level=info msg="StopPodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" returns successfully" Feb 13 16:04:14.489455 containerd[1702]: time="2025-02-13T16:04:14.489413172Z" level=info msg="RemovePodSandbox for \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\"" Feb 13 16:04:14.489455 containerd[1702]: time="2025-02-13T16:04:14.489443273Z" level=info msg="Forcibly stopping sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\"" Feb 13 16:04:14.489595 containerd[1702]: time="2025-02-13T16:04:14.489518575Z" level=info msg="TearDown network for sandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" successfully" Feb 13 16:04:14.496864 kubelet[2632]: E0213 16:04:14.496834 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:14.501426 containerd[1702]: time="2025-02-13T16:04:14.501388168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.501520 containerd[1702]: time="2025-02-13T16:04:14.501445669Z" level=info msg="RemovePodSandbox \"b9ffd044585a6f57f5b57f37305e5c8d005de50c45ec7cfeacd98f51e27b05ca\" returns successfully" Feb 13 16:04:14.501885 containerd[1702]: time="2025-02-13T16:04:14.501854979Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\"" Feb 13 16:04:14.502066 containerd[1702]: time="2025-02-13T16:04:14.501962382Z" level=info msg="TearDown network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" successfully" Feb 13 16:04:14.502066 containerd[1702]: time="2025-02-13T16:04:14.501982783Z" level=info msg="StopPodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" returns successfully" Feb 13 16:04:14.502384 containerd[1702]: time="2025-02-13T16:04:14.502326091Z" level=info msg="RemovePodSandbox for \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\"" Feb 13 16:04:14.502384 containerd[1702]: time="2025-02-13T16:04:14.502354892Z" level=info msg="Forcibly stopping sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\"" Feb 13 16:04:14.502513 containerd[1702]: time="2025-02-13T16:04:14.502434994Z" level=info msg="TearDown network for sandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" successfully" Feb 13 16:04:14.511175 containerd[1702]: time="2025-02-13T16:04:14.510879303Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.511175 containerd[1702]: time="2025-02-13T16:04:14.510934604Z" level=info msg="RemovePodSandbox \"85d5899d3e822b7a011c79c5d494abbe1c4bb7355ec15a266a4577e3b168cbfd\" returns successfully" Feb 13 16:04:14.511808 containerd[1702]: time="2025-02-13T16:04:14.511452917Z" level=info msg="StopPodSandbox for \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\"" Feb 13 16:04:14.511808 containerd[1702]: time="2025-02-13T16:04:14.511582620Z" level=info msg="TearDown network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" successfully" Feb 13 16:04:14.511808 containerd[1702]: time="2025-02-13T16:04:14.511597220Z" level=info msg="StopPodSandbox for \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" returns successfully" Feb 13 16:04:14.512827 containerd[1702]: time="2025-02-13T16:04:14.512200135Z" level=info msg="RemovePodSandbox for \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\"" Feb 13 16:04:14.512827 containerd[1702]: time="2025-02-13T16:04:14.512227436Z" level=info msg="Forcibly stopping sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\"" Feb 13 16:04:14.512827 containerd[1702]: time="2025-02-13T16:04:14.512312238Z" level=info msg="TearDown network for sandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" successfully" Feb 13 16:04:14.520280 containerd[1702]: time="2025-02-13T16:04:14.520253134Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.520369 containerd[1702]: time="2025-02-13T16:04:14.520295535Z" level=info msg="RemovePodSandbox \"5469fa9ecd44e7ae18796f640d650209b933e46f30b6211f228ac1659bf67c12\" returns successfully" Feb 13 16:04:14.520641 containerd[1702]: time="2025-02-13T16:04:14.520614943Z" level=info msg="StopPodSandbox for \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\"" Feb 13 16:04:14.520737 containerd[1702]: time="2025-02-13T16:04:14.520712646Z" level=info msg="TearDown network for sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\" successfully" Feb 13 16:04:14.520737 containerd[1702]: time="2025-02-13T16:04:14.520732546Z" level=info msg="StopPodSandbox for \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\" returns successfully" Feb 13 16:04:14.521039 containerd[1702]: time="2025-02-13T16:04:14.521008253Z" level=info msg="RemovePodSandbox for \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\"" Feb 13 16:04:14.521105 containerd[1702]: time="2025-02-13T16:04:14.521038754Z" level=info msg="Forcibly stopping sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\"" Feb 13 16:04:14.521149 containerd[1702]: time="2025-02-13T16:04:14.521107555Z" level=info msg="TearDown network for sandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\" successfully" Feb 13 16:04:14.529757 containerd[1702]: time="2025-02-13T16:04:14.529725968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.529854 containerd[1702]: time="2025-02-13T16:04:14.529775270Z" level=info msg="RemovePodSandbox \"076fbedab649799fdca5ab8f9deb85a39e90ed55b93d9408b3112b90398022c2\" returns successfully" Feb 13 16:04:14.530172 containerd[1702]: time="2025-02-13T16:04:14.530084077Z" level=info msg="StopPodSandbox for \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\"" Feb 13 16:04:14.530272 containerd[1702]: time="2025-02-13T16:04:14.530179780Z" level=info msg="TearDown network for sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\" successfully" Feb 13 16:04:14.530272 containerd[1702]: time="2025-02-13T16:04:14.530194280Z" level=info msg="StopPodSandbox for \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\" returns successfully" Feb 13 16:04:14.530570 containerd[1702]: time="2025-02-13T16:04:14.530540589Z" level=info msg="RemovePodSandbox for \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\"" Feb 13 16:04:14.530653 containerd[1702]: time="2025-02-13T16:04:14.530571589Z" level=info msg="Forcibly stopping sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\"" Feb 13 16:04:14.530699 containerd[1702]: time="2025-02-13T16:04:14.530642091Z" level=info msg="TearDown network for sandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\" successfully" Feb 13 16:04:14.538310 containerd[1702]: time="2025-02-13T16:04:14.538281680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.538390 containerd[1702]: time="2025-02-13T16:04:14.538328681Z" level=info msg="RemovePodSandbox \"9afefe510216751999c8a72bdf21687fe2c08cb483c44a392deef5a631e378df\" returns successfully" Feb 13 16:04:14.538770 containerd[1702]: time="2025-02-13T16:04:14.538722991Z" level=info msg="StopPodSandbox for \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\"" Feb 13 16:04:14.538850 containerd[1702]: time="2025-02-13T16:04:14.538819693Z" level=info msg="TearDown network for sandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\" successfully" Feb 13 16:04:14.538850 containerd[1702]: time="2025-02-13T16:04:14.538835094Z" level=info msg="StopPodSandbox for \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\" returns successfully" Feb 13 16:04:14.539245 containerd[1702]: time="2025-02-13T16:04:14.539121901Z" level=info msg="RemovePodSandbox for \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\"" Feb 13 16:04:14.539245 containerd[1702]: time="2025-02-13T16:04:14.539225903Z" level=info msg="Forcibly stopping sandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\"" Feb 13 16:04:14.539392 containerd[1702]: time="2025-02-13T16:04:14.539307305Z" level=info msg="TearDown network for sandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\" successfully" Feb 13 16:04:14.549855 containerd[1702]: time="2025-02-13T16:04:14.549825065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.549940 containerd[1702]: time="2025-02-13T16:04:14.549870366Z" level=info msg="RemovePodSandbox \"0202969389e023fc85965e4663a07b156e246ec441214e42a6e5b65246cfe164\" returns successfully" Feb 13 16:04:14.550236 containerd[1702]: time="2025-02-13T16:04:14.550211275Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" Feb 13 16:04:14.550343 containerd[1702]: time="2025-02-13T16:04:14.550317278Z" level=info msg="TearDown network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" successfully" Feb 13 16:04:14.550406 containerd[1702]: time="2025-02-13T16:04:14.550337978Z" level=info msg="StopPodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" returns successfully" Feb 13 16:04:14.550676 containerd[1702]: time="2025-02-13T16:04:14.550652086Z" level=info msg="RemovePodSandbox for \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" Feb 13 16:04:14.550751 containerd[1702]: time="2025-02-13T16:04:14.550676186Z" level=info msg="Forcibly stopping sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\"" Feb 13 16:04:14.550796 containerd[1702]: time="2025-02-13T16:04:14.550748188Z" level=info msg="TearDown network for sandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" successfully" Feb 13 16:04:14.565783 containerd[1702]: time="2025-02-13T16:04:14.565335749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.565783 containerd[1702]: time="2025-02-13T16:04:14.565401950Z" level=info msg="RemovePodSandbox \"b057ab6178f4308be21bb890e7adc2437b5cb7c2bad3aa0c1dfd79a512343635\" returns successfully" Feb 13 16:04:14.566483 containerd[1702]: time="2025-02-13T16:04:14.566457777Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\"" Feb 13 16:04:14.566937 containerd[1702]: time="2025-02-13T16:04:14.566723383Z" level=info msg="TearDown network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" successfully" Feb 13 16:04:14.566937 containerd[1702]: time="2025-02-13T16:04:14.566741884Z" level=info msg="StopPodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" returns successfully" Feb 13 16:04:14.567158 containerd[1702]: time="2025-02-13T16:04:14.567127893Z" level=info msg="RemovePodSandbox for \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\"" Feb 13 16:04:14.567217 containerd[1702]: time="2025-02-13T16:04:14.567158594Z" level=info msg="Forcibly stopping sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\"" Feb 13 16:04:14.567285 containerd[1702]: time="2025-02-13T16:04:14.567233896Z" level=info msg="TearDown network for sandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" successfully" Feb 13 16:04:14.575318 containerd[1702]: time="2025-02-13T16:04:14.575277995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.575479 containerd[1702]: time="2025-02-13T16:04:14.575330096Z" level=info msg="RemovePodSandbox \"eadb0abb25b073ec4ad74e8e1db3e12dcfc8d469f402707b63aaad909cf4e51e\" returns successfully" Feb 13 16:04:14.575683 containerd[1702]: time="2025-02-13T16:04:14.575654204Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\"" Feb 13 16:04:14.575777 containerd[1702]: time="2025-02-13T16:04:14.575753406Z" level=info msg="TearDown network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" successfully" Feb 13 16:04:14.575777 containerd[1702]: time="2025-02-13T16:04:14.575772307Z" level=info msg="StopPodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" returns successfully" Feb 13 16:04:14.576225 containerd[1702]: time="2025-02-13T16:04:14.576192917Z" level=info msg="RemovePodSandbox for \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\"" Feb 13 16:04:14.576308 containerd[1702]: time="2025-02-13T16:04:14.576224618Z" level=info msg="Forcibly stopping sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\"" Feb 13 16:04:14.576359 containerd[1702]: time="2025-02-13T16:04:14.576301020Z" level=info msg="TearDown network for sandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" successfully" Feb 13 16:04:14.622153 containerd[1702]: time="2025-02-13T16:04:14.622069751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.622153 containerd[1702]: time="2025-02-13T16:04:14.622137753Z" level=info msg="RemovePodSandbox \"c38458fd0fa4a9be3aa504d2b74717e9877b399a228179097b0eaa938bca3ab7\" returns successfully" Feb 13 16:04:14.622747 containerd[1702]: time="2025-02-13T16:04:14.622633365Z" level=info msg="StopPodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\"" Feb 13 16:04:14.622965 containerd[1702]: time="2025-02-13T16:04:14.622761068Z" level=info msg="TearDown network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" successfully" Feb 13 16:04:14.622965 containerd[1702]: time="2025-02-13T16:04:14.622777669Z" level=info msg="StopPodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" returns successfully" Feb 13 16:04:14.623150 containerd[1702]: time="2025-02-13T16:04:14.623070676Z" level=info msg="RemovePodSandbox for \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\"" Feb 13 16:04:14.623150 containerd[1702]: time="2025-02-13T16:04:14.623096377Z" level=info msg="Forcibly stopping sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\"" Feb 13 16:04:14.623241 containerd[1702]: time="2025-02-13T16:04:14.623173479Z" level=info msg="TearDown network for sandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" successfully" Feb 13 16:04:14.631351 containerd[1702]: time="2025-02-13T16:04:14.631229678Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.631351 containerd[1702]: time="2025-02-13T16:04:14.631280079Z" level=info msg="RemovePodSandbox \"72a08923d6eabbf3b8ca1d201f77beab8aaf9b82717bb1af17654b39506d6f50\" returns successfully" Feb 13 16:04:14.631806 containerd[1702]: time="2025-02-13T16:04:14.631695689Z" level=info msg="StopPodSandbox for \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\"" Feb 13 16:04:14.631806 containerd[1702]: time="2025-02-13T16:04:14.631782491Z" level=info msg="TearDown network for sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\" successfully" Feb 13 16:04:14.631806 containerd[1702]: time="2025-02-13T16:04:14.631797292Z" level=info msg="StopPodSandbox for \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\" returns successfully" Feb 13 16:04:14.632326 containerd[1702]: time="2025-02-13T16:04:14.632201702Z" level=info msg="RemovePodSandbox for \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\"" Feb 13 16:04:14.632326 containerd[1702]: time="2025-02-13T16:04:14.632229903Z" level=info msg="Forcibly stopping sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\"" Feb 13 16:04:14.632326 containerd[1702]: time="2025-02-13T16:04:14.632298904Z" level=info msg="TearDown network for sandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\" successfully" Feb 13 16:04:14.645556 containerd[1702]: time="2025-02-13T16:04:14.645066820Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.645556 containerd[1702]: time="2025-02-13T16:04:14.645126021Z" level=info msg="RemovePodSandbox \"dd6748293ae24b0f4b26c3ded426c0ee5b1f7f317036f2e42ac8207144a4901f\" returns successfully" Feb 13 16:04:14.646381 containerd[1702]: time="2025-02-13T16:04:14.646347252Z" level=info msg="StopPodSandbox for \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\"" Feb 13 16:04:14.646481 containerd[1702]: time="2025-02-13T16:04:14.646460254Z" level=info msg="TearDown network for sandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\" successfully" Feb 13 16:04:14.646481 containerd[1702]: time="2025-02-13T16:04:14.646474855Z" level=info msg="StopPodSandbox for \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\" returns successfully" Feb 13 16:04:14.647841 containerd[1702]: time="2025-02-13T16:04:14.647812588Z" level=info msg="RemovePodSandbox for \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\"" Feb 13 16:04:14.647932 containerd[1702]: time="2025-02-13T16:04:14.647845289Z" level=info msg="Forcibly stopping sandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\"" Feb 13 16:04:14.647987 containerd[1702]: time="2025-02-13T16:04:14.647935091Z" level=info msg="TearDown network for sandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\" successfully" Feb 13 16:04:14.661648 containerd[1702]: time="2025-02-13T16:04:14.661597229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.661791 containerd[1702]: time="2025-02-13T16:04:14.661662830Z" level=info msg="RemovePodSandbox \"119afb55cabccc14cb3c4fad8882181a054877c32f6731cc806343c1da907498\" returns successfully" Feb 13 16:04:14.662199 containerd[1702]: time="2025-02-13T16:04:14.662169143Z" level=info msg="StopPodSandbox for \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\"" Feb 13 16:04:14.662313 containerd[1702]: time="2025-02-13T16:04:14.662290146Z" level=info msg="TearDown network for sandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\" successfully" Feb 13 16:04:14.662313 containerd[1702]: time="2025-02-13T16:04:14.662307746Z" level=info msg="StopPodSandbox for \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\" returns successfully" Feb 13 16:04:14.662698 containerd[1702]: time="2025-02-13T16:04:14.662617754Z" level=info msg="RemovePodSandbox for \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\"" Feb 13 16:04:14.662698 containerd[1702]: time="2025-02-13T16:04:14.662660755Z" level=info msg="Forcibly stopping sandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\"" Feb 13 16:04:14.662824 containerd[1702]: time="2025-02-13T16:04:14.662748357Z" level=info msg="TearDown network for sandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\" successfully" Feb 13 16:04:14.676403 containerd[1702]: time="2025-02-13T16:04:14.676332193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:04:14.676746 containerd[1702]: time="2025-02-13T16:04:14.676411095Z" level=info msg="RemovePodSandbox \"9e044812f6ac4fbb2810e03311ea43050d05eeb6d33653ec7bac1c00d2163401\" returns successfully" Feb 13 16:04:15.497107 kubelet[2632]: E0213 16:04:15.497042 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:16.497327 kubelet[2632]: E0213 16:04:16.497273 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:17.498209 kubelet[2632]: E0213 16:04:17.498136 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:18.498960 kubelet[2632]: E0213 16:04:18.498888 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:19.499358 kubelet[2632]: E0213 16:04:19.499288 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:20.500443 kubelet[2632]: E0213 16:04:20.500392 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:21.501425 kubelet[2632]: E0213 16:04:21.501353 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:22.501937 kubelet[2632]: E0213 16:04:22.501875 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:23.503113 kubelet[2632]: E0213 16:04:23.503039 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:23.724193 update_engine[1684]: I20250213 16:04:23.724083 1684 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 16:04:23.724827 
update_engine[1684]: I20250213 16:04:23.724480 1684 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 16:04:23.724993 update_engine[1684]: I20250213 16:04:23.724934 1684 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 16:04:23.740678 update_engine[1684]: E20250213 16:04:23.740613 1684 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 16:04:23.740822 update_engine[1684]: I20250213 16:04:23.740716 1684 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 16:04:24.504060 kubelet[2632]: E0213 16:04:24.503988 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:25.504898 kubelet[2632]: E0213 16:04:25.504826 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:26.505213 kubelet[2632]: E0213 16:04:26.505143 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:27.506062 kubelet[2632]: E0213 16:04:27.505993 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:28.506957 kubelet[2632]: E0213 16:04:28.506887 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:29.507268 kubelet[2632]: E0213 16:04:29.507191 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:30.508249 kubelet[2632]: E0213 16:04:30.508181 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:30.628316 kubelet[2632]: I0213 16:04:30.628272 2632 topology_manager.go:215] "Topology Admit Handler" podUID="34a45dbb-4f4f-4303-9e42-110280c8ea3a" 
podNamespace="default" podName="test-pod-1" Feb 13 16:04:30.634196 systemd[1]: Created slice kubepods-besteffort-pod34a45dbb_4f4f_4303_9e42_110280c8ea3a.slice - libcontainer container kubepods-besteffort-pod34a45dbb_4f4f_4303_9e42_110280c8ea3a.slice. Feb 13 16:04:30.823075 kubelet[2632]: I0213 16:04:30.822916 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a04e5622-77f7-4123-8abc-495dd138951a\" (UniqueName: \"kubernetes.io/nfs/34a45dbb-4f4f-4303-9e42-110280c8ea3a-pvc-a04e5622-77f7-4123-8abc-495dd138951a\") pod \"test-pod-1\" (UID: \"34a45dbb-4f4f-4303-9e42-110280c8ea3a\") " pod="default/test-pod-1" Feb 13 16:04:30.823075 kubelet[2632]: I0213 16:04:30.822990 2632 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mq2w\" (UniqueName: \"kubernetes.io/projected/34a45dbb-4f4f-4303-9e42-110280c8ea3a-kube-api-access-6mq2w\") pod \"test-pod-1\" (UID: \"34a45dbb-4f4f-4303-9e42-110280c8ea3a\") " pod="default/test-pod-1" Feb 13 16:04:31.247566 kernel: FS-Cache: Loaded Feb 13 16:04:31.380689 kernel: RPC: Registered named UNIX socket transport module. Feb 13 16:04:31.380832 kernel: RPC: Registered udp transport module. Feb 13 16:04:31.380855 kernel: RPC: Registered tcp transport module. Feb 13 16:04:31.384622 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 16:04:31.384699 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 16:04:31.508911 kubelet[2632]: E0213 16:04:31.508769 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:04:31.691010 kernel: NFS: Registering the id_resolver key type Feb 13 16:04:31.691163 kernel: Key type id_resolver registered Feb 13 16:04:31.691183 kernel: Key type id_legacy registered Feb 13 16:04:31.857256 nfsidmap[4486]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.1-a-f44757c054' Feb 13 16:04:31.880677 nfsidmap[4487]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.1-a-f44757c054' Feb 13 16:04:32.137643 containerd[1702]: time="2025-02-13T16:04:32.137472211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:34a45dbb-4f4f-4303-9e42-110280c8ea3a,Namespace:default,Attempt:0,}" Feb 13 16:04:32.302770 systemd-networkd[1441]: cali5ec59c6bf6e: Link UP Feb 13 16:04:32.304132 systemd-networkd[1441]: cali5ec59c6bf6e: Gained carrier Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.224 [INFO][4488] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.22-k8s-test--pod--1-eth0 default 34a45dbb-4f4f-4303-9e42-110280c8ea3a 1518 0 2025-02-13 16:03:56 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.8.22 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.22-k8s-test--pod--1-" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.224 [INFO][4488] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.22-k8s-test--pod--1-eth0" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.252 [INFO][4499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" HandleID="k8s-pod-network.6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" Workload="10.200.8.22-k8s-test--pod--1-eth0" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.265 [INFO][4499] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" HandleID="k8s-pod-network.6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" Workload="10.200.8.22-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b70), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.22", "pod":"test-pod-1", "timestamp":"2025-02-13 16:04:32.252456851 +0000 UTC"}, Hostname:"10.200.8.22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.265 [INFO][4499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.265 [INFO][4499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.265 [INFO][4499] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.22' Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.267 [INFO][4499] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" host="10.200.8.22" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.271 [INFO][4499] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.22" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.274 [INFO][4499] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="10.200.8.22" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.276 [INFO][4499] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="10.200.8.22" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.278 [INFO][4499] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="10.200.8.22" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.278 [INFO][4499] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" host="10.200.8.22" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.280 [INFO][4499] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.286 [INFO][4499] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" host="10.200.8.22" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.296 [INFO][4499] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.68/26] block=192.168.83.64/26 
handle="k8s-pod-network.6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" host="10.200.8.22" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.296 [INFO][4499] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.68/26] handle="k8s-pod-network.6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" host="10.200.8.22" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.296 [INFO][4499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.296 [INFO][4499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.68/26] IPv6=[] ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" HandleID="k8s-pod-network.6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" Workload="10.200.8.22-k8s-test--pod--1-eth0" Feb 13 16:04:32.314819 containerd[1702]: 2025-02-13 16:04:32.298 [INFO][4488] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.22-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"34a45dbb-4f4f-4303-9e42-110280c8ea3a", ResourceVersion:"1518", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.200.8.22", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:04:32.317874 containerd[1702]: 2025-02-13 16:04:32.298 [INFO][4488] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.68/32] ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.22-k8s-test--pod--1-eth0"
Feb 13 16:04:32.317874 containerd[1702]: 2025-02-13 16:04:32.298 [INFO][4488] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.22-k8s-test--pod--1-eth0"
Feb 13 16:04:32.317874 containerd[1702]: 2025-02-13 16:04:32.302 [INFO][4488] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.22-k8s-test--pod--1-eth0"
Feb 13 16:04:32.317874 containerd[1702]: 2025-02-13 16:04:32.302 [INFO][4488] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.22-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"34a45dbb-4f4f-4303-9e42-110280c8ea3a", ResourceVersion:"1518", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.22", ContainerID:"6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"3e:59:38:ff:88:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:04:32.317874 containerd[1702]: 2025-02-13 16:04:32.313 [INFO][4488] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.22-k8s-test--pod--1-eth0"
Feb 13 16:04:32.347256 containerd[1702]: time="2025-02-13T16:04:32.347051688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:04:32.347256 containerd[1702]: time="2025-02-13T16:04:32.347132090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:04:32.347256 containerd[1702]: time="2025-02-13T16:04:32.347150990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:04:32.347883 containerd[1702]: time="2025-02-13T16:04:32.347318695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:04:32.373103 systemd[1]: run-containerd-runc-k8s.io-6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d-runc.YG7In3.mount: Deactivated successfully.
Feb 13 16:04:32.383717 systemd[1]: Started cri-containerd-6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d.scope - libcontainer container 6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d.
Feb 13 16:04:32.426361 containerd[1702]: time="2025-02-13T16:04:32.426324946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:34a45dbb-4f4f-4303-9e42-110280c8ea3a,Namespace:default,Attempt:0,} returns sandbox id \"6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d\""
Feb 13 16:04:32.428120 containerd[1702]: time="2025-02-13T16:04:32.428027288Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 16:04:32.509367 kubelet[2632]: E0213 16:04:32.509317 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:32.980699 containerd[1702]: time="2025-02-13T16:04:32.980624338Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:04:32.984409 containerd[1702]: time="2025-02-13T16:04:32.984329030Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 16:04:32.987368 containerd[1702]: time="2025-02-13T16:04:32.987334704Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 559.273915ms"
Feb 13 16:04:32.987368 containerd[1702]: time="2025-02-13T16:04:32.987368405Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 16:04:32.989403 containerd[1702]: time="2025-02-13T16:04:32.989370154Z" level=info msg="CreateContainer within sandbox \"6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 16:04:33.030193 containerd[1702]: time="2025-02-13T16:04:33.030140462Z" level=info msg="CreateContainer within sandbox \"6c5678211501edbd6555a78f79a625a5a02e262d7478d8fef9cd0e6de90bdf5d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5bdeef2049860c0e2e80e3ffc36c6121a7a3c695abcfb9a86c935d3818d4d2c3\""
Feb 13 16:04:33.030843 containerd[1702]: time="2025-02-13T16:04:33.030693975Z" level=info msg="StartContainer for \"5bdeef2049860c0e2e80e3ffc36c6121a7a3c695abcfb9a86c935d3818d4d2c3\""
Feb 13 16:04:33.061741 systemd[1]: Started cri-containerd-5bdeef2049860c0e2e80e3ffc36c6121a7a3c695abcfb9a86c935d3818d4d2c3.scope - libcontainer container 5bdeef2049860c0e2e80e3ffc36c6121a7a3c695abcfb9a86c935d3818d4d2c3.
Feb 13 16:04:33.094624 containerd[1702]: time="2025-02-13T16:04:33.094576153Z" level=info msg="StartContainer for \"5bdeef2049860c0e2e80e3ffc36c6121a7a3c695abcfb9a86c935d3818d4d2c3\" returns successfully"
Feb 13 16:04:33.509698 kubelet[2632]: E0213 16:04:33.509629 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:33.715275 update_engine[1684]: I20250213 16:04:33.715160 1684 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 16:04:33.715908 update_engine[1684]: I20250213 16:04:33.715596 1684 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 16:04:33.716037 update_engine[1684]: I20250213 16:04:33.715996 1684 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 16:04:33.730898 update_engine[1684]: E20250213 16:04:33.730821 1684 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 16:04:33.731048 update_engine[1684]: I20250213 16:04:33.730931 1684 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 16:04:33.866316 kubelet[2632]: I0213 16:04:33.866269 2632 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=37.306243481 podStartE2EDuration="37.866222914s" podCreationTimestamp="2025-02-13 16:03:56 +0000 UTC" firstStartedPulling="2025-02-13 16:04:32.42769608 +0000 UTC m=+78.510217072" lastFinishedPulling="2025-02-13 16:04:32.987675413 +0000 UTC m=+79.070196505" observedRunningTime="2025-02-13 16:04:33.866136212 +0000 UTC m=+79.948657204" watchObservedRunningTime="2025-02-13 16:04:33.866222914 +0000 UTC m=+79.948743906"
Feb 13 16:04:34.338875 systemd-networkd[1441]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 16:04:34.452884 kubelet[2632]: E0213 16:04:34.452830 2632 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:34.510241 kubelet[2632]: E0213 16:04:34.510168 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:35.511057 kubelet[2632]: E0213 16:04:35.510985 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:36.511789 kubelet[2632]: E0213 16:04:36.511719 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:37.512974 kubelet[2632]: E0213 16:04:37.512902 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:38.514073 kubelet[2632]: E0213 16:04:38.514003 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:39.514616 kubelet[2632]: E0213 16:04:39.514521 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:40.515600 kubelet[2632]: E0213 16:04:40.515514 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:41.516688 kubelet[2632]: E0213 16:04:41.516623 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:42.517805 kubelet[2632]: E0213 16:04:42.517656 2632 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:04:42.698976 systemd[1]: run-containerd-runc-k8s.io-15f66ef3d3a1f1fb9ca2e2691bc6ea855aa9f72c27a5cacbbacc6fe154a3dfb0-runc.trxOJv.mount: Deactivated successfully.