Jan 14 13:20:53.079304 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 14 13:20:53.079342 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:20:53.079356 kernel: BIOS-provided physical RAM map:
Jan 14 13:20:53.079367 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:20:53.079376 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 13:20:53.079387 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 13:20:53.079398 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 14 13:20:53.079413 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 13:20:53.079422 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 13:20:53.079432 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 13:20:53.079440 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 13:20:53.079449 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 13:20:53.079456 kernel: NX (Execute Disable) protection: active
Jan 14 13:20:53.079466 kernel: APIC: Static calls initialized
Jan 14 13:20:53.079480 kernel: efi: EFI v2.7 by Microsoft
Jan 14 13:20:53.079488 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018
Jan 14 13:20:53.079496 kernel: random: crng init done
Jan 14 13:20:53.079505 kernel: secureboot: Secure boot disabled
Jan 14 13:20:53.079572 kernel: SMBIOS 3.1.0 present.
Jan 14 13:20:53.079581 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 13:20:53.079592 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 13:20:53.079599 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 13:20:53.079607 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 13:20:53.079616 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 13:20:53.079626 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 13:20:53.079636 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 13:20:53.079643 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:20:53.079651 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:20:53.079658 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 13:20:53.079669 kernel: tsc: Detected 2593.904 MHz processor
Jan 14 13:20:53.079676 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:20:53.079686 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:20:53.079693 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 13:20:53.079706 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:20:53.079713 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 13:20:53.079721 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 13:20:53.079730 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 13:20:53.079738 kernel: Using GB pages for direct mapping
Jan 14 13:20:53.079744 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:20:53.079752 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 13:20:53.079762 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079772 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079780 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 14 13:20:53.079787 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 13:20:53.079795 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079802 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079810 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079819 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079828 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079835 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079843 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079851 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 13:20:53.079861 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 13:20:53.079868 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 13:20:53.079878 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 13:20:53.079887 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 13:20:53.079898 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 13:20:53.079907 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 13:20:53.079915 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 13:20:53.079925 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 13:20:53.079932 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 13:20:53.079942 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 13:20:53.079951 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 13:20:53.079961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 13:20:53.079972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 13:20:53.079982 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 13:20:53.079990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 13:20:53.079998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 13:20:53.080008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 13:20:53.080015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 13:20:53.080026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 13:20:53.080033 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 13:20:53.080043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 13:20:53.080054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 13:20:53.080062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 13:20:53.080072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 13:20:53.080082 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 13:20:53.080091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 13:20:53.080101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 13:20:53.080112 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 13:20:53.080121 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 13:20:53.080130 kernel: Zone ranges:
Jan 14 13:20:53.080142 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:20:53.080151 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 13:20:53.080161 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:20:53.080168 kernel: Movable zone start for each node
Jan 14 13:20:53.080179 kernel: Early memory node ranges
Jan 14 13:20:53.080187 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:20:53.080196 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 13:20:53.080205 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 13:20:53.080212 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:20:53.080225 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 13:20:53.080232 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:20:53.080243 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:20:53.080251 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 13:20:53.080260 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 13:20:53.080269 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 13:20:53.080280 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:20:53.080287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:20:53.080297 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:20:53.080308 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 13:20:53.080317 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 13:20:53.080330 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 13:20:53.080337 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 13:20:53.080347 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:20:53.080356 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 13:20:53.080363 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 13:20:53.080374 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 13:20:53.080381 kernel: pcpu-alloc: [0] 0 1
Jan 14 13:20:53.080394 kernel: Hyper-V: PV spinlocks enabled
Jan 14 13:20:53.080402 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:20:53.080413 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:20:53.080422 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:20:53.080430 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 13:20:53.080440 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:20:53.080447 kernel: Fallback order for Node 0: 0
Jan 14 13:20:53.080457 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 14 13:20:53.080468 kernel: Policy zone: Normal
Jan 14 13:20:53.080484 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:20:53.080495 kernel: software IO TLB: area num 2.
Jan 14 13:20:53.080505 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved)
Jan 14 13:20:53.080523 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:20:53.080534 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 14 13:20:53.080543 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 13:20:53.080552 kernel: Dynamic Preempt: voluntary
Jan 14 13:20:53.080562 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:20:53.080571 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:20:53.080582 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:20:53.080594 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:20:53.080604 kernel: Rude variant of Tasks RCU enabled.
Jan 14 13:20:53.080612 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:20:53.080624 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:20:53.080632 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:20:53.080645 kernel: Using NULL legacy PIC
Jan 14 13:20:53.080656 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 13:20:53.080665 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:20:53.080675 kernel: Console: colour dummy device 80x25
Jan 14 13:20:53.080684 kernel: printk: console [tty1] enabled
Jan 14 13:20:53.080692 kernel: printk: console [ttyS0] enabled
Jan 14 13:20:53.080703 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 13:20:53.080711 kernel: ACPI: Core revision 20230628
Jan 14 13:20:53.080722 kernel: Failed to register legacy timer interrupt
Jan 14 13:20:53.080730 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:20:53.080744 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:20:53.080752 kernel: Hyper-V: Using IPI hypercalls
Jan 14 13:20:53.080763 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 13:20:53.080771 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 13:20:53.080781 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 13:20:53.080791 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 13:20:53.080799 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 13:20:53.080810 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 13:20:53.080818 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Jan 14 13:20:53.080831 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 13:20:53.080839 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 13:20:53.080850 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:20:53.080858 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:20:53.080869 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 13:20:53.080878 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 13:20:53.080888 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 13:20:53.080897 kernel: RETBleed: Vulnerable
Jan 14 13:20:53.080905 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:20:53.080916 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:20:53.080926 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:20:53.080937 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 13:20:53.080945 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:20:53.080956 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:20:53.080964 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:20:53.080974 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 13:20:53.080983 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 13:20:53.080991 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 13:20:53.081001 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 13:20:53.081012 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 14 13:20:53.081020 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 14 13:20:53.081034 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 13:20:53.081042 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 13:20:53.081052 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:20:53.081061 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:20:53.081069 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:20:53.081080 kernel: landlock: Up and running.
Jan 14 13:20:53.081087 kernel: SELinux: Initializing.
Jan 14 13:20:53.081098 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:20:53.081106 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:20:53.081117 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 13:20:53.081126 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:20:53.081139 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:20:53.081148 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:20:53.081158 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 13:20:53.081167 kernel: signal: max sigframe size: 3632
Jan 14 13:20:53.081175 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:20:53.081186 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:20:53.081194 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:20:53.081205 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:20:53.081213 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:20:53.081227 kernel: .... node #0, CPUs: #1
Jan 14 13:20:53.081235 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 13:20:53.081246 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 13:20:53.081254 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:20:53.081264 kernel: smpboot: Max logical packages: 1
Jan 14 13:20:53.081273 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Jan 14 13:20:53.081282 kernel: devtmpfs: initialized
Jan 14 13:20:53.081293 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:20:53.081303 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:20:53.081314 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:20:53.081322 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:20:53.081333 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:20:53.081342 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:20:53.081350 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:20:53.081361 kernel: audit: type=2000 audit(1736860851.028:1): state=initialized audit_enabled=0 res=1
Jan 14 13:20:53.081372 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:20:53.081380 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:20:53.081394 kernel: cpuidle: using governor menu
Jan 14 13:20:53.081402 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:20:53.081414 kernel: dca service started, version 1.12.1
Jan 14 13:20:53.081422 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:20:53.081432 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:20:53.081441 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:20:53.081449 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:20:53.081460 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:20:53.081468 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:20:53.081481 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:20:53.081489 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:20:53.081500 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:20:53.081510 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:20:53.081527 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:20:53.081540 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:20:53.081561 kernel: ACPI: Interpreter enabled
Jan 14 13:20:53.081580 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:20:53.081597 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:20:53.081623 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:20:53.081640 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:20:53.081659 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:20:53.081674 kernel: iommu: Default domain type: Translated
Jan 14 13:20:53.081689 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:20:53.081704 kernel: efivars: Registered efivars operations
Jan 14 13:20:53.081718 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:20:53.081734 kernel: PCI: System does not support PCI
Jan 14 13:20:53.081751 kernel: vgaarb: loaded
Jan 14 13:20:53.081772 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:20:53.081790 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:20:53.081809 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:20:53.081826 kernel: pnp: PnP ACPI init
Jan 14 13:20:53.081843 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:20:53.081858 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:20:53.081874 kernel: NET: Registered PF_INET protocol family
Jan 14 13:20:53.081890 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:20:53.081909 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:20:53.081928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:20:53.081942 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:20:53.081959 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:20:53.081975 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:20:53.081992 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:20:53.082008 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:20:53.082024 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:20:53.082041 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:20:53.082057 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:20:53.082078 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:20:53.082095 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 13:20:53.082113 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:20:53.082133 kernel: Initialise system trusted keyrings
Jan 14 13:20:53.082148 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:20:53.082166 kernel: Key type asymmetric registered
Jan 14 13:20:53.082182 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:20:53.082199 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:20:53.082214 kernel: io scheduler mq-deadline registered
Jan 14 13:20:53.082235 kernel: io scheduler kyber registered
Jan 14 13:20:53.082252 kernel: io scheduler bfq registered
Jan 14 13:20:53.082267 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:20:53.082280 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:20:53.082293 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:20:53.082306 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:20:53.082328 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:20:53.082588 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:20:53.082712 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:20:52 UTC (1736860852)
Jan 14 13:20:53.082803 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:20:53.082816 kernel: intel_pstate: CPU model not supported
Jan 14 13:20:53.082825 kernel: efifb: probing for efifb
Jan 14 13:20:53.082835 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:20:53.082844 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:20:53.082852 kernel: efifb: scrolling: redraw
Jan 14 13:20:53.082863 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:20:53.082871 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:20:53.082885 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:20:53.082893 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:20:53.082904 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:20:53.082913 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:20:53.082923 kernel: Segment Routing with IPv6
Jan 14 13:20:53.082932 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:20:53.082941 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:20:53.082950 kernel: Key type dns_resolver registered
Jan 14 13:20:53.082959 kernel: IPI shorthand broadcast: enabled
Jan 14 13:20:53.082972 kernel: sched_clock: Marking stable (876003700, 59809400)->(1185346000, -249532900)
Jan 14 13:20:53.082980 kernel: registered taskstats version 1
Jan 14 13:20:53.082991 kernel: Loading compiled-in X.509 certificates
Jan 14 13:20:53.083001 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 14 13:20:53.083010 kernel: Key type .fscrypt registered
Jan 14 13:20:53.083021 kernel: Key type fscrypt-provisioning registered
Jan 14 13:20:53.083032 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:20:53.083042 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:20:53.083055 kernel: ima: No architecture policies found
Jan 14 13:20:53.083064 kernel: clk: Disabling unused clocks
Jan 14 13:20:53.083073 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 14 13:20:53.083083 kernel: Write protecting the kernel read-only data: 36864k
Jan 14 13:20:53.083091 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 14 13:20:53.083102 kernel: Run /init as init process
Jan 14 13:20:53.083110 kernel: with arguments:
Jan 14 13:20:53.083120 kernel: /init
Jan 14 13:20:53.083128 kernel: with environment:
Jan 14 13:20:53.083138 kernel: HOME=/
Jan 14 13:20:53.083149 kernel: TERM=linux
Jan 14 13:20:53.083163 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:20:53.083173 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:20:53.083186 systemd[1]: Detected virtualization microsoft.
Jan 14 13:20:53.083195 systemd[1]: Detected architecture x86-64.
Jan 14 13:20:53.083206 systemd[1]: Running in initrd.
Jan 14 13:20:53.083214 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:20:53.083227 systemd[1]: Hostname set to .
Jan 14 13:20:53.083236 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:20:53.083247 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:20:53.083256 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:20:53.083266 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:20:53.083276 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:20:53.083287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:20:53.083296 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:20:53.083309 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:20:53.083320 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:20:53.083330 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:20:53.083340 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:20:53.083349 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:20:53.083359 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:20:53.083368 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:20:53.083381 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:20:53.083393 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:20:53.083404 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:20:53.083415 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:20:53.083425 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:20:53.083435 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:20:53.083446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:20:53.083455 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:20:53.083466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:20:53.083477 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:20:53.083488 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:20:53.083497 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:20:53.083508 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:20:53.083530 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:20:53.083538 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:20:53.083550 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:20:53.083558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:20:53.083589 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:20:53.083612 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:20:53.083622 systemd-journald[177]: Journal started
Jan 14 13:20:53.083645 systemd-journald[177]: Runtime Journal (/run/log/journal/1a52b557a08a49d087c328e9f30f3546) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:20:53.085212 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:20:53.094527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:20:53.103529 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:20:53.110640 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:20:53.124532 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:20:53.126256 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:20:53.136581 kernel: Bridge firewalling registered
Jan 14 13:20:53.136662 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:20:53.139778 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:20:53.145267 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:20:53.148300 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:20:53.151475 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:20:53.163386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:20:53.176665 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:20:53.180655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:20:53.181536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:20:53.201830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:20:53.206107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:20:53.219686 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:20:53.224987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:20:53.232693 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:20:53.253103 dracut-cmdline[214]: dracut-dracut-053 Jan 14 13:20:53.256859 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:20:53.297909 systemd-resolved[212]: Positive Trust Anchors: Jan 14 13:20:53.297925 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:20:53.297984 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:20:53.301603 systemd-resolved[212]: Defaulting to hostname 'linux'. Jan 14 13:20:53.302825 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:20:53.305726 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:20:53.337967 kernel: SCSI subsystem initialized Jan 14 13:20:53.348534 kernel: Loading iSCSI transport class v2.0-870. 
Jan 14 13:20:53.358542 kernel: iscsi: registered transport (tcp) Jan 14 13:20:53.379746 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:20:53.379835 kernel: QLogic iSCSI HBA Driver Jan 14 13:20:53.415195 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:20:53.423700 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:20:53.450798 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:20:53.450878 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:20:53.455537 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 13:20:53.493548 kernel: raid6: avx512x4 gen() 17978 MB/s Jan 14 13:20:53.515536 kernel: raid6: avx512x2 gen() 17975 MB/s Jan 14 13:20:53.534528 kernel: raid6: avx512x1 gen() 18055 MB/s Jan 14 13:20:53.553531 kernel: raid6: avx2x4 gen() 17838 MB/s Jan 14 13:20:53.572533 kernel: raid6: avx2x2 gen() 18013 MB/s Jan 14 13:20:53.592924 kernel: raid6: avx2x1 gen() 13832 MB/s Jan 14 13:20:53.592963 kernel: raid6: using algorithm avx512x1 gen() 18055 MB/s Jan 14 13:20:53.614906 kernel: raid6: .... xor() 25847 MB/s, rmw enabled Jan 14 13:20:53.614950 kernel: raid6: using avx512x2 recovery algorithm Jan 14 13:20:53.636536 kernel: xor: automatically using best checksumming function avx Jan 14 13:20:53.782539 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:20:53.792420 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:20:53.802679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:20:53.815969 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 14 13:20:53.820425 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:20:53.832533 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 14 13:20:53.848771 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Jan 14 13:20:53.876792 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:20:53.886652 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:20:53.926126 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:20:53.942487 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:20:53.973817 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:20:53.978046 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:20:53.986427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:20:53.992328 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:20:54.003721 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:20:54.007916 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:20:54.033137 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:20:54.042483 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 13:20:54.042524 kernel: AES CTR mode by8 optimization enabled Jan 14 13:20:54.053085 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:20:54.058282 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 13:20:54.057162 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:20:54.064803 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:20:54.067592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:20:54.067859 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:20:54.070682 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 14 13:20:54.086167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:20:54.098084 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:20:54.100717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:20:54.112652 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:20:54.112711 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 13:20:54.118410 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:20:54.117851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:20:54.126548 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:20:54.133552 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 14 13:20:54.140533 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:20:54.148680 kernel: PTP clock support registered Jan 14 13:20:54.153118 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:20:54.158533 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 14 13:20:54.159578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:20:54.170359 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:20:54.182629 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:20:54.172837 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 14 13:20:54.189427 kernel: scsi host0: storvsc_host_t Jan 14 13:20:54.189492 kernel: scsi host1: storvsc_host_t Jan 14 13:20:54.193533 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:20:54.193561 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:20:54.193590 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:20:54.200208 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:20:54.300647 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:20:54.300689 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:20:54.300810 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:20:54.300686 systemd-resolved[212]: Clock change detected. Flushing caches. Jan 14 13:20:54.324531 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 13:20:54.328749 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:20:54.328774 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:20:54.324987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:20:54.343301 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:20:54.358134 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:20:54.358325 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 13:20:54.358482 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:20:54.358660 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:20:54.358816 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:20:54.358836 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 13:20:54.876618 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 13:20:54.940649 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (447) Jan 14 13:20:54.955253 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jan 14 13:20:54.979629 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (450) Jan 14 13:20:54.992720 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 13:20:55.001209 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:20:55.015674 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 13:20:55.031769 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:20:55.047624 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:20:55.054631 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:20:56.061638 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:20:56.062221 disk-uuid[592]: The operation has completed successfully. Jan 14 13:20:56.145876 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:20:56.145989 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:20:56.161150 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 14 13:20:56.169318 sh[679]: Success Jan 14 13:20:56.201239 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 13:20:56.394962 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:20:56.405727 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 14 13:20:56.410635 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 14 13:20:56.427408 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 14 13:20:56.427471 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:20:56.430870 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:20:56.433466 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:20:56.435769 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:20:56.794632 kernel: hv_netvsc 7c1e522f-37f7-7c1e-522f-37f77c1e522f eth0: VF slot 1 added Jan 14 13:20:56.796530 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:20:56.799872 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:20:56.812430 kernel: hv_vmbus: registering driver hv_pci Jan 14 13:20:56.812624 kernel: hv_pci a03696ac-5c50-43c0-be44-65b2f0002e34: PCI VMBus probing: Using version 0x10004 Jan 14 13:20:56.871944 kernel: hv_pci a03696ac-5c50-43c0-be44-65b2f0002e34: PCI host bridge to bus 5c50:00 Jan 14 13:20:56.872255 kernel: pci_bus 5c50:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 13:20:56.872446 kernel: pci_bus 5c50:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:20:56.872598 kernel: pci 5c50:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 13:20:56.873183 kernel: pci 5c50:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:20:56.873352 kernel: pci 5c50:00:02.0: enabling Extended Tags Jan 14 13:20:56.873519 kernel: pci 5c50:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5c50:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 13:20:56.873722 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:20:56.873743 kernel: BTRFS info (device sda6): using crc32c 
(crc32c-intel) checksum algorithm Jan 14 13:20:56.873762 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:20:56.873781 kernel: pci_bus 5c50:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:20:56.873941 kernel: pci 5c50:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:20:56.817245 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:20:56.830816 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:20:56.906630 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:20:56.924629 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:20:56.924837 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 14 13:20:56.937628 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:20:56.949219 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:20:56.972911 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:20:56.988463 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:20:57.011109 systemd-networkd[863]: lo: Link UP Jan 14 13:20:57.013194 systemd-networkd[863]: lo: Gained carrier Jan 14 13:20:57.018075 systemd-networkd[863]: Enumeration completed Jan 14 13:20:57.020347 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:20:57.023254 systemd[1]: Reached target network.target - Network. Jan 14 13:20:57.030360 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:20:57.030365 systemd-networkd[863]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 13:20:57.039962 systemd-networkd[863]: eth0: Link UP Jan 14 13:20:57.042145 systemd-networkd[863]: eth0: Gained carrier Jan 14 13:20:57.042159 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:20:57.109944 kernel: mlx5_core 5c50:00:02.0: enabling device (0000 -> 0002) Jan 14 13:20:57.343017 kernel: mlx5_core 5c50:00:02.0: firmware version: 14.30.5000 Jan 14 13:20:57.343233 kernel: hv_netvsc 7c1e522f-37f7-7c1e-522f-37f77c1e522f eth0: VF registering: eth1 Jan 14 13:20:57.343396 kernel: mlx5_core 5c50:00:02.0 eth1: joined to eth0 Jan 14 13:20:57.343585 kernel: mlx5_core 5c50:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 13:20:57.129661 systemd-networkd[863]: eth0: DHCPv4 address 10.200.4.31/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:20:57.353838 kernel: mlx5_core 5c50:00:02.0 enP23632s1: renamed from eth1 Jan 14 13:20:57.359862 systemd-networkd[863]: eth1: Interface name change detected, renamed to enP23632s1. 
Jan 14 13:20:57.488380 systemd-networkd[863]: enP23632s1: Link UP Jan 14 13:20:57.490745 kernel: mlx5_core 5c50:00:02.0 enP23632s1: Link up Jan 14 13:20:57.517636 kernel: hv_netvsc 7c1e522f-37f7-7c1e-522f-37f77c1e522f eth0: Data path switched to VF: enP23632s1 Jan 14 13:20:57.958253 ignition[842]: Ignition 2.20.0 Jan 14 13:20:57.958268 ignition[842]: Stage: fetch-offline Jan 14 13:20:57.958313 ignition[842]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:20:57.958323 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:20:57.958436 ignition[842]: parsed url from cmdline: "" Jan 14 13:20:57.958441 ignition[842]: no config URL provided Jan 14 13:20:57.958447 ignition[842]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:20:57.958458 ignition[842]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:20:57.958464 ignition[842]: failed to fetch config: resource requires networking Jan 14 13:20:57.961714 ignition[842]: Ignition finished successfully Jan 14 13:20:57.978034 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:20:57.987825 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:20:58.004311 ignition[880]: Ignition 2.20.0 Jan 14 13:20:58.004325 ignition[880]: Stage: fetch Jan 14 13:20:58.004553 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:20:58.004563 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:20:58.006115 ignition[880]: parsed url from cmdline: "" Jan 14 13:20:58.006121 ignition[880]: no config URL provided Jan 14 13:20:58.006129 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:20:58.006143 ignition[880]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:20:58.006177 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:20:58.076968 systemd-networkd[863]: enP23632s1: Gained carrier Jan 14 13:20:58.094461 ignition[880]: GET result: OK Jan 14 13:20:58.094528 ignition[880]: config has been read from IMDS userdata Jan 14 13:20:58.094550 ignition[880]: parsing config with SHA512: ea33efd18392494905465ebada64e858b48a7d8086d4931fddcc4a2d308cb584653584d28681a3ab464f838175c0436d66e7dd23f4eacc963a174df747476faf Jan 14 13:20:58.099764 unknown[880]: fetched base config from "system" Jan 14 13:20:58.099780 unknown[880]: fetched base config from "system" Jan 14 13:20:58.100177 ignition[880]: fetch: fetch complete Jan 14 13:20:58.099793 unknown[880]: fetched user config from "azure" Jan 14 13:20:58.100184 ignition[880]: fetch: fetch passed Jan 14 13:20:58.104372 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:20:58.100239 ignition[880]: Ignition finished successfully Jan 14 13:20:58.123804 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 14 13:20:58.139026 ignition[887]: Ignition 2.20.0 Jan 14 13:20:58.139037 ignition[887]: Stage: kargs Jan 14 13:20:58.139267 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:20:58.139281 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:20:58.147192 ignition[887]: kargs: kargs passed Jan 14 13:20:58.147249 ignition[887]: Ignition finished successfully Jan 14 13:20:58.150114 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 13:20:58.162795 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:20:58.174596 ignition[893]: Ignition 2.20.0 Jan 14 13:20:58.174624 ignition[893]: Stage: disks Jan 14 13:20:58.174851 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:20:58.174869 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:20:58.175640 ignition[893]: disks: disks passed Jan 14 13:20:58.175682 ignition[893]: Ignition finished successfully Jan 14 13:20:58.186821 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:20:58.189402 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:20:58.194109 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:20:58.197080 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:20:58.202098 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:20:58.204587 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:20:58.222841 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:20:58.288915 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:20:58.294886 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:20:58.309058 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 14 13:20:58.397685 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 14 13:20:58.398277 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 13:20:58.401192 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:20:58.441726 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:20:58.446831 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:20:58.455821 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 13:20:58.457082 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (912) Jan 14 13:20:58.462588 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:20:58.467623 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:20:58.473362 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:20:58.473386 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:20:58.474373 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:20:58.487256 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 13:20:58.491794 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:20:58.495229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:20:58.518786 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 14 13:20:58.716750 systemd-networkd[863]: eth0: Gained IPv6LL Jan 14 13:20:59.147382 coreos-metadata[914]: Jan 14 13:20:59.147 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:20:59.151456 coreos-metadata[914]: Jan 14 13:20:59.150 INFO Fetch successful Jan 14 13:20:59.151456 coreos-metadata[914]: Jan 14 13:20:59.150 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:20:59.158732 coreos-metadata[914]: Jan 14 13:20:59.157 INFO Fetch successful Jan 14 13:20:59.174317 coreos-metadata[914]: Jan 14 13:20:59.174 INFO wrote hostname ci-4152.2.0-a-950c255954 to /sysroot/etc/hostname Jan 14 13:20:59.179280 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:20:59.189584 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 13:20:59.213643 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory Jan 14 13:20:59.221596 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 13:20:59.229273 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 13:20:59.231293 systemd-networkd[863]: enP23632s1: Gained IPv6LL Jan 14 13:21:00.170936 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:21:00.178715 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:21:00.186756 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:21:00.194182 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:00.197677 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 14 13:21:00.222299 ignition[1035]: INFO : Ignition 2.20.0 Jan 14 13:21:00.222299 ignition[1035]: INFO : Stage: mount Jan 14 13:21:00.232099 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:00.232099 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:00.232099 ignition[1035]: INFO : mount: mount passed Jan 14 13:21:00.232099 ignition[1035]: INFO : Ignition finished successfully Jan 14 13:21:00.223878 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 13:21:00.228106 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:21:00.247859 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:21:00.255563 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:21:00.280022 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1048) Jan 14 13:21:00.280070 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:00.283232 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:21:00.285647 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:21:00.290674 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:21:00.292110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 13:21:00.312011 ignition[1064]: INFO : Ignition 2.20.0 Jan 14 13:21:00.312011 ignition[1064]: INFO : Stage: files Jan 14 13:21:00.316086 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:00.316086 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:00.316086 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:21:00.328180 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:21:00.328180 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:21:00.466504 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:21:00.470769 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:21:00.470769 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:21:00.467160 unknown[1064]: wrote ssh authorized keys file for user: core Jan 14 13:21:00.482967 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:21:00.487260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 13:21:00.517844 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:21:00.523260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:21:00.523260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:21:00.523260 ignition[1064]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:21:00.523260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:21:00.523260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 14 13:21:01.059155 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 14 13:21:01.246847 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:21:01.252446 ignition[1064]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:21:01.256818 ignition[1064]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:21:01.261029 ignition[1064]: INFO : files: files passed Jan 14 13:21:01.261029 ignition[1064]: INFO : Ignition finished successfully Jan 14 13:21:01.263912 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:21:01.274769 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:21:01.282793 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 13:21:01.286847 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 13:21:01.286934 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 14 13:21:01.308702 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:21:01.308702 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:21:01.320433 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:21:01.314099 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:21:01.321054 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:21:01.335895 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:21:01.358727 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:21:01.358840 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:21:01.367298 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:21:01.372259 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 13:21:01.377259 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:21:01.384779 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:21:01.399012 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:21:01.413812 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:21:01.425784 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:21:01.431721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:21:01.431922 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:21:01.432299 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 14 13:21:01.432413 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:21:01.433586 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:21:01.434419 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:21:01.434824 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:21:01.435227 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:21:01.435647 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:21:01.436056 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:21:01.436470 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:21:01.436858 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:21:01.437289 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:21:01.437698 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:21:01.438182 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 13:21:01.438312 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:21:01.440678 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:21:01.441116 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:21:01.441532 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 13:21:01.451516 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:21:01.480875 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:21:01.486045 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:21:01.529420 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 14 13:21:01.529655 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:21:01.535577 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:21:01.535758 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:21:01.540860 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 14 13:21:01.541023 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:21:01.558858 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:21:01.566910 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:21:01.569733 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:21:01.581730 ignition[1117]: INFO : Ignition 2.20.0
Jan 14 13:21:01.581730 ignition[1117]: INFO : Stage: umount
Jan 14 13:21:01.581730 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:01.581730 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:01.581730 ignition[1117]: INFO : umount: umount passed
Jan 14 13:21:01.581730 ignition[1117]: INFO : Ignition finished successfully
Jan 14 13:21:01.569921 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:21:01.573138 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:21:01.573307 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:21:01.589442 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:21:01.589532 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:21:01.593953 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:21:01.594038 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:21:01.599992 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:21:01.600047 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:21:01.605032 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 13:21:01.605086 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 13:21:01.607495 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 13:21:01.607544 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 13:21:01.612409 systemd[1]: Stopped target network.target - Network.
Jan 14 13:21:01.639576 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:21:01.639674 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:21:01.642590 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:21:01.644939 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:21:01.652304 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:21:01.657625 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:21:01.665416 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:21:01.667736 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:21:01.667785 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:21:01.671944 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:21:01.671991 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:21:01.674369 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 13:21:01.674432 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 13:21:01.679935 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 13:21:01.682111 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 13:21:01.697819 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 13:21:01.702652 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 13:21:01.708681 systemd-networkd[863]: eth0: DHCPv6 lease lost
Jan 14 13:21:01.708778 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 13:21:01.711862 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 13:21:01.711970 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 13:21:01.719330 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 13:21:01.719407 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:21:01.732748 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 13:21:01.735036 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 13:21:01.735090 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:21:01.738350 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:21:01.741379 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 13:21:01.741475 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 13:21:01.749300 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:21:01.749404 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:21:01.768715 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 13:21:01.768778 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:21:01.772023 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:21:01.772063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:21:01.782766 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 13:21:01.782904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:21:01.790181 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 13:21:01.790231 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:21:01.793556 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 13:21:01.793602 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:21:01.808802 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 13:21:01.808873 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:21:01.815809 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 13:21:01.815874 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:21:01.822930 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:21:01.822999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:21:01.837719 kernel: hv_netvsc 7c1e522f-37f7-7c1e-522f-37f77c1e522f eth0: Data path switched from VF: enP23632s1
Jan 14 13:21:01.837770 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 13:21:01.840336 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 13:21:01.840401 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:21:01.843355 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 14 13:21:01.843412 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:21:01.851826 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:21:01.851874 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:21:01.854947 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:21:01.857958 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:21:01.877034 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 13:21:01.877154 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 13:21:01.885429 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 13:21:01.885554 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 13:21:02.467724 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 13:21:02.467886 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 13:21:02.471177 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 13:21:02.486240 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 13:21:02.486317 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 13:21:02.499768 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 13:21:02.507803 systemd[1]: Switching root.
Jan 14 13:21:02.596587 systemd-journald[177]: Journal stopped
Jan 14 13:20:53.079505 kernel: secureboot: Secure boot disabled
Jan 14 13:20:53.079572 kernel: SMBIOS 3.1.0 present.
Jan 14 13:20:53.079581 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 13:20:53.079592 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 13:20:53.079599 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 13:20:53.079607 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 13:20:53.079616 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 13:20:53.079626 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 13:20:53.079636 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 13:20:53.079643 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:20:53.079651 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:20:53.079658 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 13:20:53.079669 kernel: tsc: Detected 2593.904 MHz processor
Jan 14 13:20:53.079676 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:20:53.079686 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:20:53.079693 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 13:20:53.079706 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:20:53.079713 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 13:20:53.079721 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 13:20:53.079730 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 13:20:53.079738 kernel: Using GB pages for direct mapping
Jan 14 13:20:53.079744 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:20:53.079752 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 13:20:53.079762 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079772 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079780 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 14 13:20:53.079787 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 13:20:53.079795 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079802 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079810 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079819 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079828 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079835 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079843 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:20:53.079851 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 13:20:53.079861 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 13:20:53.079868 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 13:20:53.079878 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 13:20:53.079887 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 13:20:53.079898 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 13:20:53.079907 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 13:20:53.079915 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 13:20:53.079925 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 13:20:53.079932 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 13:20:53.079942 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 13:20:53.079951 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 13:20:53.079961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 13:20:53.079972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 13:20:53.079982 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 13:20:53.079990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 13:20:53.079998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 13:20:53.080008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 13:20:53.080015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 13:20:53.080026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 13:20:53.080033 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 13:20:53.080043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 13:20:53.080054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 13:20:53.080062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 13:20:53.080072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 13:20:53.080082 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 13:20:53.080091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 13:20:53.080101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 13:20:53.080112 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 13:20:53.080121 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 13:20:53.080130 kernel: Zone ranges:
Jan 14 13:20:53.080142 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:20:53.080151 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 13:20:53.080161 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:20:53.080168 kernel: Movable zone start for each node
Jan 14 13:20:53.080179 kernel: Early memory node ranges
Jan 14 13:20:53.080187 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:20:53.080196 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 13:20:53.080205 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 13:20:53.080212 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:20:53.080225 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 13:20:53.080232 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:20:53.080243 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:20:53.080251 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 13:20:53.080260 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 13:20:53.080269 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 13:20:53.080280 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:20:53.080287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:20:53.080297 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:20:53.080308 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 13:20:53.080317 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 13:20:53.080330 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 13:20:53.080337 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 13:20:53.080347 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:20:53.080356 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 13:20:53.080363 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 13:20:53.080374 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 13:20:53.080381 kernel: pcpu-alloc: [0] 0 1
Jan 14 13:20:53.080394 kernel: Hyper-V: PV spinlocks enabled
Jan 14 13:20:53.080402 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:20:53.080413 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:20:53.080422 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:20:53.080430 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 13:20:53.080440 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:20:53.080447 kernel: Fallback order for Node 0: 0
Jan 14 13:20:53.080457 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 14 13:20:53.080468 kernel: Policy zone: Normal
Jan 14 13:20:53.080484 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:20:53.080495 kernel: software IO TLB: area num 2.
Jan 14 13:20:53.080505 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved)
Jan 14 13:20:53.080523 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:20:53.080534 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 14 13:20:53.080543 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 13:20:53.080552 kernel: Dynamic Preempt: voluntary
Jan 14 13:20:53.080562 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:20:53.080571 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:20:53.080582 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:20:53.080594 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:20:53.080604 kernel: Rude variant of Tasks RCU enabled.
Jan 14 13:20:53.080612 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:20:53.080624 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:20:53.080632 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:20:53.080645 kernel: Using NULL legacy PIC
Jan 14 13:20:53.080656 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 13:20:53.080665 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:20:53.080675 kernel: Console: colour dummy device 80x25
Jan 14 13:20:53.080684 kernel: printk: console [tty1] enabled
Jan 14 13:20:53.080692 kernel: printk: console [ttyS0] enabled
Jan 14 13:20:53.080703 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 13:20:53.080711 kernel: ACPI: Core revision 20230628
Jan 14 13:20:53.080722 kernel: Failed to register legacy timer interrupt
Jan 14 13:20:53.080730 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:20:53.080744 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:20:53.080752 kernel: Hyper-V: Using IPI hypercalls
Jan 14 13:20:53.080763 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 13:20:53.080771 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 13:20:53.080781 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 13:20:53.080791 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 13:20:53.080799 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 13:20:53.080810 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 13:20:53.080818 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Jan 14 13:20:53.080831 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 13:20:53.080839 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 13:20:53.080850 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:20:53.080858 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:20:53.080869 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 13:20:53.080878 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 13:20:53.080888 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 13:20:53.080897 kernel: RETBleed: Vulnerable
Jan 14 13:20:53.080905 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:20:53.080916 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:20:53.080926 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:20:53.080937 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 13:20:53.080945 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:20:53.080956 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:20:53.080964 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:20:53.080974 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 13:20:53.080983 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 13:20:53.080991 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 13:20:53.081001 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 13:20:53.081012 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 14 13:20:53.081020 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 14 13:20:53.081034 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 13:20:53.081042 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 13:20:53.081052 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:20:53.081061 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:20:53.081069 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:20:53.081080 kernel: landlock: Up and running.
Jan 14 13:20:53.081087 kernel: SELinux: Initializing.
Jan 14 13:20:53.081098 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:20:53.081106 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:20:53.081117 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 13:20:53.081126 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:20:53.081139 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:20:53.081148 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:20:53.081158 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 13:20:53.081167 kernel: signal: max sigframe size: 3632
Jan 14 13:20:53.081175 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:20:53.081186 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:20:53.081194 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:20:53.081205 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:20:53.081213 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:20:53.081227 kernel: .... node #0, CPUs: #1
Jan 14 13:20:53.081235 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 13:20:53.081246 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 13:20:53.081254 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:20:53.081264 kernel: smpboot: Max logical packages: 1
Jan 14 13:20:53.081273 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Jan 14 13:20:53.081282 kernel: devtmpfs: initialized
Jan 14 13:20:53.081293 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:20:53.081303 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:20:53.081314 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:20:53.081322 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:20:53.081333 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:20:53.081342 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:20:53.081350 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:20:53.081361 kernel: audit: type=2000 audit(1736860851.028:1): state=initialized audit_enabled=0 res=1
Jan 14 13:20:53.081372 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:20:53.081380 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:20:53.081394 kernel: cpuidle: using governor menu
Jan 14 13:20:53.081402 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:20:53.081414 kernel: dca service started, version 1.12.1
Jan 14 13:20:53.081422 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:20:53.081432 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:20:53.081441 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:20:53.081449 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:20:53.081460 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:20:53.081468 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:20:53.081481 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:20:53.081489 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:20:53.081500 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:20:53.081510 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:20:53.081527 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:20:53.081540 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:20:53.081561 kernel: ACPI: Interpreter enabled
Jan 14 13:20:53.081580 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:20:53.081597 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:20:53.081623 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:20:53.081640 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:20:53.081659 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:20:53.081674 kernel: iommu: Default domain type: Translated
Jan 14 13:20:53.081689 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:20:53.081704 kernel: efivars: Registered efivars operations
Jan 14 13:20:53.081718 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:20:53.081734 kernel: PCI: System does not support PCI
Jan 14 13:20:53.081751 kernel: vgaarb: loaded
Jan 14 13:20:53.081772 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:20:53.081790 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:20:53.081809 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:20:53.081826 kernel: pnp: PnP ACPI init
Jan 14 13:20:53.081843 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:20:53.081858 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:20:53.081874 kernel: NET: Registered PF_INET protocol family
Jan 14 13:20:53.081890 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:20:53.081909 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:20:53.081928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:20:53.081942 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:20:53.081959 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:20:53.081975 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:20:53.081992 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:20:53.082008 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:20:53.082024 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:20:53.082041 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:20:53.082057 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:20:53.082078 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:20:53.082095 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 13:20:53.082113 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:20:53.082133 kernel: Initialise system trusted keyrings
Jan 14 13:20:53.082148 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:20:53.082166 kernel: Key type asymmetric registered
Jan 14 13:20:53.082182 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:20:53.082199 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:20:53.082214 kernel: io scheduler mq-deadline registered
Jan 14 13:20:53.082235 kernel: io scheduler kyber registered
Jan 14 13:20:53.082252 kernel: io scheduler bfq registered
Jan 14 13:20:53.082267 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:20:53.082280 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:20:53.082293 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:20:53.082306 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:20:53.082328 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:20:53.082588 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:20:53.082712 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:20:52 UTC (1736860852)
Jan 14 13:20:53.082803 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:20:53.082816 kernel: intel_pstate: CPU model not supported
Jan 14 13:20:53.082825 kernel: efifb: probing for efifb
Jan 14 13:20:53.082835 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:20:53.082844 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:20:53.082852 kernel: efifb: scrolling: redraw
Jan 14 13:20:53.082863 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:20:53.082871 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:20:53.082885 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:20:53.082893 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:20:53.082904 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:20:53.082913 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:20:53.082923 kernel: Segment Routing with IPv6
Jan 14 13:20:53.082932 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:20:53.082941 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:20:53.082950 kernel: Key type dns_resolver registered
Jan 14 13:20:53.082959 kernel: IPI shorthand broadcast: enabled
Jan 14 13:20:53.082972 kernel:
sched_clock: Marking stable (876003700, 59809400)->(1185346000, -249532900) Jan 14 13:20:53.082980 kernel: registered taskstats version 1 Jan 14 13:20:53.082991 kernel: Loading compiled-in X.509 certificates Jan 14 13:20:53.083001 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 14 13:20:53.083010 kernel: Key type .fscrypt registered Jan 14 13:20:53.083021 kernel: Key type fscrypt-provisioning registered Jan 14 13:20:53.083032 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 13:20:53.083042 kernel: ima: Allocated hash algorithm: sha1 Jan 14 13:20:53.083055 kernel: ima: No architecture policies found Jan 14 13:20:53.083064 kernel: clk: Disabling unused clocks Jan 14 13:20:53.083073 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 14 13:20:53.083083 kernel: Write protecting the kernel read-only data: 36864k Jan 14 13:20:53.083091 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 14 13:20:53.083102 kernel: Run /init as init process Jan 14 13:20:53.083110 kernel: with arguments: Jan 14 13:20:53.083120 kernel: /init Jan 14 13:20:53.083128 kernel: with environment: Jan 14 13:20:53.083138 kernel: HOME=/ Jan 14 13:20:53.083149 kernel: TERM=linux Jan 14 13:20:53.083163 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 14 13:20:53.083173 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:20:53.083186 systemd[1]: Detected virtualization microsoft. Jan 14 13:20:53.083195 systemd[1]: Detected architecture x86-64. Jan 14 13:20:53.083206 systemd[1]: Running in initrd. Jan 14 13:20:53.083214 systemd[1]: No hostname configured, using default hostname. 
Jan 14 13:20:53.083227 systemd[1]: Hostname set to .
Jan 14 13:20:53.083236 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:20:53.083247 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:20:53.083256 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:20:53.083266 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:20:53.083276 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:20:53.083287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:20:53.083296 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:20:53.083309 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:20:53.083320 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:20:53.083330 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:20:53.083340 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:20:53.083349 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:20:53.083359 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:20:53.083368 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:20:53.083381 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:20:53.083393 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:20:53.083404 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:20:53.083415 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:20:53.083425 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:20:53.083435 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:20:53.083446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:20:53.083455 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:20:53.083466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:20:53.083477 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:20:53.083488 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:20:53.083497 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:20:53.083508 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:20:53.083530 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:20:53.083538 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:20:53.083550 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:20:53.083558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:20:53.083589 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:20:53.083612 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:20:53.083622 systemd-journald[177]: Journal started
Jan 14 13:20:53.083645 systemd-journald[177]: Runtime Journal (/run/log/journal/1a52b557a08a49d087c328e9f30f3546) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:20:53.085212 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:20:53.094527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:20:53.103529 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:20:53.110640 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:20:53.124532 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:20:53.126256 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:20:53.136581 kernel: Bridge firewalling registered
Jan 14 13:20:53.136662 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:20:53.139778 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:20:53.145267 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:20:53.148300 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:20:53.151475 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:20:53.163386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:20:53.176665 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:20:53.180655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:20:53.181536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:20:53.201830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:20:53.206107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:20:53.219686 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:20:53.224987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:20:53.232693 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:20:53.253103 dracut-cmdline[214]: dracut-dracut-053
Jan 14 13:20:53.256859 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:20:53.297909 systemd-resolved[212]: Positive Trust Anchors:
Jan 14 13:20:53.297925 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:20:53.297984 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:20:53.301603 systemd-resolved[212]: Defaulting to hostname 'linux'.
Jan 14 13:20:53.302825 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:20:53.305726 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:20:53.337967 kernel: SCSI subsystem initialized
Jan 14 13:20:53.348534 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 13:20:53.358542 kernel: iscsi: registered transport (tcp)
Jan 14 13:20:53.379746 kernel: iscsi: registered transport (qla4xxx)
Jan 14 13:20:53.379835 kernel: QLogic iSCSI HBA Driver
Jan 14 13:20:53.415195 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:20:53.423700 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 13:20:53.450798 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 13:20:53.450878 kernel: device-mapper: uevent: version 1.0.3
Jan 14 13:20:53.455537 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 13:20:53.493548 kernel: raid6: avx512x4 gen() 17978 MB/s
Jan 14 13:20:53.515536 kernel: raid6: avx512x2 gen() 17975 MB/s
Jan 14 13:20:53.534528 kernel: raid6: avx512x1 gen() 18055 MB/s
Jan 14 13:20:53.553531 kernel: raid6: avx2x4 gen() 17838 MB/s
Jan 14 13:20:53.572533 kernel: raid6: avx2x2 gen() 18013 MB/s
Jan 14 13:20:53.592924 kernel: raid6: avx2x1 gen() 13832 MB/s
Jan 14 13:20:53.592963 kernel: raid6: using algorithm avx512x1 gen() 18055 MB/s
Jan 14 13:20:53.614906 kernel: raid6: .... xor() 25847 MB/s, rmw enabled
Jan 14 13:20:53.614950 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 13:20:53.636536 kernel: xor: automatically using best checksumming function avx
Jan 14 13:20:53.782539 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 13:20:53.792420 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:20:53.802679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:20:53.815969 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 14 13:20:53.820425 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:20:53.832533 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 13:20:53.848771 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Jan 14 13:20:53.876792 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:20:53.886652 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:20:53.926126 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:20:53.942487 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 13:20:53.973817 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:20:53.978046 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:20:53.986427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:20:53.992328 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:20:54.003721 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 13:20:54.007916 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 13:20:54.033137 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:20:54.042483 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 14 13:20:54.042524 kernel: AES CTR mode by8 optimization enabled
Jan 14 13:20:54.053085 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:20:54.058282 kernel: hv_vmbus: Vmbus version:5.2
Jan 14 13:20:54.057162 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:20:54.064803 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:20:54.067592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:20:54.067859 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:20:54.070682 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:20:54.086167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:20:54.098084 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:20:54.100717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:20:54.112652 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 13:20:54.112711 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 14 13:20:54.118410 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 13:20:54.117851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:20:54.126548 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 13:20:54.133552 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 14 13:20:54.140533 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 13:20:54.148680 kernel: PTP clock support registered
Jan 14 13:20:54.153118 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 13:20:54.158533 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 14 13:20:54.159578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:20:54.170359 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 14 13:20:54.182629 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 13:20:54.172837 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:20:54.189427 kernel: scsi host0: storvsc_host_t
Jan 14 13:20:54.189492 kernel: scsi host1: storvsc_host_t
Jan 14 13:20:54.193533 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 13:20:54.193561 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 14 13:20:54.193590 kernel: hv_vmbus: registering driver hv_utils
Jan 14 13:20:54.200208 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 13:20:54.300647 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 13:20:54.300689 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 14 13:20:54.300810 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 13:20:54.300686 systemd-resolved[212]: Clock change detected. Flushing caches.
Jan 14 13:20:54.324531 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 14 13:20:54.328749 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 13:20:54.328774 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 14 13:20:54.324987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:20:54.343301 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 13:20:54.358134 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 14 13:20:54.358325 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 14 13:20:54.358482 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 13:20:54.358660 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 13:20:54.358816 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:20:54.358836 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 14 13:20:54.876618 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 13:20:54.940649 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (447)
Jan 14 13:20:54.955253 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:20:54.979629 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (450)
Jan 14 13:20:54.992720 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 13:20:55.001209 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 13:20:55.015674 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 13:20:55.031769 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 13:20:55.047624 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:20:55.054631 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:20:56.061638 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:20:56.062221 disk-uuid[592]: The operation has completed successfully.
Jan 14 13:20:56.145876 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 13:20:56.145989 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 13:20:56.161150 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 13:20:56.169318 sh[679]: Success
Jan 14 13:20:56.201239 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 14 13:20:56.394962 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:20:56.405727 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 13:20:56.410635 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 13:20:56.427408 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 14 13:20:56.427471 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:20:56.430870 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 13:20:56.433466 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 13:20:56.435769 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 13:20:56.794632 kernel: hv_netvsc 7c1e522f-37f7-7c1e-522f-37f77c1e522f eth0: VF slot 1 added
Jan 14 13:20:56.796530 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 13:20:56.799872 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 13:20:56.812430 kernel: hv_vmbus: registering driver hv_pci
Jan 14 13:20:56.812624 kernel: hv_pci a03696ac-5c50-43c0-be44-65b2f0002e34: PCI VMBus probing: Using version 0x10004
Jan 14 13:20:56.871944 kernel: hv_pci a03696ac-5c50-43c0-be44-65b2f0002e34: PCI host bridge to bus 5c50:00
Jan 14 13:20:56.872255 kernel: pci_bus 5c50:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 14 13:20:56.872446 kernel: pci_bus 5c50:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 13:20:56.872598 kernel: pci 5c50:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 14 13:20:56.873183 kernel: pci 5c50:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:20:56.873352 kernel: pci 5c50:00:02.0: enabling Extended Tags
Jan 14 13:20:56.873519 kernel: pci 5c50:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5c50:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 14 13:20:56.873722 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:20:56.873743 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:20:56.873762 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:20:56.873781 kernel: pci_bus 5c50:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 13:20:56.873941 kernel: pci 5c50:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:20:56.817245 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 13:20:56.830816 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 13:20:56.906630 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:20:56.924629 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:20:56.924837 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 13:20:56.937628 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 13:20:56.949219 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 13:20:56.972911 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:20:56.988463 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:20:57.011109 systemd-networkd[863]: lo: Link UP
Jan 14 13:20:57.013194 systemd-networkd[863]: lo: Gained carrier
Jan 14 13:20:57.018075 systemd-networkd[863]: Enumeration completed
Jan 14 13:20:57.020347 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:20:57.023254 systemd[1]: Reached target network.target - Network.
Jan 14 13:20:57.030360 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:20:57.030365 systemd-networkd[863]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:20:57.039962 systemd-networkd[863]: eth0: Link UP
Jan 14 13:20:57.042145 systemd-networkd[863]: eth0: Gained carrier
Jan 14 13:20:57.042159 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:20:57.109944 kernel: mlx5_core 5c50:00:02.0: enabling device (0000 -> 0002)
Jan 14 13:20:57.343017 kernel: mlx5_core 5c50:00:02.0: firmware version: 14.30.5000
Jan 14 13:20:57.343233 kernel: hv_netvsc 7c1e522f-37f7-7c1e-522f-37f77c1e522f eth0: VF registering: eth1
Jan 14 13:20:57.343396 kernel: mlx5_core 5c50:00:02.0 eth1: joined to eth0
Jan 14 13:20:57.343585 kernel: mlx5_core 5c50:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 14 13:20:57.129661 systemd-networkd[863]: eth0: DHCPv4 address 10.200.4.31/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 14 13:20:57.353838 kernel: mlx5_core 5c50:00:02.0 enP23632s1: renamed from eth1
Jan 14 13:20:57.359862 systemd-networkd[863]: eth1: Interface name change detected, renamed to enP23632s1.
Jan 14 13:20:57.488380 systemd-networkd[863]: enP23632s1: Link UP
Jan 14 13:20:57.490745 kernel: mlx5_core 5c50:00:02.0 enP23632s1: Link up
Jan 14 13:20:57.517636 kernel: hv_netvsc 7c1e522f-37f7-7c1e-522f-37f77c1e522f eth0: Data path switched to VF: enP23632s1
Jan 14 13:20:57.958253 ignition[842]: Ignition 2.20.0
Jan 14 13:20:57.958268 ignition[842]: Stage: fetch-offline
Jan 14 13:20:57.958313 ignition[842]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:20:57.958323 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:20:57.958436 ignition[842]: parsed url from cmdline: ""
Jan 14 13:20:57.958441 ignition[842]: no config URL provided
Jan 14 13:20:57.958447 ignition[842]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:20:57.958458 ignition[842]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:20:57.958464 ignition[842]: failed to fetch config: resource requires networking
Jan 14 13:20:57.961714 ignition[842]: Ignition finished successfully
Jan 14 13:20:57.978034 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:20:57.987825 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 13:20:58.004311 ignition[880]: Ignition 2.20.0
Jan 14 13:20:58.004325 ignition[880]: Stage: fetch
Jan 14 13:20:58.004553 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:20:58.004563 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:20:58.006115 ignition[880]: parsed url from cmdline: ""
Jan 14 13:20:58.006121 ignition[880]: no config URL provided
Jan 14 13:20:58.006129 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:20:58.006143 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:20:58.006177 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 13:20:58.076968 systemd-networkd[863]: enP23632s1: Gained carrier
Jan 14 13:20:58.094461 ignition[880]: GET result: OK
Jan 14 13:20:58.094528 ignition[880]: config has been read from IMDS userdata
Jan 14 13:20:58.094550 ignition[880]: parsing config with SHA512: ea33efd18392494905465ebada64e858b48a7d8086d4931fddcc4a2d308cb584653584d28681a3ab464f838175c0436d66e7dd23f4eacc963a174df747476faf
Jan 14 13:20:58.099764 unknown[880]: fetched base config from "system"
Jan 14 13:20:58.099780 unknown[880]: fetched base config from "system"
Jan 14 13:20:58.100177 ignition[880]: fetch: fetch complete
Jan 14 13:20:58.099793 unknown[880]: fetched user config from "azure"
Jan 14 13:20:58.100184 ignition[880]: fetch: fetch passed
Jan 14 13:20:58.104372 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 13:20:58.100239 ignition[880]: Ignition finished successfully
Jan 14 13:20:58.123804 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 13:20:58.139026 ignition[887]: Ignition 2.20.0
Jan 14 13:20:58.139037 ignition[887]: Stage: kargs
Jan 14 13:20:58.139267 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:20:58.139281 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:20:58.147192 ignition[887]: kargs: kargs passed
Jan 14 13:20:58.147249 ignition[887]: Ignition finished successfully
Jan 14 13:20:58.150114 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 13:20:58.162795 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 13:20:58.174596 ignition[893]: Ignition 2.20.0
Jan 14 13:20:58.174624 ignition[893]: Stage: disks
Jan 14 13:20:58.174851 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:20:58.174869 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:20:58.175640 ignition[893]: disks: disks passed
Jan 14 13:20:58.175682 ignition[893]: Ignition finished successfully
Jan 14 13:20:58.186821 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 13:20:58.189402 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 13:20:58.194109 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 13:20:58.197080 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:20:58.202098 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:20:58.204587 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:20:58.222841 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 13:20:58.288915 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 14 13:20:58.294886 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 13:20:58.309058 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 13:20:58.397685 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 14 13:20:58.398277 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:20:58.401192 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:20:58.441726 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:20:58.446831 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:20:58.455821 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 13:20:58.457082 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (912)
Jan 14 13:20:58.462588 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:20:58.467623 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:20:58.473362 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:20:58.473386 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:20:58.474373 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:20:58.487256 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:20:58.491794 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:20:58.495229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:20:58.518786 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:20:58.716750 systemd-networkd[863]: eth0: Gained IPv6LL
Jan 14 13:20:59.147382 coreos-metadata[914]: Jan 14 13:20:59.147 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:20:59.151456 coreos-metadata[914]: Jan 14 13:20:59.150 INFO Fetch successful
Jan 14 13:20:59.151456 coreos-metadata[914]: Jan 14 13:20:59.150 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:20:59.158732 coreos-metadata[914]: Jan 14 13:20:59.157 INFO Fetch successful
Jan 14 13:20:59.174317 coreos-metadata[914]: Jan 14 13:20:59.174 INFO wrote hostname ci-4152.2.0-a-950c255954 to /sysroot/etc/hostname
Jan 14 13:20:59.179280 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:20:59.189584 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 13:20:59.213643 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory
Jan 14 13:20:59.221596 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 13:20:59.229273 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 13:20:59.231293 systemd-networkd[863]: enP23632s1: Gained IPv6LL
Jan 14 13:21:00.170936 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:21:00.178715 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:21:00.186756 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:21:00.194182 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:21:00.197677 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:21:00.222299 ignition[1035]: INFO : Ignition 2.20.0
Jan 14 13:21:00.222299 ignition[1035]: INFO : Stage: mount
Jan 14 13:21:00.232099 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:00.232099 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:00.232099 ignition[1035]: INFO : mount: mount passed
Jan 14 13:21:00.232099 ignition[1035]: INFO : Ignition finished successfully
Jan 14 13:21:00.223878 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:21:00.228106 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:21:00.247859 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:21:00.255563 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:21:00.280022 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1048)
Jan 14 13:21:00.280070 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:21:00.283232 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:21:00.285647 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:21:00.290674 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:21:00.292110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:21:00.312011 ignition[1064]: INFO : Ignition 2.20.0
Jan 14 13:21:00.312011 ignition[1064]: INFO : Stage: files
Jan 14 13:21:00.316086 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:00.316086 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:00.316086 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 13:21:00.328180 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 13:21:00.328180 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:21:00.466504 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:21:00.470769 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 13:21:00.470769 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:21:00.467160 unknown[1064]: wrote ssh authorized keys file for user: core
Jan 14 13:21:00.482967 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 13:21:00.487260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:21:00.517844 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:21:00.523260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:21:00.523260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:00.523260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:00.523260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:00.523260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 14 13:21:01.059155 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 14 13:21:01.246847 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:01.252446 ignition[1064]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:21:01.256818 ignition[1064]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:21:01.261029 ignition[1064]: INFO : files: files passed
Jan 14 13:21:01.261029 ignition[1064]: INFO : Ignition finished successfully
Jan 14 13:21:01.263912 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:21:01.274769 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:21:01.282793 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:21:01.286847 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:21:01.286934 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:21:01.308702 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:21:01.308702 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:21:01.320433 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:21:01.314099 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:21:01.321054 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:21:01.335895 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:21:01.358727 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:21:01.358840 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:21:01.367298 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:21:01.372259 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:21:01.377259 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:21:01.384779 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:21:01.399012 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:21:01.413812 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:21:01.425784 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:21:01.431721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:21:01.431922 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:21:01.432299 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:21:01.432413 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:21:01.433586 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:21:01.434419 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:21:01.434824 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:21:01.435227 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:21:01.435647 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:21:01.436056 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:21:01.436470 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:21:01.436858 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:21:01.437289 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:21:01.437698 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:21:01.438182 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:21:01.438312 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:21:01.440678 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:21:01.441116 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:21:01.441532 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 13:21:01.451516 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:21:01.480875 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 13:21:01.486045 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:21:01.529420 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 13:21:01.529655 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:21:01.535577 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:21:01.535758 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:21:01.540860 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 14 13:21:01.541023 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:21:01.558858 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:21:01.566910 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:21:01.569733 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:21:01.581730 ignition[1117]: INFO : Ignition 2.20.0
Jan 14 13:21:01.581730 ignition[1117]: INFO : Stage: umount
Jan 14 13:21:01.581730 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:01.581730 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:01.581730 ignition[1117]: INFO : umount: umount passed
Jan 14 13:21:01.581730 ignition[1117]: INFO : Ignition finished successfully
Jan 14 13:21:01.569921 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:21:01.573138 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:21:01.573307 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:21:01.589442 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:21:01.589532 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:21:01.593953 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:21:01.594038 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:21:01.599992 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:21:01.600047 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:21:01.605032 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 13:21:01.605086 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 13:21:01.607495 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 13:21:01.607544 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 13:21:01.612409 systemd[1]: Stopped target network.target - Network.
Jan 14 13:21:01.639576 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:21:01.639674 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:21:01.642590 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:21:01.644939 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:21:01.652304 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:21:01.657625 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:21:01.665416 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:21:01.667736 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:21:01.667785 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:21:01.671944 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:21:01.671991 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:21:01.674369 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 13:21:01.674432 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 13:21:01.679935 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 13:21:01.682111 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 13:21:01.697819 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 13:21:01.702652 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 13:21:01.708681 systemd-networkd[863]: eth0: DHCPv6 lease lost
Jan 14 13:21:01.708778 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 13:21:01.711862 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 13:21:01.711970 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 13:21:01.719330 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 13:21:01.719407 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:21:01.732748 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 13:21:01.735036 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 13:21:01.735090 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:21:01.738350 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:21:01.741379 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 13:21:01.741475 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 13:21:01.749300 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:21:01.749404 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:21:01.768715 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 13:21:01.768778 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:21:01.772023 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:21:01.772063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:21:01.782766 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 13:21:01.782904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:21:01.790181 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 13:21:01.790231 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:21:01.793556 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 13:21:01.793602 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:21:01.808802 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 13:21:01.808873 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:21:01.815809 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 13:21:01.815874 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:21:01.822930 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:21:01.822999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:21:01.837719 kernel: hv_netvsc 7c1e522f-37f7-7c1e-522f-37f77c1e522f eth0: Data path switched from VF: enP23632s1
Jan 14 13:21:01.837770 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 13:21:01.840336 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 13:21:01.840401 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:21:01.843355 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 14 13:21:01.843412 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:21:01.851826 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:21:01.851874 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:21:01.854947 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:21:01.857958 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:21:01.877034 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 13:21:01.877154 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 13:21:01.885429 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 13:21:01.885554 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 13:21:02.467724 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 13:21:02.467886 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 13:21:02.471177 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 13:21:02.486240 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 13:21:02.486317 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 13:21:02.499768 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 13:21:02.507803 systemd[1]: Switching root.
Jan 14 13:21:02.596587 systemd-journald[177]: Journal stopped
Jan 14 13:21:08.125262 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 14 13:21:08.125291 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 13:21:08.125302 kernel: SELinux: policy capability open_perms=1
Jan 14 13:21:08.125311 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 13:21:08.125318 kernel: SELinux: policy capability always_check_network=0
Jan 14 13:21:08.125326 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 13:21:08.125334 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 13:21:08.125346 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 13:21:08.125355 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 13:21:08.125366 kernel: audit: type=1403 audit(1736860863.931:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 14 13:21:08.125376 systemd[1]: Successfully loaded SELinux policy in 164.781ms.
Jan 14 13:21:08.125388 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.071ms.
Jan 14 13:21:08.125401 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:21:08.125414 systemd[1]: Detected virtualization microsoft.
Jan 14 13:21:08.125426 systemd[1]: Detected architecture x86-64.
Jan 14 13:21:08.125435 systemd[1]: Detected first boot.
Jan 14 13:21:08.125445 systemd[1]: Hostname set to .
Jan 14 13:21:08.125457 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:21:08.125467 zram_generator::config[1162]: No configuration found.
Jan 14 13:21:08.125482 systemd[1]: Populated /etc with preset unit settings.
Jan 14 13:21:08.125491 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 13:21:08.125504 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 13:21:08.125514 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 13:21:08.125527 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 13:21:08.125537 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 13:21:08.125549 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 13:21:08.125563 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 13:21:08.125574 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 13:21:08.125587 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 13:21:08.125598 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 13:21:08.125617 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 13:21:08.125630 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:21:08.125642 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:21:08.125652 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 13:21:08.125666 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 13:21:08.125676 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 13:21:08.125689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:21:08.125699 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 13:21:08.125711 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:21:08.125721 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 13:21:08.125737 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 13:21:08.125748 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:21:08.125762 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 13:21:08.125774 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:21:08.125785 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:21:08.125796 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:21:08.125807 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:21:08.125818 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 13:21:08.125830 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 13:21:08.125843 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:21:08.125856 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:21:08.125867 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:21:08.125880 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 13:21:08.125890 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 13:21:08.125905 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 13:21:08.125916 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 13:21:08.125929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:08.125940 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 13:21:08.125952 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 13:21:08.125964 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 13:21:08.125976 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 13:21:08.125989 systemd[1]: Reached target machines.target - Containers.
Jan 14 13:21:08.126001 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 13:21:08.126015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:21:08.126028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:21:08.126038 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 13:21:08.126051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:21:08.126062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:21:08.126076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:21:08.126086 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 13:21:08.126100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:21:08.126114 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 13:21:08.126125 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 13:21:08.126139 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 13:21:08.126149 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 13:21:08.126162 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 13:21:08.126172 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:21:08.126185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:21:08.126196 kernel: fuse: init (API version 7.39)
Jan 14 13:21:08.126210 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 13:21:08.126222 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 13:21:08.126233 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:21:08.126246 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 14 13:21:08.126256 systemd[1]: Stopped verity-setup.service.
Jan 14 13:21:08.126283 systemd-journald[1247]: Collecting audit messages is disabled.
Jan 14 13:21:08.126311 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:08.126322 systemd-journald[1247]: Journal started
Jan 14 13:21:08.126346 systemd-journald[1247]: Runtime Journal (/run/log/journal/e7f32f2f31b048748b2f55841e8c40cf) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:21:07.354128 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 13:21:07.558976 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 14 13:21:07.559443 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 13:21:08.136630 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:21:08.136673 kernel: loop: module loaded
Jan 14 13:21:08.142295 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 13:21:08.144948 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 13:21:08.147780 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 13:21:08.150279 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 13:21:08.153125 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 13:21:08.156037 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 13:21:08.158889 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 13:21:08.162403 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:21:08.165933 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 13:21:08.166088 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 13:21:08.169854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:21:08.170029 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:21:08.173480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:21:08.173724 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:21:08.177489 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 13:21:08.177693 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 13:21:08.182340 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:21:08.182527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:21:08.185808 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 13:21:08.191357 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 13:21:08.202284 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:21:08.222245 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 13:21:08.235717 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 13:21:08.243680 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 13:21:08.248174 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 13:21:08.248280 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:21:08.253003 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 14 13:21:08.263320 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 13:21:08.269782 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 13:21:08.274219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:21:08.286320 kernel: ACPI: bus type drm_connector registered
Jan 14 13:21:08.284134 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 13:21:08.288499 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 13:21:08.291718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:21:08.295745 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 13:21:08.298624 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:21:08.305834 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:21:08.309755 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 13:21:08.324681 systemd-journald[1247]: Time spent on flushing to /var/log/journal/e7f32f2f31b048748b2f55841e8c40cf is 47.755ms for 939 entries.
Jan 14 13:21:08.324681 systemd-journald[1247]: System Journal (/var/log/journal/e7f32f2f31b048748b2f55841e8c40cf) is 8.0M, max 2.6G, 2.6G free.
Jan 14 13:21:08.393181 systemd-journald[1247]: Received client request to flush runtime journal.
Jan 14 13:21:08.321749 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:21:08.332101 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:21:08.332291 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:21:08.337551 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:21:08.343494 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 13:21:08.346674 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 13:21:08.349841 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 13:21:08.364102 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 14 13:21:08.367637 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 13:21:08.371804 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 13:21:08.383653 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 14 13:21:08.396172 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 13:21:08.404073 udevadm[1306]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 14 13:21:08.423628 kernel: loop0: detected capacity change from 0 to 138184
Jan 14 13:21:08.440267 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:21:08.473763 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Jan 14 13:21:08.473791 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Jan 14 13:21:08.480493 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:21:08.490794 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 13:21:08.505346 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 13:21:08.506959 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 14 13:21:08.673284 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 13:21:08.680859 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:21:08.699412 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Jan 14 13:21:08.699437 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Jan 14 13:21:08.704908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:21:08.906640 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 13:21:08.929637 kernel: loop1: detected capacity change from 0 to 211296
Jan 14 13:21:09.005651 kernel: loop2: detected capacity change from 0 to 140992
Jan 14 13:21:09.693636 kernel: loop3: detected capacity change from 0 to 28272
Jan 14 13:21:09.975755 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 13:21:09.985797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:21:10.010717 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Jan 14 13:21:10.159639 kernel: loop4: detected capacity change from 0 to 138184
Jan 14 13:21:10.171640 kernel: loop5: detected capacity change from 0 to 211296
Jan 14 13:21:10.180636 kernel: loop6: detected capacity change from 0 to 140992
Jan 14 13:21:10.195635 kernel: loop7: detected capacity change from 0 to 28272
Jan 14 13:21:10.200792 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 14 13:21:10.201342 (sd-merge)[1328]: Merged extensions into '/usr'.
Jan 14 13:21:10.205161 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 13:21:10.205179 systemd[1]: Reloading...
Jan 14 13:21:10.290669 zram_generator::config[1353]: No configuration found.
Jan 14 13:21:10.512628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:21:10.600678 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 13:21:10.603943 kernel: hv_vmbus: registering driver hv_balloon
Jan 14 13:21:10.609651 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 14 13:21:10.655652 kernel: hv_vmbus: registering driver hyperv_fb
Jan 14 13:21:10.676683 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 14 13:21:10.682629 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 14 13:21:10.683132 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 14 13:21:10.684237 systemd[1]: Reloading finished in 478 ms.
Jan 14 13:21:10.690301 kernel: Console: switching to colour dummy device 80x25
Jan 14 13:21:10.694415 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:21:10.715211 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:21:10.719863 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 13:21:10.758819 systemd[1]: Starting ensure-sysext.service...
Jan 14 13:21:10.781832 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:21:10.801817 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:21:10.822266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:21:10.848842 systemd[1]: Reloading requested from client PID 1455 ('systemctl') (unit ensure-sysext.service)...
Jan 14 13:21:10.848859 systemd[1]: Reloading...
Jan 14 13:21:10.914282 systemd-tmpfiles[1457]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 13:21:10.915233 systemd-tmpfiles[1457]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 14 13:21:10.916686 systemd-tmpfiles[1457]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 14 13:21:10.917254 systemd-tmpfiles[1457]: ACLs are not supported, ignoring.
Jan 14 13:21:10.917449 systemd-tmpfiles[1457]: ACLs are not supported, ignoring.
Jan 14 13:21:10.991646 zram_generator::config[1495]: No configuration found.
Jan 14 13:21:11.016817 systemd-tmpfiles[1457]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:21:11.016831 systemd-tmpfiles[1457]: Skipping /boot
Jan 14 13:21:11.049772 systemd-tmpfiles[1457]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:21:11.051245 systemd-tmpfiles[1457]: Skipping /boot
Jan 14 13:21:11.126642 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1411)
Jan 14 13:21:11.216272 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 14 13:21:11.325373 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:21:11.403045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:21:11.406983 systemd[1]: Reloading finished in 557 ms.
Jan 14 13:21:11.429184 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 14 13:21:11.437224 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:21:11.465345 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:11.471091 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:21:11.494215 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 13:21:11.494701 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:21:11.498711 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 14 13:21:11.503390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:21:11.506969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:21:11.512117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:21:11.512298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:21:11.527408 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 13:21:11.533704 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 13:21:11.539048 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:21:11.544829 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 13:21:11.555924 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 13:21:11.558722 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:11.561800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:21:11.561984 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:21:11.569095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:21:11.569673 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:21:11.576358 lvm[1605]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:21:11.578131 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:21:11.578307 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:21:11.590766 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:11.591078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:21:11.600728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:21:11.611128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:21:11.617603 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:21:11.620182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:21:11.620364 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:11.623693 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 14 13:21:11.627569 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:21:11.627765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:21:11.631138 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:21:11.631678 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:21:11.652627 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 13:21:11.659169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:21:11.659361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:21:11.672436 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:21:11.680219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:11.680706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:21:11.686060 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 14 13:21:11.692308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:21:11.698589 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:21:11.709382 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:21:11.709787 lvm[1640]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:21:11.728882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:21:11.729165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:21:11.729443 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 13:21:11.729964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:11.732132 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 13:21:11.735188 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 13:21:11.736123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:21:11.736249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:21:11.744323 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 13:21:11.754778 systemd[1]: Finished ensure-sysext.service.
Jan 14 13:21:11.756854 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 14 13:21:11.760176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:21:11.760357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:21:11.760558 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:21:11.762849 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:21:11.762973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:21:11.763243 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:21:11.772966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:21:11.776411 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:21:11.777477 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:21:11.843845 augenrules[1667]: No rules
Jan 14 13:21:11.845021 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:21:11.845249 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:21:11.874503 systemd-resolved[1612]: Positive Trust Anchors:
Jan 14 13:21:11.875287 systemd-resolved[1612]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:21:11.875340 systemd-resolved[1612]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:21:11.900407 systemd-resolved[1612]: Using system hostname 'ci-4152.2.0-a-950c255954'.
Jan 14 13:21:11.903080 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:21:11.908636 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:21:11.924719 systemd-networkd[1456]: lo: Link UP
Jan 14 13:21:11.924728 systemd-networkd[1456]: lo: Gained carrier
Jan 14 13:21:11.927208 systemd-networkd[1456]: Enumeration completed
Jan 14 13:21:11.927338 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:21:11.927643 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:21:11.927647 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:21:11.930464 systemd[1]: Reached target network.target - Network.
Jan 14 13:21:11.938805 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 13:21:11.985657 kernel: mlx5_core 5c50:00:02.0 enP23632s1: Link up
Jan 14 13:21:12.006649 kernel: hv_netvsc 7c1e522f-37f7-7c1e-522f-37f77c1e522f eth0: Data path switched to VF: enP23632s1
Jan 14 13:21:12.007920 systemd-networkd[1456]: enP23632s1: Link UP
Jan 14 13:21:12.008063 systemd-networkd[1456]: eth0: Link UP
Jan 14 13:21:12.008069 systemd-networkd[1456]: eth0: Gained carrier
Jan 14 13:21:12.008095 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:21:12.011961 systemd-networkd[1456]: enP23632s1: Gained carrier
Jan 14 13:21:12.031677 systemd-networkd[1456]: eth0: DHCPv4 address 10.200.4.31/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 14 13:21:12.387390 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 13:21:12.392424 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 13:21:13.500876 systemd-networkd[1456]: eth0: Gained IPv6LL
Jan 14 13:21:13.504399 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 13:21:13.508154 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 13:21:14.012887 systemd-networkd[1456]: enP23632s1: Gained IPv6LL
Jan 14 13:21:14.884673 ldconfig[1292]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 13:21:14.900447 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 13:21:14.908900 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 13:21:14.936858 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 13:21:14.940056 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:21:14.942934 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 13:21:14.946235 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 13:21:14.949517 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 13:21:14.952270 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 13:21:14.955247 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 13:21:14.958234 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 13:21:14.958277 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:21:14.960502 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:21:14.963392 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 13:21:14.967495 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 13:21:14.977589 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 13:21:14.980909 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 13:21:14.983581 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:21:14.986054 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:21:14.988641 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:21:14.988703 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:21:15.015743 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 14 13:21:15.020769 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 13:21:15.031798 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 14 13:21:15.037791 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 13:21:15.052708 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 13:21:15.061793 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 13:21:15.067935 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 13:21:15.068009 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 14 13:21:15.069489 jq[1689]: false
Jan 14 13:21:15.072790 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 14 13:21:15.075455 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 14 13:21:15.076875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:21:15.083801 KVP[1691]: KVP starting; pid is:1691
Jan 14 13:21:15.083803 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 13:21:15.087750 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 13:21:15.090806 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 14 13:21:15.099798 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 13:21:15.105786 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 13:21:15.109203 KVP[1691]: KVP LIC Version: 3.1
Jan 14 13:21:15.110654 kernel: hv_utils: KVP IC version 4.0
Jan 14 13:21:15.117846 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 13:21:15.121141 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 13:21:15.121825 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 13:21:15.126890 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 13:21:15.134522 chronyd[1701]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 14 13:21:15.140789 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 13:21:15.147738 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 13:21:15.147986 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 13:21:15.160439 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 13:21:15.160902 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found loop4
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found loop5
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found loop6
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found loop7
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found sda
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found sda1
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found sda2
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found sda3
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found usr
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found sda4
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found sda6
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found sda7
Jan 14 13:21:15.164157 extend-filesystems[1690]: Found sda9
Jan 14 13:21:15.164157 extend-filesystems[1690]: Checking size of /dev/sda9
Jan 14 13:21:15.207437 jq[1703]: true
Jan 14 13:21:15.213691 chronyd[1701]: Timezone right/UTC failed leap second check, ignoring
Jan 14 13:21:15.213940 chronyd[1701]: Loaded seccomp filter (level 2)
Jan 14 13:21:15.216761 systemd[1]: Started chronyd.service - NTP client/server.
Jan 14 13:21:15.238968 (ntainerd)[1714]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 14 13:21:15.246929 jq[1712]: true
Jan 14 13:21:15.254140 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 13:21:15.254373 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 13:21:15.267638 extend-filesystems[1690]: Old size kept for /dev/sda9
Jan 14 13:21:15.267638 extend-filesystems[1690]: Found sr0
Jan 14 13:21:15.288423 update_engine[1700]: I20250114 13:21:15.285014 1700 main.cc:92] Flatcar Update Engine starting
Jan 14 13:21:15.270449 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 13:21:15.270778 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 13:21:15.280976 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 13:21:15.294127 dbus-daemon[1685]: [system] SELinux support is enabled
Jan 14 13:21:15.294300 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 13:21:15.303335 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 13:21:15.303378 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 13:21:15.307844 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 13:21:15.307874 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 13:21:15.323712 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 13:21:15.328974 update_engine[1700]: I20250114 13:21:15.326467 1700 update_check_scheduler.cc:74] Next update check in 7m44s
Jan 14 13:21:15.329513 systemd-logind[1698]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 13:21:15.329791 systemd-logind[1698]: New seat seat0.
Jan 14 13:21:15.334772 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 13:21:15.337847 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 13:21:15.373587 bash[1750]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 13:21:15.395457 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 13:21:15.407077 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 13:21:15.439017 coreos-metadata[1684]: Jan 14 13:21:15.438 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:21:15.442842 coreos-metadata[1684]: Jan 14 13:21:15.442 INFO Fetch successful
Jan 14 13:21:15.443293 coreos-metadata[1684]: Jan 14 13:21:15.443 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 14 13:21:15.448725 coreos-metadata[1684]: Jan 14 13:21:15.448 INFO Fetch successful
Jan 14 13:21:15.448725 coreos-metadata[1684]: Jan 14 13:21:15.448 INFO Fetching http://168.63.129.16/machine/6e353c21-e82c-4d83-80fb-25f346cb3933/dbac68c0%2D1fb5%2D404e%2Dad46%2D71ce1c04ed70.%5Fci%2D4152.2.0%2Da%2D950c255954?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 14 13:21:15.450653 coreos-metadata[1684]: Jan 14 13:21:15.450 INFO Fetch successful
Jan 14 13:21:15.451017 coreos-metadata[1684]: Jan 14 13:21:15.450 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:21:15.457630 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1752)
Jan 14 13:21:15.459952 coreos-metadata[1684]: Jan 14 13:21:15.459 INFO Fetch successful
Jan 14 13:21:15.525050 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 14 13:21:15.529754 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 13:21:15.787555 locksmithd[1754]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 13:21:15.805403 sshd_keygen[1731]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 13:21:15.829673 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 13:21:15.839959 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 13:21:15.850043 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 14 13:21:15.866157 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 13:21:15.866387 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 13:21:15.878019 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 13:21:15.881754 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 14 13:21:15.904798 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 13:21:15.915895 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 13:21:15.919839 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 13:21:15.924195 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 13:21:16.585788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:21:16.597044 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:21:16.878701 containerd[1714]: time="2025-01-14T13:21:16.878395300Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 14 13:21:16.909155 containerd[1714]: time="2025-01-14T13:21:16.909110000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:16.911306 containerd[1714]: time="2025-01-14T13:21:16.911259900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:21:16.911306 containerd[1714]: time="2025-01-14T13:21:16.911292000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 14 13:21:16.911445 containerd[1714]: time="2025-01-14T13:21:16.911313100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 14 13:21:16.911505 containerd[1714]: time="2025-01-14T13:21:16.911481700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 14 13:21:16.911547 containerd[1714]: time="2025-01-14T13:21:16.911508300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:16.911624 containerd[1714]: time="2025-01-14T13:21:16.911584600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:21:16.911686 containerd[1714]: time="2025-01-14T13:21:16.911622800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:16.912443 containerd[1714]: time="2025-01-14T13:21:16.911841900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:21:16.912443 containerd[1714]: time="2025-01-14T13:21:16.911868400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:16.912443 containerd[1714]: time="2025-01-14T13:21:16.911889200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:21:16.912443 containerd[1714]: time="2025-01-14T13:21:16.911902800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:16.912443 containerd[1714]: time="2025-01-14T13:21:16.912004600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:16.912443 containerd[1714]: time="2025-01-14T13:21:16.912222400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:16.912443 containerd[1714]: time="2025-01-14T13:21:16.912359900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:21:16.912443 containerd[1714]: time="2025-01-14T13:21:16.912377500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 14 13:21:16.912807 containerd[1714]: time="2025-01-14T13:21:16.912462600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 14 13:21:16.912807 containerd[1714]: time="2025-01-14T13:21:16.912517300Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 13:21:16.925109 containerd[1714]: time="2025-01-14T13:21:16.925076700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 14 13:21:16.925211 containerd[1714]: time="2025-01-14T13:21:16.925131700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 14 13:21:16.925211 containerd[1714]: time="2025-01-14T13:21:16.925151300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 14 13:21:16.925211 containerd[1714]: time="2025-01-14T13:21:16.925171000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 14 13:21:16.925211 containerd[1714]: time="2025-01-14T13:21:16.925188700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 14 13:21:16.925346 containerd[1714]: time="2025-01-14T13:21:16.925333900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 14 13:21:16.925572 containerd[1714]: time="2025-01-14T13:21:16.925543200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 14 13:21:16.925709 containerd[1714]: time="2025-01-14T13:21:16.925691300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 14 13:21:16.925768 containerd[1714]: time="2025-01-14T13:21:16.925716500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 14 13:21:16.925768 containerd[1714]: time="2025-01-14T13:21:16.925736300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 14 13:21:16.925768 containerd[1714]: time="2025-01-14T13:21:16.925755300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 14 13:21:16.925872 containerd[1714]: time="2025-01-14T13:21:16.925774300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 14 13:21:16.925872 containerd[1714]: time="2025-01-14T13:21:16.925791700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 14 13:21:16.925872 containerd[1714]: time="2025-01-14T13:21:16.925810800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 14 13:21:16.925872 containerd[1714]: time="2025-01-14T13:21:16.925830000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 14 13:21:16.925872 containerd[1714]: time="2025-01-14T13:21:16.925848000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 14 13:21:16.925872 containerd[1714]: time="2025-01-14T13:21:16.925866600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 14 13:21:16.926061 containerd[1714]: time="2025-01-14T13:21:16.925883000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 14 13:21:16.926061 containerd[1714]: time="2025-01-14T13:21:16.925909100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926061 containerd[1714]: time="2025-01-14T13:21:16.925928000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926061 containerd[1714]: time="2025-01-14T13:21:16.925944800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926061 containerd[1714]: time="2025-01-14T13:21:16.925962200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926061 containerd[1714]: time="2025-01-14T13:21:16.925979200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926061 containerd[1714]: time="2025-01-14T13:21:16.925997300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926061 containerd[1714]: time="2025-01-14T13:21:16.926013200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926061 containerd[1714]: time="2025-01-14T13:21:16.926040700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926061 containerd[1714]: time="2025-01-14T13:21:16.926060500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926080600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926097100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926113500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926130100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926151800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926178400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926196700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926211400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926262800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926285100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926299800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926317900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 14 13:21:16.926393 containerd[1714]: time="2025-01-14T13:21:16.926331500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.926912 containerd[1714]: time="2025-01-14T13:21:16.926348800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 14 13:21:16.926912 containerd[1714]: time="2025-01-14T13:21:16.926361800Z" level=info msg="NRI interface is disabled by configuration."
Jan 14 13:21:16.926912 containerd[1714]: time="2025-01-14T13:21:16.926376900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 14 13:21:16.930385 containerd[1714]: time="2025-01-14T13:21:16.930319700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 14 13:21:16.930385 containerd[1714]: time="2025-01-14T13:21:16.930375800Z" level=info msg="Connect containerd service"
Jan 14 13:21:16.931688 containerd[1714]: time="2025-01-14T13:21:16.930416100Z" level=info msg="using legacy CRI server"
Jan 14 13:21:16.931688 containerd[1714]: time="2025-01-14T13:21:16.930426300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 13:21:16.931688 containerd[1714]: time="2025-01-14T13:21:16.930628000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 14 13:21:16.932789 containerd[1714]: time="2025-01-14T13:21:16.931973000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 13:21:16.933333 containerd[1714]: time="2025-01-14T13:21:16.933293300Z" level=info msg="Start subscribing containerd event"
Jan 14 13:21:16.933333 containerd[1714]: time="2025-01-14T13:21:16.933347900Z" level=info msg="Start recovering state"
Jan 14 13:21:16.933467 containerd[1714]: time="2025-01-14T13:21:16.933419000Z" level=info msg="Start event monitor"
Jan 14 13:21:16.933467 containerd[1714]: time="2025-01-14T13:21:16.933433300Z" level=info msg="Start snapshots syncer"
Jan 14 13:21:16.933467 containerd[1714]: time="2025-01-14T13:21:16.933445000Z" level=info msg="Start cni network conf syncer for default"
Jan 14 13:21:16.933467 containerd[1714]: time="2025-01-14T13:21:16.933454600Z" level=info msg="Start streaming server"
Jan 14 13:21:16.934354 containerd[1714]: time="2025-01-14T13:21:16.934220000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 13:21:16.934354 containerd[1714]: time="2025-01-14T13:21:16.934293900Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 13:21:16.934660 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 13:21:16.946892 containerd[1714]: time="2025-01-14T13:21:16.940073400Z" level=info msg="containerd successfully booted in 0.062784s"
Jan 14 13:21:16.940477 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 13:21:16.943332 systemd[1]: Startup finished in 704ms (firmware) + 32.468s (loader) + 1.018s (kernel) + 10.977s (initrd) + 13.174s (userspace) = 58.343s.
Jan 14 13:21:17.391190 login[1846]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 13:21:17.396434 login[1847]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 13:21:17.412194 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 13:21:17.412977 systemd-logind[1698]: New session 1 of user core.
Jan 14 13:21:17.422921 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 13:21:17.426763 systemd-logind[1698]: New session 2 of user core.
Jan 14 13:21:17.442120 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 13:21:17.450967 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 13:21:17.459298 (systemd)[1873]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 14 13:21:17.478225 kubelet[1857]: E0114 13:21:17.478150 1857 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:21:17.480856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:21:17.481025 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:21:17.655898 systemd[1873]: Queued start job for default target default.target.
Jan 14 13:21:17.665876 systemd[1873]: Created slice app.slice - User Application Slice.
Jan 14 13:21:17.665926 systemd[1873]: Reached target paths.target - Paths.
Jan 14 13:21:17.665946 systemd[1873]: Reached target timers.target - Timers.
Jan 14 13:21:17.668741 systemd[1873]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 13:21:17.685276 systemd[1873]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 13:21:17.685410 systemd[1873]: Reached target sockets.target - Sockets.
Jan 14 13:21:17.685429 systemd[1873]: Reached target basic.target - Basic System.
Jan 14 13:21:17.685472 systemd[1873]: Reached target default.target - Main User Target.
Jan 14 13:21:17.685507 systemd[1873]: Startup finished in 218ms.
Jan 14 13:21:17.686101 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 13:21:17.697816 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 13:21:17.700209 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 14 13:21:18.163826 waagent[1843]: 2025-01-14T13:21:18.163731Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 14 13:21:18.166778 waagent[1843]: 2025-01-14T13:21:18.166695Z INFO Daemon Daemon OS: flatcar 4152.2.0
Jan 14 13:21:18.168949 waagent[1843]: 2025-01-14T13:21:18.168897Z INFO Daemon Daemon Python: 3.11.10
Jan 14 13:21:18.171348 waagent[1843]: 2025-01-14T13:21:18.171293Z INFO Daemon Daemon Run daemon
Jan 14 13:21:18.173528 waagent[1843]: 2025-01-14T13:21:18.173395Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.0'
Jan 14 13:21:18.177502 waagent[1843]: 2025-01-14T13:21:18.177448Z INFO Daemon Daemon Using waagent for provisioning
Jan 14 13:21:18.180294 waagent[1843]: 2025-01-14T13:21:18.180244Z INFO Daemon Daemon Activate resource disk
Jan 14 13:21:18.182519 waagent[1843]: 2025-01-14T13:21:18.182472Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 14 13:21:18.190503 waagent[1843]: 2025-01-14T13:21:18.190448Z INFO Daemon Daemon Found device: None
Jan 14 13:21:18.192754 waagent[1843]: 2025-01-14T13:21:18.192702Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 14 13:21:18.196796 waagent[1843]: 2025-01-14T13:21:18.196745Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 14 13:21:18.201164 waagent[1843]: 2025-01-14T13:21:18.201107Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 13:21:18.201323 waagent[1843]: 2025-01-14T13:21:18.201278Z INFO Daemon Daemon Running default provisioning handler
Jan 14 13:21:18.213005 waagent[1843]: 2025-01-14T13:21:18.212832Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 14 13:21:18.219231 waagent[1843]: 2025-01-14T13:21:18.219183Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 14 13:21:18.228865 waagent[1843]: 2025-01-14T13:21:18.220276Z INFO Daemon Daemon cloud-init is enabled: False
Jan 14 13:21:18.228865 waagent[1843]: 2025-01-14T13:21:18.221162Z INFO Daemon Daemon Copying ovf-env.xml
Jan 14 13:21:18.332531 waagent[1843]: 2025-01-14T13:21:18.329783Z INFO Daemon Daemon Successfully mounted dvd
Jan 14 13:21:18.344126 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 14 13:21:18.346899 waagent[1843]: 2025-01-14T13:21:18.346829Z INFO Daemon Daemon Detect protocol endpoint
Jan 14 13:21:18.356631 waagent[1843]: 2025-01-14T13:21:18.347101Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 13:21:18.356631 waagent[1843]: 2025-01-14T13:21:18.348037Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 14 13:21:18.356631 waagent[1843]: 2025-01-14T13:21:18.348793Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 14 13:21:18.356631 waagent[1843]: 2025-01-14T13:21:18.349766Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 14 13:21:18.356631 waagent[1843]: 2025-01-14T13:21:18.350514Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 14 13:21:18.394236 waagent[1843]: 2025-01-14T13:21:18.394170Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 14 13:21:18.401988 waagent[1843]: 2025-01-14T13:21:18.394756Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 14 13:21:18.401988 waagent[1843]: 2025-01-14T13:21:18.395347Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 14 13:21:18.479376 waagent[1843]: 2025-01-14T13:21:18.479219Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 14 13:21:18.482414 waagent[1843]: 2025-01-14T13:21:18.482343Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 14 13:21:18.488296 waagent[1843]: 2025-01-14T13:21:18.488237Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 13:21:18.503867 waagent[1843]: 2025-01-14T13:21:18.503801Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.162
Jan 14 13:21:18.514631 waagent[1843]: 2025-01-14T13:21:18.504504Z INFO Daemon
Jan 14 13:21:18.514631 waagent[1843]: 2025-01-14T13:21:18.505368Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2178f11a-217f-429c-8f55-923ada564d70 eTag: 6751738974203826849 source: Fabric]
Jan 14 13:21:18.514631 waagent[1843]: 2025-01-14T13:21:18.506858Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 14 13:21:18.514631 waagent[1843]: 2025-01-14T13:21:18.507875Z INFO Daemon
Jan 14 13:21:18.514631 waagent[1843]: 2025-01-14T13:21:18.508645Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 14 13:21:18.521313 waagent[1843]: 2025-01-14T13:21:18.521263Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 14 13:21:18.582437 waagent[1843]: 2025-01-14T13:21:18.582346Z INFO Daemon Downloaded certificate {'thumbprint': '4EFB95548E9D6F61CB26A81703C8BD64E7C0D9C3', 'hasPrivateKey': True}
Jan 14 13:21:18.587136 waagent[1843]: 2025-01-14T13:21:18.587067Z INFO Daemon Fetch goal state completed
Jan 14 13:21:18.598813 waagent[1843]: 2025-01-14T13:21:18.598763Z INFO Daemon Daemon Starting provisioning
Jan 14 13:21:18.605339 waagent[1843]: 2025-01-14T13:21:18.598997Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 14 13:21:18.605339 waagent[1843]: 2025-01-14T13:21:18.599925Z INFO Daemon Daemon Set hostname [ci-4152.2.0-a-950c255954]
Jan 14 13:21:18.619050 waagent[1843]: 2025-01-14T13:21:18.618990Z INFO Daemon Daemon Publish hostname [ci-4152.2.0-a-950c255954]
Jan 14 13:21:18.626646 waagent[1843]: 2025-01-14T13:21:18.619369Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 14 13:21:18.626646 waagent[1843]: 2025-01-14T13:21:18.620275Z INFO Daemon Daemon Primary interface is [eth0]
Jan 14 13:21:18.646602 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:21:18.646629 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:21:18.646676 systemd-networkd[1456]: eth0: DHCP lease lost
Jan 14 13:21:18.648009 waagent[1843]: 2025-01-14T13:21:18.647941Z INFO Daemon Daemon Create user account if not exists
Jan 14 13:21:18.651068 waagent[1843]: 2025-01-14T13:21:18.651007Z INFO Daemon Daemon User core already exists, skip useradd
Jan 14 13:21:18.654253 waagent[1843]: 2025-01-14T13:21:18.654188Z INFO Daemon Daemon Configure sudoer
Jan 14 13:21:18.659554 waagent[1843]: 2025-01-14T13:21:18.654694Z INFO Daemon Daemon Configure sshd
Jan 14 13:21:18.659554 waagent[1843]: 2025-01-14T13:21:18.655073Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 14 13:21:18.659554 waagent[1843]: 2025-01-14T13:21:18.655417Z INFO Daemon Daemon Deploy ssh public key.
Jan 14 13:21:18.665755 systemd-networkd[1456]: eth0: DHCPv6 lease lost
Jan 14 13:21:18.705699 systemd-networkd[1456]: eth0: DHCPv4 address 10.200.4.31/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 14 13:21:19.752459 waagent[1843]: 2025-01-14T13:21:19.752380Z INFO Daemon Daemon Provisioning complete
Jan 14 13:21:19.761730 waagent[1843]: 2025-01-14T13:21:19.761676Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 14 13:21:19.764585 waagent[1843]: 2025-01-14T13:21:19.761950Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 14 13:21:19.764585 waagent[1843]: 2025-01-14T13:21:19.762929Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 14 13:21:19.887987 waagent[1923]: 2025-01-14T13:21:19.887877Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 14 13:21:19.888414 waagent[1923]: 2025-01-14T13:21:19.888051Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.0
Jan 14 13:21:19.888414 waagent[1923]: 2025-01-14T13:21:19.888133Z INFO ExtHandler ExtHandler Python: 3.11.10
Jan 14 13:21:19.926985 waagent[1923]: 2025-01-14T13:21:19.926888Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 14 13:21:19.927207 waagent[1923]: 2025-01-14T13:21:19.927157Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 13:21:19.927302 waagent[1923]: 2025-01-14T13:21:19.927258Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 13:21:19.935195 waagent[1923]: 2025-01-14T13:21:19.935127Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 13:21:19.941050 waagent[1923]: 2025-01-14T13:21:19.940993Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.162
Jan 14 13:21:19.941715 waagent[1923]: 2025-01-14T13:21:19.941653Z INFO ExtHandler
Jan 14 13:21:19.941826 waagent[1923]: 2025-01-14T13:21:19.941771Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 71acd359-15ce-428f-b66e-cbd5c802da0c eTag: 6751738974203826849 source: Fabric]
Jan 14 13:21:19.942157 waagent[1923]: 2025-01-14T13:21:19.942106Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 14 13:21:19.942725 waagent[1923]: 2025-01-14T13:21:19.942673Z INFO ExtHandler
Jan 14 13:21:19.942792 waagent[1923]: 2025-01-14T13:21:19.942758Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 14 13:21:19.946333 waagent[1923]: 2025-01-14T13:21:19.946284Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 14 13:21:20.008860 waagent[1923]: 2025-01-14T13:21:20.008730Z INFO ExtHandler Downloaded certificate {'thumbprint': '4EFB95548E9D6F61CB26A81703C8BD64E7C0D9C3', 'hasPrivateKey': True}
Jan 14 13:21:20.009462 waagent[1923]: 2025-01-14T13:21:20.009403Z INFO ExtHandler Fetch goal state completed
Jan 14 13:21:20.021445 waagent[1923]: 2025-01-14T13:21:20.021380Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1923
Jan 14 13:21:20.021596 waagent[1923]: 2025-01-14T13:21:20.021547Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 14 13:21:20.023243 waagent[1923]: 2025-01-14T13:21:20.023183Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.0', '', 'Flatcar Container Linux by Kinvolk']
Jan 14 13:21:20.023602 waagent[1923]: 2025-01-14T13:21:20.023552Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 14 13:21:20.069074 waagent[1923]: 2025-01-14T13:21:20.069022Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 14 13:21:20.069304 waagent[1923]: 2025-01-14T13:21:20.069255Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 14 13:21:20.076089 waagent[1923]: 2025-01-14T13:21:20.076041Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 14 13:21:20.083125 systemd[1]: Reloading requested from client PID 1936 ('systemctl') (unit waagent.service)...
Jan 14 13:21:20.083142 systemd[1]: Reloading...
Jan 14 13:21:20.171644 zram_generator::config[1973]: No configuration found.
Jan 14 13:21:20.289645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:21:20.377938 systemd[1]: Reloading finished in 294 ms.
Jan 14 13:21:20.408638 waagent[1923]: 2025-01-14T13:21:20.403336Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jan 14 13:21:20.410959 systemd[1]: Reloading requested from client PID 2027 ('systemctl') (unit waagent.service)...
Jan 14 13:21:20.410974 systemd[1]: Reloading...
Jan 14 13:21:20.502642 zram_generator::config[2061]: No configuration found.
Jan 14 13:21:20.623533 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:21:20.707178 systemd[1]: Reloading finished in 295 ms.
Jan 14 13:21:20.734940 waagent[1923]: 2025-01-14T13:21:20.733830Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 14 13:21:20.734940 waagent[1923]: 2025-01-14T13:21:20.734045Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 14 13:21:21.400949 waagent[1923]: 2025-01-14T13:21:21.400860Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 14 13:21:21.401626 waagent[1923]: 2025-01-14T13:21:21.401545Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jan 14 13:21:21.402388 waagent[1923]: 2025-01-14T13:21:21.402330Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 14 13:21:21.402511 waagent[1923]: 2025-01-14T13:21:21.402466Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 13:21:21.402667 waagent[1923]: 2025-01-14T13:21:21.402603Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 13:21:21.402982 waagent[1923]: 2025-01-14T13:21:21.402876Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 14 13:21:21.403377 waagent[1923]: 2025-01-14T13:21:21.403330Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 14 13:21:21.403680 waagent[1923]: 2025-01-14T13:21:21.403633Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 14 13:21:21.403680 waagent[1923]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 14 13:21:21.403680 waagent[1923]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Jan 14 13:21:21.403680 waagent[1923]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 14 13:21:21.403680 waagent[1923]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 14 13:21:21.403680 waagent[1923]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 14 13:21:21.403680 waagent[1923]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 14 13:21:21.403958 waagent[1923]: 2025-01-14T13:21:21.403753Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 13:21:21.403958 waagent[1923]: 2025-01-14T13:21:21.403844Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 13:21:21.404085 waagent[1923]: 2025-01-14T13:21:21.404020Z INFO EnvHandler ExtHandler Configure routes
Jan 14 13:21:21.404284 waagent[1923]: 2025-01-14T13:21:21.404115Z INFO EnvHandler ExtHandler Gateway:None
Jan 14 13:21:21.404381 waagent[1923]: 2025-01-14T13:21:21.404313Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 14 13:21:21.404858 waagent[1923]: 2025-01-14T13:21:21.404812Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 14 13:21:21.405075 waagent[1923]: 2025-01-14T13:21:21.405026Z INFO EnvHandler ExtHandler Routes:None
Jan 14 13:21:21.405482 waagent[1923]: 2025-01-14T13:21:21.405410Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 14 13:21:21.405694 waagent[1923]: 2025-01-14T13:21:21.405486Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jan 14 13:21:21.406084 waagent[1923]: 2025-01-14T13:21:21.406041Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 14 13:21:21.412002 waagent[1923]: 2025-01-14T13:21:21.411961Z INFO ExtHandler ExtHandler
Jan 14 13:21:21.412107 waagent[1923]: 2025-01-14T13:21:21.412062Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 23a48f18-e82b-47e5-b339-cdf6a5b19975 correlation 0732f454-6316-4dbe-a15f-2341fb331de8 created: 2025-01-14T13:20:08.360718Z]
Jan 14 13:21:21.413006 waagent[1923]: 2025-01-14T13:21:21.412965Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 14 13:21:21.413594 waagent[1923]: 2025-01-14T13:21:21.413548Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 14 13:21:21.443092 waagent[1923]: 2025-01-14T13:21:21.443034Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 029424FD-08C4-49AB-B6CC-AE79D8FF5906;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 14 13:21:21.449691 waagent[1923]: 2025-01-14T13:21:21.449631Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 13:21:21.449691 waagent[1923]: Executing ['ip', '-a', '-o', 'link']: Jan 14 13:21:21.449691 waagent[1923]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 13:21:21.449691 waagent[1923]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2f:37:f7 brd ff:ff:ff:ff:ff:ff Jan 14 13:21:21.449691 waagent[1923]: 3: enP23632s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2f:37:f7 brd ff:ff:ff:ff:ff:ff\ altname enP23632p0s2 Jan 14 13:21:21.449691 waagent[1923]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 13:21:21.449691 waagent[1923]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 13:21:21.449691 waagent[1923]: 2: eth0 inet 10.200.4.31/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 13:21:21.449691 waagent[1923]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 13:21:21.449691 waagent[1923]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 13:21:21.449691 waagent[1923]: 2: eth0 inet6 fe80::7e1e:52ff:fe2f:37f7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:21:21.449691 waagent[1923]: 3: enP23632s1 inet6 fe80::7e1e:52ff:fe2f:37f7/64 scope link proto 
kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:21:21.537308 waagent[1923]: 2025-01-14T13:21:21.537240Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 14 13:21:21.537308 waagent[1923]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:21.537308 waagent[1923]: pkts bytes target prot opt in out source destination Jan 14 13:21:21.537308 waagent[1923]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:21.537308 waagent[1923]: pkts bytes target prot opt in out source destination Jan 14 13:21:21.537308 waagent[1923]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:21.537308 waagent[1923]: pkts bytes target prot opt in out source destination Jan 14 13:21:21.537308 waagent[1923]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:21:21.537308 waagent[1923]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:21:21.537308 waagent[1923]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 13:21:21.540712 waagent[1923]: 2025-01-14T13:21:21.540552Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 13:21:21.540712 waagent[1923]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:21.540712 waagent[1923]: pkts bytes target prot opt in out source destination Jan 14 13:21:21.540712 waagent[1923]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:21.540712 waagent[1923]: pkts bytes target prot opt in out source destination Jan 14 13:21:21.540712 waagent[1923]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:21:21.540712 waagent[1923]: pkts bytes target prot opt in out source destination Jan 14 13:21:21.540712 waagent[1923]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:21:21.540712 waagent[1923]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:21:21.540712 waagent[1923]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 
14 13:21:21.541092 waagent[1923]: 2025-01-14T13:21:21.540908Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 13:21:27.731746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 13:21:27.738846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:21:27.830690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:21:27.835681 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:21:28.391186 kubelet[2157]: E0114 13:21:28.391102 2157 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:21:28.395367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:21:28.395561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:21:38.527800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 13:21:38.533821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:21:38.674160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:21:38.685978 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:21:39.003176 chronyd[1701]: Selected source PHC0 Jan 14 13:21:39.140813 kubelet[2173]: E0114 13:21:39.140749 2173 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:21:39.143531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:21:39.143737 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:21:45.737932 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 13:21:45.739308 systemd[1]: Started sshd@0-10.200.4.31:22-10.200.16.10:51934.service - OpenSSH per-connection server daemon (10.200.16.10:51934). Jan 14 13:21:46.447856 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 51934 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:21:46.449441 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:21:46.453323 systemd-logind[1698]: New session 3 of user core. Jan 14 13:21:46.462755 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 13:21:46.981743 systemd[1]: Started sshd@1-10.200.4.31:22-10.200.16.10:34912.service - OpenSSH per-connection server daemon (10.200.16.10:34912). Jan 14 13:21:47.591563 sshd[2187]: Accepted publickey for core from 10.200.16.10 port 34912 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:21:47.593123 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:21:47.597785 systemd-logind[1698]: New session 4 of user core. 
Jan 14 13:21:47.606817 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 13:21:48.022422 sshd[2189]: Connection closed by 10.200.16.10 port 34912 Jan 14 13:21:48.023467 sshd-session[2187]: pam_unix(sshd:session): session closed for user core Jan 14 13:21:48.026150 systemd[1]: sshd@1-10.200.4.31:22-10.200.16.10:34912.service: Deactivated successfully. Jan 14 13:21:48.028049 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 13:21:48.029494 systemd-logind[1698]: Session 4 logged out. Waiting for processes to exit. Jan 14 13:21:48.030421 systemd-logind[1698]: Removed session 4. Jan 14 13:21:48.129370 systemd[1]: Started sshd@2-10.200.4.31:22-10.200.16.10:34920.service - OpenSSH per-connection server daemon (10.200.16.10:34920). Jan 14 13:21:48.735493 sshd[2194]: Accepted publickey for core from 10.200.16.10 port 34920 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:21:48.736928 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:21:48.741909 systemd-logind[1698]: New session 5 of user core. Jan 14 13:21:48.751105 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 13:21:49.164799 sshd[2196]: Connection closed by 10.200.16.10 port 34920 Jan 14 13:21:49.165550 sshd-session[2194]: pam_unix(sshd:session): session closed for user core Jan 14 13:21:49.168221 systemd[1]: sshd@2-10.200.4.31:22-10.200.16.10:34920.service: Deactivated successfully. Jan 14 13:21:49.170075 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 13:21:49.171023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 13:21:49.172296 systemd-logind[1698]: Session 5 logged out. Waiting for processes to exit. Jan 14 13:21:49.177833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:21:49.178918 systemd-logind[1698]: Removed session 5. 
Jan 14 13:21:49.272473 systemd[1]: Started sshd@3-10.200.4.31:22-10.200.16.10:34922.service - OpenSSH per-connection server daemon (10.200.16.10:34922). Jan 14 13:21:49.520921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:21:49.528934 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:21:49.800922 kubelet[2211]: E0114 13:21:49.800822 2211 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:21:49.803663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:21:49.803854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:21:49.883037 sshd[2204]: Accepted publickey for core from 10.200.16.10 port 34922 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:21:49.884684 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:21:49.890264 systemd-logind[1698]: New session 6 of user core. Jan 14 13:21:49.895766 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 13:21:50.314278 sshd[2219]: Connection closed by 10.200.16.10 port 34922 Jan 14 13:21:50.315208 sshd-session[2204]: pam_unix(sshd:session): session closed for user core Jan 14 13:21:50.319442 systemd[1]: sshd@3-10.200.4.31:22-10.200.16.10:34922.service: Deactivated successfully. Jan 14 13:21:50.321595 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 13:21:50.322310 systemd-logind[1698]: Session 6 logged out. Waiting for processes to exit. Jan 14 13:21:50.323291 systemd-logind[1698]: Removed session 6. 
Jan 14 13:21:50.420415 systemd[1]: Started sshd@4-10.200.4.31:22-10.200.16.10:34926.service - OpenSSH per-connection server daemon (10.200.16.10:34926). Jan 14 13:21:51.026697 sshd[2224]: Accepted publickey for core from 10.200.16.10 port 34926 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:21:51.028304 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:21:51.033807 systemd-logind[1698]: New session 7 of user core. Jan 14 13:21:51.039769 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 13:21:51.529519 sudo[2227]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 13:21:51.529914 sudo[2227]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:21:51.562051 sudo[2227]: pam_unix(sudo:session): session closed for user root Jan 14 13:21:51.662332 sshd[2226]: Connection closed by 10.200.16.10 port 34926 Jan 14 13:21:51.663502 sshd-session[2224]: pam_unix(sshd:session): session closed for user core Jan 14 13:21:51.666465 systemd[1]: sshd@4-10.200.4.31:22-10.200.16.10:34926.service: Deactivated successfully. Jan 14 13:21:51.668390 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 13:21:51.669980 systemd-logind[1698]: Session 7 logged out. Waiting for processes to exit. Jan 14 13:21:51.671090 systemd-logind[1698]: Removed session 7. Jan 14 13:21:51.777914 systemd[1]: Started sshd@5-10.200.4.31:22-10.200.16.10:34930.service - OpenSSH per-connection server daemon (10.200.16.10:34930). Jan 14 13:21:52.382569 sshd[2232]: Accepted publickey for core from 10.200.16.10 port 34930 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:21:52.384242 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:21:52.389896 systemd-logind[1698]: New session 8 of user core. Jan 14 13:21:52.392981 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 13:21:52.720084 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 13:21:52.720448 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:21:52.724582 sudo[2236]: pam_unix(sudo:session): session closed for user root Jan 14 13:21:52.729770 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 13:21:52.730106 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:21:52.749011 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:21:52.778433 augenrules[2258]: No rules Jan 14 13:21:52.779841 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:21:52.780081 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:21:52.781249 sudo[2235]: pam_unix(sudo:session): session closed for user root Jan 14 13:21:52.879883 sshd[2234]: Connection closed by 10.200.16.10 port 34930 Jan 14 13:21:52.880595 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Jan 14 13:21:52.883652 systemd[1]: sshd@5-10.200.4.31:22-10.200.16.10:34930.service: Deactivated successfully. Jan 14 13:21:52.885806 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 13:21:52.887574 systemd-logind[1698]: Session 8 logged out. Waiting for processes to exit. Jan 14 13:21:52.888628 systemd-logind[1698]: Removed session 8. Jan 14 13:21:52.992887 systemd[1]: Started sshd@6-10.200.4.31:22-10.200.16.10:34934.service - OpenSSH per-connection server daemon (10.200.16.10:34934). 
Jan 14 13:21:53.599159 sshd[2266]: Accepted publickey for core from 10.200.16.10 port 34934 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:21:53.600641 sshd-session[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:21:53.605828 systemd-logind[1698]: New session 9 of user core. Jan 14 13:21:53.614773 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 13:21:53.935078 sudo[2269]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 13:21:53.935437 sudo[2269]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:21:55.163241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:21:55.168894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:21:55.197198 systemd[1]: Reloading requested from client PID 2307 ('systemctl') (unit session-9.scope)... Jan 14 13:21:55.197374 systemd[1]: Reloading... Jan 14 13:21:55.297645 zram_generator::config[2346]: No configuration found. Jan 14 13:21:55.442146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:21:55.532195 systemd[1]: Reloading finished in 334 ms. Jan 14 13:21:55.583067 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 13:21:55.583136 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 13:21:55.583409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:21:55.589114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:21:55.798456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:21:55.804355 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:21:56.442023 kubelet[2416]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:21:56.442023 kubelet[2416]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 13:21:56.442023 kubelet[2416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:21:56.442503 kubelet[2416]: I0114 13:21:56.442084 2416 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:21:56.839819 kubelet[2416]: I0114 13:21:56.839782 2416 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 14 13:21:56.839819 kubelet[2416]: I0114 13:21:56.839810 2416 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:21:56.840088 kubelet[2416]: I0114 13:21:56.840061 2416 server.go:919] "Client rotation is on, will bootstrap in background" Jan 14 13:21:56.859698 kubelet[2416]: I0114 13:21:56.859241 2416 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:21:56.874863 kubelet[2416]: I0114 13:21:56.874830 2416 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:21:56.875104 kubelet[2416]: I0114 13:21:56.875086 2416 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:21:56.875285 kubelet[2416]: I0114 13:21:56.875265 2416 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 13:21:56.875437 kubelet[2416]: I0114 13:21:56.875296 2416 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:21:56.875437 kubelet[2416]: I0114 13:21:56.875310 2416 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 13:21:56.875437 kubelet[2416]: I0114 
13:21:56.875431 2416 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:21:56.875557 kubelet[2416]: I0114 13:21:56.875537 2416 kubelet.go:396] "Attempting to sync node with API server" Jan 14 13:21:56.875557 kubelet[2416]: I0114 13:21:56.875553 2416 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:21:56.875775 kubelet[2416]: I0114 13:21:56.875585 2416 kubelet.go:312] "Adding apiserver pod source" Jan 14 13:21:56.875775 kubelet[2416]: I0114 13:21:56.875604 2416 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:21:56.876992 kubelet[2416]: E0114 13:21:56.876715 2416 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:21:56.876992 kubelet[2416]: E0114 13:21:56.876958 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:21:56.877214 kubelet[2416]: I0114 13:21:56.877191 2416 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 14 13:21:56.880123 kubelet[2416]: I0114 13:21:56.880101 2416 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:21:56.880214 kubelet[2416]: W0114 13:21:56.880168 2416 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 14 13:21:56.880799 kubelet[2416]: I0114 13:21:56.880717 2416 server.go:1256] "Started kubelet" Jan 14 13:21:56.882344 kubelet[2416]: I0114 13:21:56.882213 2416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:21:56.889396 kubelet[2416]: I0114 13:21:56.889377 2416 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:21:56.890678 kubelet[2416]: I0114 13:21:56.890488 2416 server.go:461] "Adding debug handlers to kubelet server" Jan 14 13:21:56.891198 kubelet[2416]: E0114 13:21:56.891173 2416 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.4.31.181a91cd36eedbd4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.4.31,UID:10.200.4.31,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.4.31,},FirstTimestamp:2025-01-14 13:21:56.880694228 +0000 UTC m=+1.072366538,LastTimestamp:2025-01-14 13:21:56.880694228 +0000 UTC m=+1.072366538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.4.31,}" Jan 14 13:21:56.891317 kubelet[2416]: W0114 13:21:56.891281 2416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 14 13:21:56.891317 kubelet[2416]: E0114 13:21:56.891314 2416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 14 13:21:56.892985 kubelet[2416]: W0114 
13:21:56.891426 2416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.200.4.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 14 13:21:56.892985 kubelet[2416]: E0114 13:21:56.891449 2416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.4.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 14 13:21:56.892985 kubelet[2416]: I0114 13:21:56.892195 2416 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:21:56.892985 kubelet[2416]: I0114 13:21:56.892375 2416 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:21:56.893754 kubelet[2416]: I0114 13:21:56.893734 2416 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 13:21:56.898707 kubelet[2416]: I0114 13:21:56.898427 2416 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 14 13:21:56.899488 kubelet[2416]: I0114 13:21:56.899462 2416 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:21:56.899594 kubelet[2416]: I0114 13:21:56.899572 2416 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:21:56.899966 kubelet[2416]: I0114 13:21:56.899943 2416 reconciler_new.go:29] "Reconciler: start to sync state" Jan 14 13:21:56.900653 kubelet[2416]: E0114 13:21:56.900631 2416 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.4.31\" not found" node="10.200.4.31" Jan 14 13:21:56.901516 kubelet[2416]: E0114 13:21:56.901494 2416 kubelet.go:1462] "Image garbage collection failed 
once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:21:56.902667 kubelet[2416]: I0114 13:21:56.902604 2416 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:21:56.927831 kubelet[2416]: I0114 13:21:56.927555 2416 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 13:21:56.927831 kubelet[2416]: I0114 13:21:56.927573 2416 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 13:21:56.927831 kubelet[2416]: I0114 13:21:56.927596 2416 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:21:56.933461 kubelet[2416]: I0114 13:21:56.933435 2416 policy_none.go:49] "None policy: Start" Jan 14 13:21:56.934252 kubelet[2416]: I0114 13:21:56.933986 2416 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 13:21:56.934252 kubelet[2416]: I0114 13:21:56.934006 2416 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:21:56.942538 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 13:21:56.950444 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 13:21:56.953690 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 13:21:56.960746 kubelet[2416]: I0114 13:21:56.960719 2416 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:21:56.962674 kubelet[2416]: I0114 13:21:56.962495 2416 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:21:56.964229 kubelet[2416]: I0114 13:21:56.964211 2416 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:21:56.965620 kubelet[2416]: I0114 13:21:56.965588 2416 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 14 13:21:56.966130 kubelet[2416]: I0114 13:21:56.965747 2416 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 13:21:56.966130 kubelet[2416]: I0114 13:21:56.965773 2416 kubelet.go:2329] "Starting kubelet main sync loop" Jan 14 13:21:56.966130 kubelet[2416]: E0114 13:21:56.965880 2416 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 14 13:21:56.968887 kubelet[2416]: E0114 13:21:56.968870 2416 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.4.31\" not found" Jan 14 13:21:56.995387 kubelet[2416]: I0114 13:21:56.995363 2416 kubelet_node_status.go:73] "Attempting to register node" node="10.200.4.31" Jan 14 13:21:57.001573 kubelet[2416]: I0114 13:21:57.001544 2416 kubelet_node_status.go:76] "Successfully registered node" node="10.200.4.31" Jan 14 13:21:57.029681 kubelet[2416]: E0114 13:21:57.029646 2416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.31\" not found" Jan 14 13:21:57.130531 kubelet[2416]: E0114 13:21:57.130373 2416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.31\" not found" Jan 14 13:21:57.230992 kubelet[2416]: E0114 13:21:57.230942 2416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.31\" not found" Jan 14 13:21:57.331573 kubelet[2416]: E0114 13:21:57.331519 2416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.31\" not found" Jan 14 13:21:57.432475 kubelet[2416]: E0114 13:21:57.432337 2416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.31\" not found" Jan 14 13:21:57.533025 kubelet[2416]: E0114 13:21:57.532971 2416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.31\" not found" Jan 14 13:21:57.633730 
kubelet[2416]: E0114 13:21:57.633671 2416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.31\" not found" Jan 14 13:21:57.734560 kubelet[2416]: E0114 13:21:57.734420 2416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.31\" not found" Jan 14 13:21:57.836414 kubelet[2416]: I0114 13:21:57.836368 2416 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 14 13:21:57.836867 containerd[1714]: time="2025-01-14T13:21:57.836835140Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 13:21:57.837283 kubelet[2416]: I0114 13:21:57.837099 2416 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 14 13:21:57.841844 kubelet[2416]: I0114 13:21:57.841821 2416 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 14 13:21:57.842040 kubelet[2416]: W0114 13:21:57.841997 2416 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 14 13:21:57.842040 kubelet[2416]: W0114 13:21:57.842006 2416 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 14 13:21:57.842040 kubelet[2416]: W0114 13:21:57.842027 2416 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 14 13:21:57.877369 kubelet[2416]: 
E0114 13:21:57.877298 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:21:57.877369 kubelet[2416]: I0114 13:21:57.877313 2416 apiserver.go:52] "Watching apiserver" Jan 14 13:21:57.887149 kubelet[2416]: I0114 13:21:57.887110 2416 topology_manager.go:215] "Topology Admit Handler" podUID="95fbb7a6-a96c-48a4-b895-b75cb6b713c5" podNamespace="calico-system" podName="calico-node-2kw87" Jan 14 13:21:57.887300 kubelet[2416]: I0114 13:21:57.887251 2416 topology_manager.go:215] "Topology Admit Handler" podUID="cb9275d6-51d8-4705-9177-49dadf876371" podNamespace="calico-system" podName="csi-node-driver-hl8qh" Jan 14 13:21:57.887372 kubelet[2416]: I0114 13:21:57.887321 2416 topology_manager.go:215] "Topology Admit Handler" podUID="623a9059-5f42-45fd-b644-2a07320949b5" podNamespace="kube-system" podName="kube-proxy-mg4dl" Jan 14 13:21:57.888057 kubelet[2416]: E0114 13:21:57.887637 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371" Jan 14 13:21:57.896407 systemd[1]: Created slice kubepods-besteffort-pod623a9059_5f42_45fd_b644_2a07320949b5.slice - libcontainer container kubepods-besteffort-pod623a9059_5f42_45fd_b644_2a07320949b5.slice. 
Jan 14 13:21:57.901263 kubelet[2416]: I0114 13:21:57.900649 2416 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 14 13:21:57.907048 kubelet[2416]: I0114 13:21:57.907026 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/623a9059-5f42-45fd-b644-2a07320949b5-xtables-lock\") pod \"kube-proxy-mg4dl\" (UID: \"623a9059-5f42-45fd-b644-2a07320949b5\") " pod="kube-system/kube-proxy-mg4dl" Jan 14 13:21:57.907316 kubelet[2416]: I0114 13:21:57.907063 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-cni-log-dir\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.907316 kubelet[2416]: I0114 13:21:57.907103 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-node-certs\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.907316 kubelet[2416]: I0114 13:21:57.907153 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-var-run-calico\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.907316 kubelet[2416]: I0114 13:21:57.907216 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb9275d6-51d8-4705-9177-49dadf876371-kubelet-dir\") pod \"csi-node-driver-hl8qh\" (UID: 
\"cb9275d6-51d8-4705-9177-49dadf876371\") " pod="calico-system/csi-node-driver-hl8qh" Jan 14 13:21:57.907316 kubelet[2416]: I0114 13:21:57.907289 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb9275d6-51d8-4705-9177-49dadf876371-socket-dir\") pod \"csi-node-driver-hl8qh\" (UID: \"cb9275d6-51d8-4705-9177-49dadf876371\") " pod="calico-system/csi-node-driver-hl8qh" Jan 14 13:21:57.907530 kubelet[2416]: I0114 13:21:57.907336 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cb9275d6-51d8-4705-9177-49dadf876371-registration-dir\") pod \"csi-node-driver-hl8qh\" (UID: \"cb9275d6-51d8-4705-9177-49dadf876371\") " pod="calico-system/csi-node-driver-hl8qh" Jan 14 13:21:57.907530 kubelet[2416]: I0114 13:21:57.907377 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/623a9059-5f42-45fd-b644-2a07320949b5-lib-modules\") pod \"kube-proxy-mg4dl\" (UID: \"623a9059-5f42-45fd-b644-2a07320949b5\") " pod="kube-system/kube-proxy-mg4dl" Jan 14 13:21:57.907530 kubelet[2416]: I0114 13:21:57.907408 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfjgb\" (UniqueName: \"kubernetes.io/projected/623a9059-5f42-45fd-b644-2a07320949b5-kube-api-access-tfjgb\") pod \"kube-proxy-mg4dl\" (UID: \"623a9059-5f42-45fd-b644-2a07320949b5\") " pod="kube-system/kube-proxy-mg4dl" Jan 14 13:21:57.907530 kubelet[2416]: I0114 13:21:57.907434 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-xtables-lock\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " 
pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.907530 kubelet[2416]: I0114 13:21:57.907462 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsr85\" (UniqueName: \"kubernetes.io/projected/cb9275d6-51d8-4705-9177-49dadf876371-kube-api-access-jsr85\") pod \"csi-node-driver-hl8qh\" (UID: \"cb9275d6-51d8-4705-9177-49dadf876371\") " pod="calico-system/csi-node-driver-hl8qh" Jan 14 13:21:57.907747 kubelet[2416]: I0114 13:21:57.907489 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-flexvol-driver-host\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.907747 kubelet[2416]: I0114 13:21:57.907519 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdb7g\" (UniqueName: \"kubernetes.io/projected/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-kube-api-access-tdb7g\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.907747 kubelet[2416]: I0114 13:21:57.907558 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/623a9059-5f42-45fd-b644-2a07320949b5-kube-proxy\") pod \"kube-proxy-mg4dl\" (UID: \"623a9059-5f42-45fd-b644-2a07320949b5\") " pod="kube-system/kube-proxy-mg4dl" Jan 14 13:21:57.907747 kubelet[2416]: I0114 13:21:57.907593 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-lib-modules\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " 
pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.908760 kubelet[2416]: I0114 13:21:57.908216 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-cni-bin-dir\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.908760 kubelet[2416]: I0114 13:21:57.908375 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-cni-net-dir\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.908760 kubelet[2416]: I0114 13:21:57.908409 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cb9275d6-51d8-4705-9177-49dadf876371-varrun\") pod \"csi-node-driver-hl8qh\" (UID: \"cb9275d6-51d8-4705-9177-49dadf876371\") " pod="calico-system/csi-node-driver-hl8qh" Jan 14 13:21:57.908760 kubelet[2416]: I0114 13:21:57.908451 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-policysync\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.908760 kubelet[2416]: I0114 13:21:57.908479 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-tigera-ca-bundle\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.908991 kubelet[2416]: I0114 
13:21:57.908532 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95fbb7a6-a96c-48a4-b895-b75cb6b713c5-var-lib-calico\") pod \"calico-node-2kw87\" (UID: \"95fbb7a6-a96c-48a4-b895-b75cb6b713c5\") " pod="calico-system/calico-node-2kw87" Jan 14 13:21:57.912824 systemd[1]: Created slice kubepods-besteffort-pod95fbb7a6_a96c_48a4_b895_b75cb6b713c5.slice - libcontainer container kubepods-besteffort-pod95fbb7a6_a96c_48a4_b895_b75cb6b713c5.slice. Jan 14 13:21:58.017700 kubelet[2416]: E0114 13:21:58.014827 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.017700 kubelet[2416]: W0114 13:21:58.014864 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.017700 kubelet[2416]: E0114 13:21:58.014888 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.017700 kubelet[2416]: E0114 13:21:58.015208 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.017700 kubelet[2416]: W0114 13:21:58.015235 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.017700 kubelet[2416]: E0114 13:21:58.015255 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:21:58.017700 kubelet[2416]: E0114 13:21:58.015469 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.017700 kubelet[2416]: W0114 13:21:58.015481 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.017700 kubelet[2416]: E0114 13:21:58.015497 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.017700 kubelet[2416]: E0114 13:21:58.016897 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.018172 kubelet[2416]: W0114 13:21:58.016918 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.018172 kubelet[2416]: E0114 13:21:58.016936 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:21:58.027138 kubelet[2416]: E0114 13:21:58.018460 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.027138 kubelet[2416]: W0114 13:21:58.018570 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.027138 kubelet[2416]: E0114 13:21:58.018595 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.027138 kubelet[2416]: E0114 13:21:58.019005 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.027138 kubelet[2416]: W0114 13:21:58.019018 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.027138 kubelet[2416]: E0114 13:21:58.019035 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:21:58.027138 kubelet[2416]: E0114 13:21:58.019325 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.027138 kubelet[2416]: W0114 13:21:58.019859 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.027138 kubelet[2416]: E0114 13:21:58.019876 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.034817 kubelet[2416]: E0114 13:21:58.034794 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.034958 kubelet[2416]: W0114 13:21:58.034941 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.036702 kubelet[2416]: E0114 13:21:58.036682 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.036867 kubelet[2416]: W0114 13:21:58.036853 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.039760 kubelet[2416]: E0114 13:21:58.039744 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.039864 kubelet[2416]: W0114 13:21:58.039851 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.040563 kubelet[2416]: E0114 13:21:58.040543 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.040825 kubelet[2416]: E0114 13:21:58.040812 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.040970 kubelet[2416]: E0114 13:21:58.040959 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.041532 kubelet[2416]: E0114 13:21:58.041517 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.041652 kubelet[2416]: W0114 13:21:58.041636 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.041926 kubelet[2416]: E0114 13:21:58.041913 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.042012 kubelet[2416]: W0114 13:21:58.041999 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.042269 kubelet[2416]: E0114 13:21:58.042249 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.042343 kubelet[2416]: W0114 13:21:58.042332 2416 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.042589 kubelet[2416]: E0114 13:21:58.042576 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.042698 kubelet[2416]: W0114 13:21:58.042679 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.042946 kubelet[2416]: E0114 13:21:58.042932 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.043036 kubelet[2416]: W0114 13:21:58.043021 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.043109 kubelet[2416]: E0114 13:21:58.043098 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.043604 kubelet[2416]: E0114 13:21:58.043580 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.043710 kubelet[2416]: E0114 13:21:58.043640 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.043710 kubelet[2416]: E0114 13:21:58.043705 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:21:58.043795 kubelet[2416]: E0114 13:21:58.043734 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.043894 kubelet[2416]: E0114 13:21:58.043883 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.043955 kubelet[2416]: W0114 13:21:58.043945 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.044026 kubelet[2416]: E0114 13:21:58.044018 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.044563 kubelet[2416]: E0114 13:21:58.044542 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.044680 kubelet[2416]: W0114 13:21:58.044668 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.044765 kubelet[2416]: E0114 13:21:58.044757 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:21:58.045067 kubelet[2416]: E0114 13:21:58.045042 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.045161 kubelet[2416]: W0114 13:21:58.045151 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.045245 kubelet[2416]: E0114 13:21:58.045237 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.045546 kubelet[2416]: E0114 13:21:58.045520 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.045765 kubelet[2416]: W0114 13:21:58.045690 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.045765 kubelet[2416]: E0114 13:21:58.045716 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:21:58.046187 kubelet[2416]: E0114 13:21:58.046174 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.046287 kubelet[2416]: W0114 13:21:58.046275 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.046439 kubelet[2416]: E0114 13:21:58.046338 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:21:58.046879 kubelet[2416]: E0114 13:21:58.046865 2416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:21:58.047063 kubelet[2416]: W0114 13:21:58.046969 2416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:21:58.047317 kubelet[2416]: E0114 13:21:58.047116 2416 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:21:58.210810 containerd[1714]: time="2025-01-14T13:21:58.210763411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mg4dl,Uid:623a9059-5f42-45fd-b644-2a07320949b5,Namespace:kube-system,Attempt:0,}" Jan 14 13:21:58.216519 containerd[1714]: time="2025-01-14T13:21:58.216481601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2kw87,Uid:95fbb7a6-a96c-48a4-b895-b75cb6b713c5,Namespace:calico-system,Attempt:0,}" Jan 14 13:21:58.323118 sudo[2269]: pam_unix(sudo:session): session closed for user root Jan 14 13:21:58.429631 sshd[2268]: Connection closed by 10.200.16.10 port 34934 Jan 14 13:21:58.430364 sshd-session[2266]: pam_unix(sshd:session): session closed for user core Jan 14 13:21:58.433639 systemd[1]: sshd@6-10.200.4.31:22-10.200.16.10:34934.service: Deactivated successfully. Jan 14 13:21:58.435833 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 13:21:58.437266 systemd-logind[1698]: Session 9 logged out. Waiting for processes to exit. Jan 14 13:21:58.438337 systemd-logind[1698]: Removed session 9. Jan 14 13:21:58.735131 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jan 14 13:21:58.877923 kubelet[2416]: E0114 13:21:58.877840 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:21:58.955625 containerd[1714]: time="2025-01-14T13:21:58.955559705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:21:58.960820 containerd[1714]: time="2025-01-14T13:21:58.960767787Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 14 13:21:58.963868 containerd[1714]: time="2025-01-14T13:21:58.963829635Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:21:58.969070 containerd[1714]: time="2025-01-14T13:21:58.967119787Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:21:58.969179 kubelet[2416]: E0114 13:21:58.968630 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371" Jan 14 13:21:58.969968 containerd[1714]: time="2025-01-14T13:21:58.969896731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:21:58.972918 containerd[1714]: time="2025-01-14T13:21:58.972883177Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:21:58.974092 containerd[1714]: time="2025-01-14T13:21:58.974061296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 763.178183ms" Jan 14 13:21:58.975838 containerd[1714]: time="2025-01-14T13:21:58.975802223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 759.20172ms" Jan 14 13:21:59.020022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276780617.mount: Deactivated successfully. Jan 14 13:21:59.807260 containerd[1714]: time="2025-01-14T13:21:59.804104202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:21:59.808051 containerd[1714]: time="2025-01-14T13:21:59.807352329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:21:59.808051 containerd[1714]: time="2025-01-14T13:21:59.807386630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:21:59.808051 containerd[1714]: time="2025-01-14T13:21:59.807491531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:21:59.808369 containerd[1714]: time="2025-01-14T13:21:59.808289537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:21:59.808369 containerd[1714]: time="2025-01-14T13:21:59.808339338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:21:59.808630 containerd[1714]: time="2025-01-14T13:21:59.808355838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:21:59.808630 containerd[1714]: time="2025-01-14T13:21:59.808446639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:21:59.878391 kubelet[2416]: E0114 13:21:59.878309 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:00.246798 systemd[1]: Started cri-containerd-64968e8dffee3b83593ce17df8cd73e0da28fcd76e6ec9892727c2847bb25919.scope - libcontainer container 64968e8dffee3b83593ce17df8cd73e0da28fcd76e6ec9892727c2847bb25919. Jan 14 13:22:00.248330 systemd[1]: Started cri-containerd-de225026e932488e6ed28d978e2a8b2ea780526b4fc32a423081517d7b1eab67.scope - libcontainer container de225026e932488e6ed28d978e2a8b2ea780526b4fc32a423081517d7b1eab67. 
Jan 14 13:22:00.284208 containerd[1714]: time="2025-01-14T13:22:00.284152395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2kw87,Uid:95fbb7a6-a96c-48a4-b895-b75cb6b713c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"de225026e932488e6ed28d978e2a8b2ea780526b4fc32a423081517d7b1eab67\"" Jan 14 13:22:00.286835 containerd[1714]: time="2025-01-14T13:22:00.286799217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 14 13:22:00.289023 containerd[1714]: time="2025-01-14T13:22:00.288826034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mg4dl,Uid:623a9059-5f42-45fd-b644-2a07320949b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"64968e8dffee3b83593ce17df8cd73e0da28fcd76e6ec9892727c2847bb25919\"" Jan 14 13:22:00.773803 update_engine[1700]: I20250114 13:22:00.773694 1700 update_attempter.cc:509] Updating boot flags... Jan 14 13:22:00.831666 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2576) Jan 14 13:22:00.878722 kubelet[2416]: E0114 13:22:00.878666 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:00.953661 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2576) Jan 14 13:22:00.966945 kubelet[2416]: E0114 13:22:00.966913 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371" Jan 14 13:22:01.112768 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2576) Jan 14 13:22:01.443652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3750363554.mount: Deactivated successfully. 
Jan 14 13:22:01.576074 containerd[1714]: time="2025-01-14T13:22:01.576014139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:01.578095 containerd[1714]: time="2025-01-14T13:22:01.577992355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 14 13:22:01.582158 containerd[1714]: time="2025-01-14T13:22:01.582096990Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:01.585412 containerd[1714]: time="2025-01-14T13:22:01.585357917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:01.586584 containerd[1714]: time="2025-01-14T13:22:01.585963122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.299121405s"
Jan 14 13:22:01.586584 containerd[1714]: time="2025-01-14T13:22:01.586000222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 14 13:22:01.587284 containerd[1714]: time="2025-01-14T13:22:01.587180532Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 14 13:22:01.588178 containerd[1714]: time="2025-01-14T13:22:01.588150240Z" level=info msg="CreateContainer within sandbox \"de225026e932488e6ed28d978e2a8b2ea780526b4fc32a423081517d7b1eab67\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 14 13:22:01.625548 containerd[1714]: time="2025-01-14T13:22:01.625507651Z" level=info msg="CreateContainer within sandbox \"de225026e932488e6ed28d978e2a8b2ea780526b4fc32a423081517d7b1eab67\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d93c0fd734e706e2af2ae50eb060090cd7e30578a9e9c13362f857fe0479b78d\""
Jan 14 13:22:01.626235 containerd[1714]: time="2025-01-14T13:22:01.626188556Z" level=info msg="StartContainer for \"d93c0fd734e706e2af2ae50eb060090cd7e30578a9e9c13362f857fe0479b78d\""
Jan 14 13:22:01.659768 systemd[1]: Started cri-containerd-d93c0fd734e706e2af2ae50eb060090cd7e30578a9e9c13362f857fe0479b78d.scope - libcontainer container d93c0fd734e706e2af2ae50eb060090cd7e30578a9e9c13362f857fe0479b78d.
Jan 14 13:22:01.690853 containerd[1714]: time="2025-01-14T13:22:01.690811494Z" level=info msg="StartContainer for \"d93c0fd734e706e2af2ae50eb060090cd7e30578a9e9c13362f857fe0479b78d\" returns successfully"
Jan 14 13:22:01.698178 systemd[1]: cri-containerd-d93c0fd734e706e2af2ae50eb060090cd7e30578a9e9c13362f857fe0479b78d.scope: Deactivated successfully.
Jan 14 13:22:01.879933 kubelet[2416]: E0114 13:22:01.879855 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:01.900260 containerd[1714]: time="2025-01-14T13:22:01.900193935Z" level=info msg="shim disconnected" id=d93c0fd734e706e2af2ae50eb060090cd7e30578a9e9c13362f857fe0479b78d namespace=k8s.io
Jan 14 13:22:01.900627 containerd[1714]: time="2025-01-14T13:22:01.900473337Z" level=warning msg="cleaning up after shim disconnected" id=d93c0fd734e706e2af2ae50eb060090cd7e30578a9e9c13362f857fe0479b78d namespace=k8s.io
Jan 14 13:22:01.900627 containerd[1714]: time="2025-01-14T13:22:01.900514938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:22:02.410506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d93c0fd734e706e2af2ae50eb060090cd7e30578a9e9c13362f857fe0479b78d-rootfs.mount: Deactivated successfully.
Jan 14 13:22:02.880380 kubelet[2416]: E0114 13:22:02.880320 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:02.969649 kubelet[2416]: E0114 13:22:02.968102 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:02.992068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4122903012.mount: Deactivated successfully.
Jan 14 13:22:03.493218 containerd[1714]: time="2025-01-14T13:22:03.493166411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:03.495544 containerd[1714]: time="2025-01-14T13:22:03.495487527Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966"
Jan 14 13:22:03.498467 containerd[1714]: time="2025-01-14T13:22:03.498413946Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:03.501720 containerd[1714]: time="2025-01-14T13:22:03.501669267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:03.502716 containerd[1714]: time="2025-01-14T13:22:03.502233171Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.915018639s"
Jan 14 13:22:03.502716 containerd[1714]: time="2025-01-14T13:22:03.502269871Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Jan 14 13:22:03.503182 containerd[1714]: time="2025-01-14T13:22:03.503157177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 14 13:22:03.504149 containerd[1714]: time="2025-01-14T13:22:03.504122684Z" level=info msg="CreateContainer within sandbox \"64968e8dffee3b83593ce17df8cd73e0da28fcd76e6ec9892727c2847bb25919\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 14 13:22:03.543405 containerd[1714]: time="2025-01-14T13:22:03.543353044Z" level=info msg="CreateContainer within sandbox \"64968e8dffee3b83593ce17df8cd73e0da28fcd76e6ec9892727c2847bb25919\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41c06e4dbf75a48c90c1aabbd9ebb4f440933ae96685cf63a184b7675002c625\""
Jan 14 13:22:03.544152 containerd[1714]: time="2025-01-14T13:22:03.544112049Z" level=info msg="StartContainer for \"41c06e4dbf75a48c90c1aabbd9ebb4f440933ae96685cf63a184b7675002c625\""
Jan 14 13:22:03.575773 systemd[1]: Started cri-containerd-41c06e4dbf75a48c90c1aabbd9ebb4f440933ae96685cf63a184b7675002c625.scope - libcontainer container 41c06e4dbf75a48c90c1aabbd9ebb4f440933ae96685cf63a184b7675002c625.
Jan 14 13:22:03.607483 containerd[1714]: time="2025-01-14T13:22:03.606993065Z" level=info msg="StartContainer for \"41c06e4dbf75a48c90c1aabbd9ebb4f440933ae96685cf63a184b7675002c625\" returns successfully"
Jan 14 13:22:03.881073 kubelet[2416]: E0114 13:22:03.881007 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:04.007744 kubelet[2416]: I0114 13:22:04.007708 2416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mg4dl" podStartSLOduration=3.795257091 podStartE2EDuration="7.00766212s" podCreationTimestamp="2025-01-14 13:21:57 +0000 UTC" firstStartedPulling="2025-01-14 13:22:00.290234545 +0000 UTC m=+4.481906755" lastFinishedPulling="2025-01-14 13:22:03.502639474 +0000 UTC m=+7.694311784" observedRunningTime="2025-01-14 13:22:04.00760502 +0000 UTC m=+8.199277230" watchObservedRunningTime="2025-01-14 13:22:04.00766212 +0000 UTC m=+8.199334330"
Jan 14 13:22:04.882059 kubelet[2416]: E0114 13:22:04.882000 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:04.967115 kubelet[2416]: E0114 13:22:04.966593 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:05.882219 kubelet[2416]: E0114 13:22:05.882167 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:06.882378 kubelet[2416]: E0114 13:22:06.882285 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:06.967416 kubelet[2416]: E0114 13:22:06.966692 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:07.727722 containerd[1714]: time="2025-01-14T13:22:07.727669671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:07.729405 containerd[1714]: time="2025-01-14T13:22:07.729345982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 14 13:22:07.732260 containerd[1714]: time="2025-01-14T13:22:07.732222301Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:07.736483 containerd[1714]: time="2025-01-14T13:22:07.736297628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:07.737010 containerd[1714]: time="2025-01-14T13:22:07.736976433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.233622154s"
Jan 14 13:22:07.737102 containerd[1714]: time="2025-01-14T13:22:07.737014733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 14 13:22:07.739102 containerd[1714]: time="2025-01-14T13:22:07.739066446Z" level=info msg="CreateContainer within sandbox \"de225026e932488e6ed28d978e2a8b2ea780526b4fc32a423081517d7b1eab67\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 14 13:22:07.770520 containerd[1714]: time="2025-01-14T13:22:07.770477455Z" level=info msg="CreateContainer within sandbox \"de225026e932488e6ed28d978e2a8b2ea780526b4fc32a423081517d7b1eab67\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ead8b98842c753b993f583531390699c15a8504117cb6d8bfdee33ea069457a9\""
Jan 14 13:22:07.771001 containerd[1714]: time="2025-01-14T13:22:07.770953458Z" level=info msg="StartContainer for \"ead8b98842c753b993f583531390699c15a8504117cb6d8bfdee33ea069457a9\""
Jan 14 13:22:07.804773 systemd[1]: Started cri-containerd-ead8b98842c753b993f583531390699c15a8504117cb6d8bfdee33ea069457a9.scope - libcontainer container ead8b98842c753b993f583531390699c15a8504117cb6d8bfdee33ea069457a9.
Jan 14 13:22:07.834831 containerd[1714]: time="2025-01-14T13:22:07.834785881Z" level=info msg="StartContainer for \"ead8b98842c753b993f583531390699c15a8504117cb6d8bfdee33ea069457a9\" returns successfully"
Jan 14 13:22:07.883894 kubelet[2416]: E0114 13:22:07.883298 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:08.883588 kubelet[2416]: E0114 13:22:08.883530 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:08.967570 kubelet[2416]: E0114 13:22:08.967130 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:09.211873 containerd[1714]: time="2025-01-14T13:22:09.211716105Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 13:22:09.214334 kubelet[2416]: I0114 13:22:09.214278 2416 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 14 13:22:09.215136 systemd[1]: cri-containerd-ead8b98842c753b993f583531390699c15a8504117cb6d8bfdee33ea069457a9.scope: Deactivated successfully.
Jan 14 13:22:09.237908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ead8b98842c753b993f583531390699c15a8504117cb6d8bfdee33ea069457a9-rootfs.mount: Deactivated successfully.
Jan 14 13:22:09.884252 kubelet[2416]: E0114 13:22:09.884198 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:10.884678 kubelet[2416]: E0114 13:22:10.884628 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:10.972021 systemd[1]: Created slice kubepods-besteffort-podcb9275d6_51d8_4705_9177_49dadf876371.slice - libcontainer container kubepods-besteffort-podcb9275d6_51d8_4705_9177_49dadf876371.slice.
Jan 14 13:22:10.974422 containerd[1714]: time="2025-01-14T13:22:10.974386185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:0,}"
Jan 14 13:22:11.578997 containerd[1714]: time="2025-01-14T13:22:11.578778247Z" level=info msg="shim disconnected" id=ead8b98842c753b993f583531390699c15a8504117cb6d8bfdee33ea069457a9 namespace=k8s.io
Jan 14 13:22:11.578997 containerd[1714]: time="2025-01-14T13:22:11.578841848Z" level=warning msg="cleaning up after shim disconnected" id=ead8b98842c753b993f583531390699c15a8504117cb6d8bfdee33ea069457a9 namespace=k8s.io
Jan 14 13:22:11.578997 containerd[1714]: time="2025-01-14T13:22:11.578853648Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:22:11.604349 containerd[1714]: time="2025-01-14T13:22:11.603273410Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:22:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 14 13:22:11.641261 containerd[1714]: time="2025-01-14T13:22:11.641212616Z" level=error msg="Failed to destroy network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:11.643636 containerd[1714]: time="2025-01-14T13:22:11.641560320Z" level=error msg="encountered an error cleaning up failed sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:11.643636 containerd[1714]: time="2025-01-14T13:22:11.641656121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:11.643299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f-shm.mount: Deactivated successfully.
Jan 14 13:22:11.644105 kubelet[2416]: E0114 13:22:11.644064 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:11.644209 kubelet[2416]: E0114 13:22:11.644140 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:11.644209 kubelet[2416]: E0114 13:22:11.644170 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:11.644760 kubelet[2416]: E0114 13:22:11.644709 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:11.885857 kubelet[2416]: E0114 13:22:11.885720 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:12.015629 containerd[1714]: time="2025-01-14T13:22:12.015566324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 14 13:22:12.016814 kubelet[2416]: I0114 13:22:12.016491 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f"
Jan 14 13:22:12.017155 containerd[1714]: time="2025-01-14T13:22:12.017109940Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\""
Jan 14 13:22:12.017369 containerd[1714]: time="2025-01-14T13:22:12.017329943Z" level=info msg="Ensure that sandbox 6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f in task-service has been cleanup successfully"
Jan 14 13:22:12.019443 containerd[1714]: time="2025-01-14T13:22:12.017692147Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully"
Jan 14 13:22:12.019443 containerd[1714]: time="2025-01-14T13:22:12.017717647Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully"
Jan 14 13:22:12.019906 containerd[1714]: time="2025-01-14T13:22:12.019879170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:1,}"
Jan 14 13:22:12.020858 systemd[1]: run-netns-cni\x2dffc8c5b4\x2dba2c\x2de5d3\x2da7ff\x2d9fd3b48454ab.mount: Deactivated successfully.
Jan 14 13:22:12.106063 containerd[1714]: time="2025-01-14T13:22:12.106009692Z" level=error msg="Failed to destroy network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:12.106371 containerd[1714]: time="2025-01-14T13:22:12.106338696Z" level=error msg="encountered an error cleaning up failed sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:12.106467 containerd[1714]: time="2025-01-14T13:22:12.106406296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:12.107524 kubelet[2416]: E0114 13:22:12.106668 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:12.107524 kubelet[2416]: E0114 13:22:12.106727 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:12.107524 kubelet[2416]: E0114 13:22:12.106750 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:12.107787 kubelet[2416]: E0114 13:22:12.106809 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:12.577742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27-shm.mount: Deactivated successfully.
Jan 14 13:22:12.886889 kubelet[2416]: E0114 13:22:12.886746 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:13.019768 kubelet[2416]: I0114 13:22:13.019690 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27"
Jan 14 13:22:13.020544 containerd[1714]: time="2025-01-14T13:22:13.020356082Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\""
Jan 14 13:22:13.021366 containerd[1714]: time="2025-01-14T13:22:13.021122690Z" level=info msg="Ensure that sandbox 5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27 in task-service has been cleanup successfully"
Jan 14 13:22:13.025626 systemd[1]: run-netns-cni\x2d0508a724\x2dd33b\x2deafc\x2d5f67\x2d41c5b04fa9ad.mount: Deactivated successfully.
Jan 14 13:22:13.026840 containerd[1714]: time="2025-01-14T13:22:13.026474247Z" level=info msg="TearDown network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" successfully"
Jan 14 13:22:13.026840 containerd[1714]: time="2025-01-14T13:22:13.026498447Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" returns successfully"
Jan 14 13:22:13.028223 containerd[1714]: time="2025-01-14T13:22:13.027443457Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\""
Jan 14 13:22:13.028223 containerd[1714]: time="2025-01-14T13:22:13.027572159Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully"
Jan 14 13:22:13.028223 containerd[1714]: time="2025-01-14T13:22:13.027594759Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully"
Jan 14 13:22:13.029398 containerd[1714]: time="2025-01-14T13:22:13.029191576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:2,}"
Jan 14 13:22:13.117977 containerd[1714]: time="2025-01-14T13:22:13.117925426Z" level=error msg="Failed to destroy network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:13.120080 containerd[1714]: time="2025-01-14T13:22:13.119864247Z" level=error msg="encountered an error cleaning up failed sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:13.120080 containerd[1714]: time="2025-01-14T13:22:13.119942548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:13.120006 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979-shm.mount: Deactivated successfully.
Jan 14 13:22:13.120313 kubelet[2416]: E0114 13:22:13.120179 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:13.120313 kubelet[2416]: E0114 13:22:13.120238 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:13.120313 kubelet[2416]: E0114 13:22:13.120268 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:13.120442 kubelet[2416]: E0114 13:22:13.120328 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:13.250193 kubelet[2416]: I0114 13:22:13.249175 2416 topology_manager.go:215] "Topology Admit Handler" podUID="1899a804-b6e3-499b-b1f7-287fc28347a1" podNamespace="default" podName="nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:13.255316 systemd[1]: Created slice kubepods-besteffort-pod1899a804_b6e3_499b_b1f7_287fc28347a1.slice - libcontainer container kubepods-besteffort-pod1899a804_b6e3_499b_b1f7_287fc28347a1.slice.
Jan 14 13:22:13.314268 kubelet[2416]: I0114 13:22:13.314202 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxml\" (UniqueName: \"kubernetes.io/projected/1899a804-b6e3-499b-b1f7-287fc28347a1-kube-api-access-wvxml\") pod \"nginx-deployment-6d5f899847-7ql46\" (UID: \"1899a804-b6e3-499b-b1f7-287fc28347a1\") " pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:13.560294 containerd[1714]: time="2025-01-14T13:22:13.560246962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:0,}"
Jan 14 13:22:13.693726 containerd[1714]: time="2025-01-14T13:22:13.693174185Z" level=error msg="Failed to destroy network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:13.693726 containerd[1714]: time="2025-01-14T13:22:13.693527989Z" level=error msg="encountered an error cleaning up failed sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:13.693726 containerd[1714]: time="2025-01-14T13:22:13.693597290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:13.694645 kubelet[2416]: E0114 13:22:13.694197 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:13.694645 kubelet[2416]: E0114 13:22:13.694276 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:13.694645 kubelet[2416]: E0114 13:22:13.694303 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:13.694858 kubelet[2416]: E0114 13:22:13.694371 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-7ql46" podUID="1899a804-b6e3-499b-b1f7-287fc28347a1"
Jan 14 13:22:13.697136 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00-shm.mount: Deactivated successfully.
Jan 14 13:22:13.887530 kubelet[2416]: E0114 13:22:13.887405 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:14.022206 kubelet[2416]: I0114 13:22:14.022170 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979"
Jan 14 13:22:14.022929 containerd[1714]: time="2025-01-14T13:22:14.022893415Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\""
Jan 14 13:22:14.023337 containerd[1714]: time="2025-01-14T13:22:14.023133418Z" level=info msg="Ensure that sandbox 835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979 in task-service has been cleanup successfully"
Jan 14 13:22:14.023391 containerd[1714]: time="2025-01-14T13:22:14.023337420Z" level=info msg="TearDown network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" successfully"
Jan 14 13:22:14.023391 containerd[1714]: time="2025-01-14T13:22:14.023358020Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" returns successfully"
Jan 14 13:22:14.027676 containerd[1714]: time="2025-01-14T13:22:14.023932326Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\""
Jan 14 13:22:14.027676 containerd[1714]: time="2025-01-14T13:22:14.024032727Z" level=info msg="TearDown network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" successfully"
Jan 14 13:22:14.027676 containerd[1714]: time="2025-01-14T13:22:14.024046827Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" returns successfully"
Jan 14 13:22:14.027676 containerd[1714]: time="2025-01-14T13:22:14.024427232Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\""
Jan 14 13:22:14.027676 containerd[1714]: time="2025-01-14T13:22:14.024514032Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully"
Jan 14 13:22:14.027676 containerd[1714]: time="2025-01-14T13:22:14.024528033Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully"
Jan 14 13:22:14.027676 containerd[1714]: time="2025-01-14T13:22:14.025341441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:3,}"
Jan 14 13:22:14.028041 kubelet[2416]: I0114 13:22:14.024744 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00"
Jan 14 13:22:14.027687 systemd[1]: run-netns-cni\x2da4fd8dc3\x2d70e8\x2dc4d2\x2d330c\x2d02045f0ba56e.mount: Deactivated successfully.
Jan 14 13:22:14.039312 containerd[1714]: time="2025-01-14T13:22:14.039277191Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\""
Jan 14 13:22:14.039779 containerd[1714]: time="2025-01-14T13:22:14.039754896Z" level=info msg="Ensure that sandbox d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00 in task-service has been cleanup successfully"
Jan 14 13:22:14.040453 containerd[1714]: time="2025-01-14T13:22:14.040430003Z" level=info msg="TearDown network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" successfully"
Jan 14 13:22:14.040564 containerd[1714]: time="2025-01-14T13:22:14.040547104Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" returns successfully"
Jan 14 13:22:14.041684 containerd[1714]: time="2025-01-14T13:22:14.041659316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:1,}"
Jan 14 13:22:14.200871 containerd[1714]: time="2025-01-14T13:22:14.200726919Z" level=error msg="Failed to destroy network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:14.203120 containerd[1714]: time="2025-01-14T13:22:14.201074623Z" level=error msg="encountered an error cleaning up failed sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:14.203120 containerd[1714]: time="2025-01-14T13:22:14.201152224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:14.203307 kubelet[2416]: E0114 13:22:14.202863 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:14.203307 kubelet[2416]: E0114 13:22:14.202946 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:14.203307 kubelet[2416]: E0114 13:22:14.202989 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:14.203472 kubelet[2416]: E0114 13:22:14.203081 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:14.213939 containerd[1714]: time="2025-01-14T13:22:14.213841760Z" level=error msg="Failed to destroy network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:14.214222 containerd[1714]: time="2025-01-14T13:22:14.214189163Z" level=error msg="encountered an error cleaning up failed sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:14.214303 containerd[1714]: time="2025-01-14T13:22:14.214267264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:14.214544 kubelet[2416]: E0114 13:22:14.214524 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:14.214731 kubelet[2416]: E0114 13:22:14.214706 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:14.214880 kubelet[2416]: E0114 13:22:14.214825 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:14.215007 kubelet[2416]: E0114 13:22:14.214973 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-7ql46" podUID="1899a804-b6e3-499b-b1f7-287fc28347a1"
Jan 14 13:22:14.578755 systemd[1]: run-netns-cni\x2d01b760bd\x2d1ebe\x2d4326\x2d5637\x2d866449b79f1b.mount: Deactivated successfully.
Jan 14 13:22:14.887840 kubelet[2416]: E0114 13:22:14.887685 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:15.027515 kubelet[2416]: I0114 13:22:15.027481 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522"
Jan 14 13:22:15.029290 containerd[1714]: time="2025-01-14T13:22:15.028252879Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\""
Jan 14 13:22:15.029290 containerd[1714]: time="2025-01-14T13:22:15.028539982Z" level=info msg="Ensure that sandbox fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522 in task-service has been cleanup successfully"
Jan 14 13:22:15.031164 containerd[1714]: time="2025-01-14T13:22:15.031121210Z" level=info msg="TearDown network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" successfully"
Jan 14 13:22:15.031164 containerd[1714]: time="2025-01-14T13:22:15.031157010Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" returns successfully"
Jan 14 13:22:15.031716 containerd[1714]: time="2025-01-14T13:22:15.031542014Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\""
Jan 14 13:22:15.031716 containerd[1714]: time="2025-01-14T13:22:15.031652815Z" level=info msg="TearDown network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" successfully"
Jan 14 13:22:15.031716 containerd[1714]: time="2025-01-14T13:22:15.031671116Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" returns successfully"
Jan 14 13:22:15.033493 containerd[1714]: time="2025-01-14T13:22:15.032352823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:2,}"
Jan 14 13:22:15.033579 kubelet[2416]: I0114 13:22:15.032731 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930"
Jan 14 13:22:15.032568 systemd[1]: run-netns-cni\x2d662a0708\x2de7f1\x2d3883\x2d4e02\x2d011ec1c2be80.mount: Deactivated successfully.
Jan 14 13:22:15.033922 containerd[1714]: time="2025-01-14T13:22:15.033687737Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\""
Jan 14 13:22:15.033922 containerd[1714]: time="2025-01-14T13:22:15.033892739Z" level=info msg="Ensure that sandbox 2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930 in task-service has been cleanup successfully"
Jan 14 13:22:15.035096 containerd[1714]: time="2025-01-14T13:22:15.034910050Z" level=info msg="TearDown network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" successfully"
Jan 14 13:22:15.035096 containerd[1714]: time="2025-01-14T13:22:15.034929550Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" returns successfully"
Jan 14 13:22:15.035928 containerd[1714]: time="2025-01-14T13:22:15.035737859Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\""
Jan 14 13:22:15.035928 containerd[1714]: time="2025-01-14T13:22:15.035823960Z" level=info msg="TearDown network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" successfully"
Jan 14 13:22:15.035928 containerd[1714]: time="2025-01-14T13:22:15.035883961Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" returns successfully"
Jan 14 13:22:15.036557 systemd[1]: run-netns-cni\x2dfafc47c2\x2d3a48\x2de5fc\x2d9d9a\x2d2db207068aa6.mount: Deactivated successfully.
Jan 14 13:22:15.038370 containerd[1714]: time="2025-01-14T13:22:15.038198685Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\""
Jan 14 13:22:15.038370 containerd[1714]: time="2025-01-14T13:22:15.038287586Z" level=info msg="TearDown network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" successfully"
Jan 14 13:22:15.038370 containerd[1714]: time="2025-01-14T13:22:15.038303487Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" returns successfully"
Jan 14 13:22:15.038589 containerd[1714]: time="2025-01-14T13:22:15.038565589Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\""
Jan 14 13:22:15.038739 containerd[1714]: time="2025-01-14T13:22:15.038717691Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully"
Jan 14 13:22:15.038788 containerd[1714]: time="2025-01-14T13:22:15.038739691Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully"
Jan 14 13:22:15.039258 containerd[1714]: time="2025-01-14T13:22:15.039231196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:4,}"
Jan 14 13:22:15.477847 containerd[1714]: time="2025-01-14T13:22:15.477790192Z" level=error msg="Failed to destroy network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:15.479583 containerd[1714]: time="2025-01-14T13:22:15.478508000Z" level=error msg="encountered an error cleaning up failed sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:15.479868 containerd[1714]: time="2025-01-14T13:22:15.478601401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:15.480447 kubelet[2416]: E0114 13:22:15.480413 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:15.481037 kubelet[2416]: E0114 13:22:15.481010 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:15.481218 kubelet[2416]: E0114 13:22:15.481202 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:15.481398 kubelet[2416]: E0114 13:22:15.481386 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-7ql46" podUID="1899a804-b6e3-499b-b1f7-287fc28347a1"
Jan 14 13:22:15.504524 containerd[1714]: time="2025-01-14T13:22:15.504474278Z" level=error msg="Failed to destroy network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:15.505090 containerd[1714]: time="2025-01-14T13:22:15.505051784Z" level=error msg="encountered an error cleaning up failed sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:15.505316 containerd[1714]: time="2025-01-14T13:22:15.505284486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:15.505793 kubelet[2416]: E0114 13:22:15.505768 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:15.506007 kubelet[2416]: E0114 13:22:15.505991 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:15.506142 kubelet[2416]: E0114 13:22:15.506131 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:15.506512 kubelet[2416]: E0114 13:22:15.506491 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:15.581553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166-shm.mount: Deactivated successfully.
Jan 14 13:22:15.888922 kubelet[2416]: E0114 13:22:15.888853 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:16.038237 kubelet[2416]: I0114 13:22:16.038204 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145"
Jan 14 13:22:16.039372 containerd[1714]: time="2025-01-14T13:22:16.039227803Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\""
Jan 14 13:22:16.039788 containerd[1714]: time="2025-01-14T13:22:16.039719108Z" level=info msg="Ensure that sandbox 69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145 in task-service has been cleanup successfully"
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.040151013Z" level=info msg="TearDown network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" successfully"
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.040659318Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" returns successfully"
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.041146323Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\""
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.041237324Z" level=info msg="TearDown network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" successfully"
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.041251625Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" returns successfully"
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.042461337Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\""
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.042547338Z" level=info msg="TearDown network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" successfully"
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.042561839Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" returns successfully"
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.042816341Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\""
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.042897342Z" level=info msg="TearDown network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" successfully"
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.042910342Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" returns successfully"
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.043220746Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\""
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.043305747Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully"
Jan 14 13:22:16.043680 containerd[1714]: time="2025-01-14T13:22:16.043321347Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully"
Jan 14 13:22:16.044959 systemd[1]: run-netns-cni\x2df2f2e4d0\x2debec\x2dd3e2\x2d4702\x2de667c9d901e4.mount: Deactivated successfully.
Jan 14 13:22:16.046451 containerd[1714]: time="2025-01-14T13:22:16.046114877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:5,}"
Jan 14 13:22:16.047074 kubelet[2416]: I0114 13:22:16.046947 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166"
Jan 14 13:22:16.051239 containerd[1714]: time="2025-01-14T13:22:16.050517324Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\""
Jan 14 13:22:16.051239 containerd[1714]: time="2025-01-14T13:22:16.050731426Z" level=info msg="Ensure that sandbox 3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166 in task-service has been cleanup successfully"
Jan 14 13:22:16.051239 containerd[1714]: time="2025-01-14T13:22:16.050878728Z" level=info msg="TearDown network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" successfully"
Jan 14 13:22:16.051239 containerd[1714]: time="2025-01-14T13:22:16.050897828Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" returns successfully"
Jan 14 13:22:16.053415 containerd[1714]: time="2025-01-14T13:22:16.053056351Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\""
Jan 14 13:22:16.053415 containerd[1714]: time="2025-01-14T13:22:16.053142252Z" level=info msg="TearDown network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" successfully"
Jan 14 13:22:16.053415 containerd[1714]: time="2025-01-14T13:22:16.053162052Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" returns successfully"
Jan 14 13:22:16.054427 containerd[1714]: time="2025-01-14T13:22:16.054403865Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\""
Jan 14 13:22:16.054834 containerd[1714]: time="2025-01-14T13:22:16.054809370Z" level=info msg="TearDown network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" successfully"
Jan 14 13:22:16.054986 containerd[1714]: time="2025-01-14T13:22:16.054927171Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" returns successfully"
Jan 14 13:22:16.056009 systemd[1]: run-netns-cni\x2dd5eef94b\x2dbb5f\x2da5e4\x2dc475\x2d01800aa410e9.mount: Deactivated successfully.
Jan 14 13:22:16.056487 containerd[1714]: time="2025-01-14T13:22:16.056434887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:3,}" Jan 14 13:22:16.233022 containerd[1714]: time="2025-01-14T13:22:16.232888876Z" level=error msg="Failed to destroy network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:16.235148 containerd[1714]: time="2025-01-14T13:22:16.235106000Z" level=error msg="encountered an error cleaning up failed sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:16.235508 containerd[1714]: time="2025-01-14T13:22:16.235477104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:16.235924 kubelet[2416]: E0114 13:22:16.235897 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:16.236029 kubelet[2416]: E0114 13:22:16.235970 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh" Jan 14 13:22:16.236029 kubelet[2416]: E0114 13:22:16.236014 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh" Jan 14 13:22:16.236130 kubelet[2416]: E0114 13:22:16.236084 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371" Jan 14 13:22:16.239827 containerd[1714]: time="2025-01-14T13:22:16.239641849Z" level=error msg="Failed to destroy network for sandbox 
\"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:16.239976 containerd[1714]: time="2025-01-14T13:22:16.239946052Z" level=error msg="encountered an error cleaning up failed sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:16.240036 containerd[1714]: time="2025-01-14T13:22:16.240016353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:16.240341 kubelet[2416]: E0114 13:22:16.240313 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:16.240575 kubelet[2416]: E0114 13:22:16.240378 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46" Jan 14 13:22:16.240953 kubelet[2416]: E0114 13:22:16.240603 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46" Jan 14 13:22:16.240953 kubelet[2416]: E0114 13:22:16.240724 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-7ql46" podUID="1899a804-b6e3-499b-b1f7-287fc28347a1" Jan 14 13:22:16.581358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977-shm.mount: Deactivated successfully. 
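Every sandbox add and delete above fails on the same precondition: the Calico CNI plugin expects `/var/lib/calico/nodename`, a file the calico/node container writes once it is running with `/var/lib/calico` mounted. A minimal, hypothetical shell check for that precondition (the function name and the configurable path argument are illustrative conveniences, not part of Calico's tooling; the real plugin simply `stat`s the fixed path):

```shell
#!/usr/bin/env bash
# Sketch of the precondition the CNI plugin checks before add/delete.
# The directory argument exists only so the check is testable; the
# plugin itself always looks at /var/lib/calico/nodename.
check_calico_nodename() {
  local dir="${1:-/var/lib/calico}"
  if [ -f "$dir/nodename" ]; then
    echo "nodename present: $(cat "$dir/nodename")"
    return 0
  fi
  echo "nodename missing: calico/node may not be running, or $dir is not mounted"
  return 1
}
```

Once calico/node comes up and writes the file, the kubelet's retry loop (visible above as the incrementing `Attempt:` counter on each `RunPodSandbox` call) succeeds without intervention.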
Jan 14 13:22:16.876087 kubelet[2416]: E0114 13:22:16.875949 2416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:16.889046 kubelet[2416]: E0114 13:22:16.889003 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:17.054146 kubelet[2416]: I0114 13:22:17.053767 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977" Jan 14 13:22:17.054949 containerd[1714]: time="2025-01-14T13:22:17.054864677Z" level=info msg="StopPodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\"" Jan 14 13:22:17.055465 containerd[1714]: time="2025-01-14T13:22:17.055297481Z" level=info msg="Ensure that sandbox 7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977 in task-service has been cleanup successfully" Jan 14 13:22:17.055517 containerd[1714]: time="2025-01-14T13:22:17.055477683Z" level=info msg="TearDown network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" successfully" Jan 14 13:22:17.055517 containerd[1714]: time="2025-01-14T13:22:17.055496683Z" level=info msg="StopPodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" returns successfully" Jan 14 13:22:17.056374 containerd[1714]: time="2025-01-14T13:22:17.055954988Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\"" Jan 14 13:22:17.056374 containerd[1714]: time="2025-01-14T13:22:17.056065890Z" level=info msg="TearDown network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" successfully" Jan 14 13:22:17.056374 containerd[1714]: time="2025-01-14T13:22:17.056082090Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" returns successfully" Jan 14 
13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.056932499Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\"" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.057028700Z" level=info msg="TearDown network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" successfully" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.057041800Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" returns successfully" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.057679807Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\"" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.057761808Z" level=info msg="TearDown network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" successfully" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.057774108Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" returns successfully" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.058156612Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\"" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.058250613Z" level=info msg="TearDown network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" successfully" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.058265513Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" returns successfully" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.058646817Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\"" Jan 14 13:22:17.060792 
containerd[1714]: time="2025-01-14T13:22:17.058734718Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.058747418Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully" Jan 14 13:22:17.060792 containerd[1714]: time="2025-01-14T13:22:17.060130933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:6,}" Jan 14 13:22:17.061297 kubelet[2416]: I0114 13:22:17.059018 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87" Jan 14 13:22:17.061832 containerd[1714]: time="2025-01-14T13:22:17.061585449Z" level=info msg="StopPodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\"" Jan 14 13:22:17.061832 containerd[1714]: time="2025-01-14T13:22:17.061825151Z" level=info msg="Ensure that sandbox cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87 in task-service has been cleanup successfully" Jan 14 13:22:17.062266 systemd[1]: run-netns-cni\x2d875993f0\x2d181c\x2d924a\x2dfa89\x2d7d3fedd52228.mount: Deactivated successfully. 
Jan 14 13:22:17.062556 containerd[1714]: time="2025-01-14T13:22:17.062499658Z" level=info msg="TearDown network for sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" successfully" Jan 14 13:22:17.062556 containerd[1714]: time="2025-01-14T13:22:17.062520659Z" level=info msg="StopPodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" returns successfully" Jan 14 13:22:17.066021 containerd[1714]: time="2025-01-14T13:22:17.062827962Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\"" Jan 14 13:22:17.066021 containerd[1714]: time="2025-01-14T13:22:17.062925463Z" level=info msg="TearDown network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" successfully" Jan 14 13:22:17.066021 containerd[1714]: time="2025-01-14T13:22:17.062941463Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" returns successfully" Jan 14 13:22:17.069007 systemd[1]: run-netns-cni\x2dc2e0fca6\x2d512b\x2d443b\x2d4273\x2d583c2c44d987.mount: Deactivated successfully. 
Jan 14 13:22:17.070819 containerd[1714]: time="2025-01-14T13:22:17.070791247Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\"" Jan 14 13:22:17.070900 containerd[1714]: time="2025-01-14T13:22:17.070881448Z" level=info msg="TearDown network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" successfully" Jan 14 13:22:17.070900 containerd[1714]: time="2025-01-14T13:22:17.070896148Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" returns successfully" Jan 14 13:22:17.074122 containerd[1714]: time="2025-01-14T13:22:17.073986681Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\"" Jan 14 13:22:17.074122 containerd[1714]: time="2025-01-14T13:22:17.074096583Z" level=info msg="TearDown network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" successfully" Jan 14 13:22:17.074243 containerd[1714]: time="2025-01-14T13:22:17.074110083Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" returns successfully" Jan 14 13:22:17.075271 containerd[1714]: time="2025-01-14T13:22:17.075245595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:4,}" Jan 14 13:22:17.241882 containerd[1714]: time="2025-01-14T13:22:17.241738677Z" level=error msg="Failed to destroy network for sandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:17.243685 containerd[1714]: time="2025-01-14T13:22:17.242889390Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:17.243685 containerd[1714]: time="2025-01-14T13:22:17.242975091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:17.243893 kubelet[2416]: E0114 13:22:17.243238 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:17.243893 kubelet[2416]: E0114 13:22:17.243305 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh" Jan 14 13:22:17.243893 kubelet[2416]: E0114 13:22:17.243334 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh" Jan 14 13:22:17.244044 kubelet[2416]: E0114 13:22:17.243401 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371" Jan 14 13:22:17.255898 containerd[1714]: time="2025-01-14T13:22:17.255504905Z" level=error msg="Failed to destroy network for sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:17.256315 containerd[1714]: time="2025-01-14T13:22:17.256279911Z" level=error msg="encountered an error cleaning up failed sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:17.256481 containerd[1714]: time="2025-01-14T13:22:17.256455613Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:17.257112 kubelet[2416]: E0114 13:22:17.256884 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:22:17.257112 kubelet[2416]: E0114 13:22:17.256943 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46" Jan 14 13:22:17.257112 kubelet[2416]: E0114 13:22:17.257014 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46" Jan 14 13:22:17.257666 kubelet[2416]: E0114 13:22:17.257548 2416 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-7ql46" podUID="1899a804-b6e3-499b-b1f7-287fc28347a1" Jan 14 13:22:17.581038 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90-shm.mount: Deactivated successfully. Jan 14 13:22:17.889991 kubelet[2416]: E0114 13:22:17.889867 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:18.066557 kubelet[2416]: I0114 13:22:18.065735 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90" Jan 14 13:22:18.066997 containerd[1714]: time="2025-01-14T13:22:18.066965974Z" level=info msg="StopPodSandbox for \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\"" Jan 14 13:22:18.067646 containerd[1714]: time="2025-01-14T13:22:18.067602680Z" level=info msg="Ensure that sandbox f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90 in task-service has been cleanup successfully" Jan 14 13:22:18.067949 containerd[1714]: time="2025-01-14T13:22:18.067908582Z" level=info msg="TearDown network for sandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\" successfully" Jan 14 13:22:18.068039 containerd[1714]: 
time="2025-01-14T13:22:18.068025083Z" level=info msg="StopPodSandbox for \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\" returns successfully" Jan 14 13:22:18.070806 containerd[1714]: time="2025-01-14T13:22:18.070780907Z" level=info msg="StopPodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\"" Jan 14 13:22:18.071020 containerd[1714]: time="2025-01-14T13:22:18.070999108Z" level=info msg="TearDown network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" successfully" Jan 14 13:22:18.071527 systemd[1]: run-netns-cni\x2d4315c721\x2d6464\x2d56ca\x2dc9c4\x2d2bb3120c80b2.mount: Deactivated successfully. Jan 14 13:22:18.074276 containerd[1714]: time="2025-01-14T13:22:18.071345611Z" level=info msg="StopPodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" returns successfully" Jan 14 13:22:18.075064 containerd[1714]: time="2025-01-14T13:22:18.075043243Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\"" Jan 14 13:22:18.075226 containerd[1714]: time="2025-01-14T13:22:18.075210144Z" level=info msg="TearDown network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" successfully" Jan 14 13:22:18.075305 containerd[1714]: time="2025-01-14T13:22:18.075291245Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" returns successfully" Jan 14 13:22:18.075809 containerd[1714]: time="2025-01-14T13:22:18.075787949Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\"" Jan 14 13:22:18.075997 containerd[1714]: time="2025-01-14T13:22:18.075979151Z" level=info msg="TearDown network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" successfully" Jan 14 13:22:18.076071 containerd[1714]: time="2025-01-14T13:22:18.076058951Z" level=info msg="StopPodSandbox for 
\"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" returns successfully" Jan 14 13:22:18.076605 containerd[1714]: time="2025-01-14T13:22:18.076582756Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\"" Jan 14 13:22:18.076818 containerd[1714]: time="2025-01-14T13:22:18.076798658Z" level=info msg="TearDown network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" successfully" Jan 14 13:22:18.076892 containerd[1714]: time="2025-01-14T13:22:18.076880358Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" returns successfully" Jan 14 13:22:18.077597 containerd[1714]: time="2025-01-14T13:22:18.077572464Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\"" Jan 14 13:22:18.077890 containerd[1714]: time="2025-01-14T13:22:18.077864967Z" level=info msg="TearDown network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" successfully" Jan 14 13:22:18.078658 containerd[1714]: time="2025-01-14T13:22:18.077890167Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" returns successfully" Jan 14 13:22:18.078731 kubelet[2416]: I0114 13:22:18.078102 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf" Jan 14 13:22:18.079208 containerd[1714]: time="2025-01-14T13:22:18.079150778Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\"" Jan 14 13:22:18.079280 containerd[1714]: time="2025-01-14T13:22:18.079248978Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully" Jan 14 13:22:18.079280 containerd[1714]: time="2025-01-14T13:22:18.079265478Z" level=info msg="StopPodSandbox 
for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully" Jan 14 13:22:18.079362 containerd[1714]: time="2025-01-14T13:22:18.079332279Z" level=info msg="StopPodSandbox for \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\"" Jan 14 13:22:18.079540 containerd[1714]: time="2025-01-14T13:22:18.079514881Z" level=info msg="Ensure that sandbox c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf in task-service has been cleanup successfully" Jan 14 13:22:18.081931 containerd[1714]: time="2025-01-14T13:22:18.080201186Z" level=info msg="TearDown network for sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\" successfully" Jan 14 13:22:18.081931 containerd[1714]: time="2025-01-14T13:22:18.080222287Z" level=info msg="StopPodSandbox for \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\" returns successfully" Jan 14 13:22:18.083043 containerd[1714]: time="2025-01-14T13:22:18.082077902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:7,}" Jan 14 13:22:18.083043 containerd[1714]: time="2025-01-14T13:22:18.082508806Z" level=info msg="StopPodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\"" Jan 14 13:22:18.083043 containerd[1714]: time="2025-01-14T13:22:18.082604407Z" level=info msg="TearDown network for sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" successfully" Jan 14 13:22:18.083043 containerd[1714]: time="2025-01-14T13:22:18.082639807Z" level=info msg="StopPodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" returns successfully" Jan 14 13:22:18.082651 systemd[1]: run-netns-cni\x2ded7f334d\x2dee9b\x2df5ab\x2d1a53\x2d33c7393114df.mount: Deactivated successfully. 
Jan 14 13:22:18.084958 containerd[1714]: time="2025-01-14T13:22:18.083671916Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\""
Jan 14 13:22:18.084958 containerd[1714]: time="2025-01-14T13:22:18.083759517Z" level=info msg="TearDown network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" successfully"
Jan 14 13:22:18.084958 containerd[1714]: time="2025-01-14T13:22:18.083773917Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" returns successfully"
Jan 14 13:22:18.084958 containerd[1714]: time="2025-01-14T13:22:18.084931926Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\""
Jan 14 13:22:18.085459 containerd[1714]: time="2025-01-14T13:22:18.085014427Z" level=info msg="TearDown network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" successfully"
Jan 14 13:22:18.085459 containerd[1714]: time="2025-01-14T13:22:18.085028127Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" returns successfully"
Jan 14 13:22:18.086164 containerd[1714]: time="2025-01-14T13:22:18.085950535Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\""
Jan 14 13:22:18.086164 containerd[1714]: time="2025-01-14T13:22:18.086064536Z" level=info msg="TearDown network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" successfully"
Jan 14 13:22:18.086164 containerd[1714]: time="2025-01-14T13:22:18.086080036Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" returns successfully"
Jan 14 13:22:18.088278 containerd[1714]: time="2025-01-14T13:22:18.088248055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:5,}"
Jan 14 13:22:18.248342 containerd[1714]: time="2025-01-14T13:22:18.247856806Z" level=error msg="Failed to destroy network for sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:18.249089 containerd[1714]: time="2025-01-14T13:22:18.249034016Z" level=error msg="encountered an error cleaning up failed sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:18.249862 containerd[1714]: time="2025-01-14T13:22:18.249125116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:18.251241 kubelet[2416]: E0114 13:22:18.249412 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:18.251241 kubelet[2416]: E0114 13:22:18.250395 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:18.252001 kubelet[2416]: E0114 13:22:18.251428 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:18.252001 kubelet[2416]: E0114 13:22:18.251550 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:18.270556 containerd[1714]: time="2025-01-14T13:22:18.270212195Z" level=error msg="Failed to destroy network for sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:18.270719 containerd[1714]: time="2025-01-14T13:22:18.270593398Z" level=error msg="encountered an error cleaning up failed sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:18.270719 containerd[1714]: time="2025-01-14T13:22:18.270680599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:18.271116 kubelet[2416]: E0114 13:22:18.271053 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:18.271386 kubelet[2416]: E0114 13:22:18.271153 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:18.271386 kubelet[2416]: E0114 13:22:18.271181 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:18.271386 kubelet[2416]: E0114 13:22:18.271256 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-7ql46" podUID="1899a804-b6e3-499b-b1f7-287fc28347a1"
Jan 14 13:22:18.583081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9-shm.mount: Deactivated successfully.
Jan 14 13:22:18.583402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b-shm.mount: Deactivated successfully.
Jan 14 13:22:18.803641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382027027.mount: Deactivated successfully.
Jan 14 13:22:18.856097 containerd[1714]: time="2025-01-14T13:22:18.855959854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:18.857579 containerd[1714]: time="2025-01-14T13:22:18.857519967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 14 13:22:18.860915 containerd[1714]: time="2025-01-14T13:22:18.860858895Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:18.864396 containerd[1714]: time="2025-01-14T13:22:18.864362725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:22:18.865067 containerd[1714]: time="2025-01-14T13:22:18.864908529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.849294005s"
Jan 14 13:22:18.865067 containerd[1714]: time="2025-01-14T13:22:18.864949730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 14 13:22:18.878489 containerd[1714]: time="2025-01-14T13:22:18.878123941Z" level=info msg="CreateContainer within sandbox \"de225026e932488e6ed28d978e2a8b2ea780526b4fc32a423081517d7b1eab67\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 14 13:22:18.890871 kubelet[2416]: E0114 13:22:18.890775 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:18.911920 containerd[1714]: time="2025-01-14T13:22:18.911878127Z" level=info msg="CreateContainer within sandbox \"de225026e932488e6ed28d978e2a8b2ea780526b4fc32a423081517d7b1eab67\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d46c586ad3350bd3d1aeafa7e91eade1fcf141b2907107e910c89ec937acd058\""
Jan 14 13:22:18.912537 containerd[1714]: time="2025-01-14T13:22:18.912417532Z" level=info msg="StartContainer for \"d46c586ad3350bd3d1aeafa7e91eade1fcf141b2907107e910c89ec937acd058\""
Jan 14 13:22:18.938780 systemd[1]: Started cri-containerd-d46c586ad3350bd3d1aeafa7e91eade1fcf141b2907107e910c89ec937acd058.scope - libcontainer container d46c586ad3350bd3d1aeafa7e91eade1fcf141b2907107e910c89ec937acd058.
Jan 14 13:22:18.976330 containerd[1714]: time="2025-01-14T13:22:18.976183371Z" level=info msg="StartContainer for \"d46c586ad3350bd3d1aeafa7e91eade1fcf141b2907107e910c89ec937acd058\" returns successfully"
Jan 14 13:22:19.082283 kubelet[2416]: I0114 13:22:19.082236 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9"
Jan 14 13:22:19.083655 containerd[1714]: time="2025-01-14T13:22:19.083242155Z" level=info msg="StopPodSandbox for \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\""
Jan 14 13:22:19.083655 containerd[1714]: time="2025-01-14T13:22:19.083489659Z" level=info msg="Ensure that sandbox 50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9 in task-service has been cleanup successfully"
Jan 14 13:22:19.084224 containerd[1714]: time="2025-01-14T13:22:19.084138869Z" level=info msg="TearDown network for sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\" successfully"
Jan 14 13:22:19.084224 containerd[1714]: time="2025-01-14T13:22:19.084184570Z" level=info msg="StopPodSandbox for \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\" returns successfully"
Jan 14 13:22:19.085308 containerd[1714]: time="2025-01-14T13:22:19.084537776Z" level=info msg="StopPodSandbox for \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\""
Jan 14 13:22:19.085308 containerd[1714]: time="2025-01-14T13:22:19.084649877Z" level=info msg="TearDown network for sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\" successfully"
Jan 14 13:22:19.085308 containerd[1714]: time="2025-01-14T13:22:19.084666778Z" level=info msg="StopPodSandbox for \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\" returns successfully"
Jan 14 13:22:19.085308 containerd[1714]: time="2025-01-14T13:22:19.084959182Z" level=info msg="StopPodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\""
Jan 14 13:22:19.086914 containerd[1714]: time="2025-01-14T13:22:19.085627193Z" level=info msg="TearDown network for sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" successfully"
Jan 14 13:22:19.086914 containerd[1714]: time="2025-01-14T13:22:19.085650894Z" level=info msg="StopPodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" returns successfully"
Jan 14 13:22:19.087075 containerd[1714]: time="2025-01-14T13:22:19.087049116Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\""
Jan 14 13:22:19.087179 containerd[1714]: time="2025-01-14T13:22:19.087158618Z" level=info msg="TearDown network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" successfully"
Jan 14 13:22:19.087224 containerd[1714]: time="2025-01-14T13:22:19.087178818Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" returns successfully"
Jan 14 13:22:19.087891 containerd[1714]: time="2025-01-14T13:22:19.087855229Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\""
Jan 14 13:22:19.088308 containerd[1714]: time="2025-01-14T13:22:19.088286036Z" level=info msg="TearDown network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" successfully"
Jan 14 13:22:19.088415 containerd[1714]: time="2025-01-14T13:22:19.088400238Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" returns successfully"
Jan 14 13:22:19.088922 containerd[1714]: time="2025-01-14T13:22:19.088897246Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\""
Jan 14 13:22:19.089332 containerd[1714]: time="2025-01-14T13:22:19.089310053Z" level=info msg="TearDown network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" successfully"
Jan 14 13:22:19.089777 containerd[1714]: time="2025-01-14T13:22:19.089693059Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" returns successfully"
Jan 14 13:22:19.091177 containerd[1714]: time="2025-01-14T13:22:19.090837278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:6,}"
Jan 14 13:22:19.092490 kubelet[2416]: I0114 13:22:19.092466 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b"
Jan 14 13:22:19.093981 containerd[1714]: time="2025-01-14T13:22:19.093958328Z" level=info msg="StopPodSandbox for \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\""
Jan 14 13:22:19.094437 containerd[1714]: time="2025-01-14T13:22:19.094412136Z" level=info msg="Ensure that sandbox 17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b in task-service has been cleanup successfully"
Jan 14 13:22:19.095206 containerd[1714]: time="2025-01-14T13:22:19.094736341Z" level=info msg="TearDown network for sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\" successfully"
Jan 14 13:22:19.095206 containerd[1714]: time="2025-01-14T13:22:19.094774742Z" level=info msg="StopPodSandbox for \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\" returns successfully"
Jan 14 13:22:19.095632 containerd[1714]: time="2025-01-14T13:22:19.095593155Z" level=info msg="StopPodSandbox for \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\""
Jan 14 13:22:19.095813 containerd[1714]: time="2025-01-14T13:22:19.095792258Z" level=info msg="TearDown network for sandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\" successfully"
Jan 14 13:22:19.095893 containerd[1714]: time="2025-01-14T13:22:19.095877360Z" level=info msg="StopPodSandbox for \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\" returns successfully"
Jan 14 13:22:19.096255 containerd[1714]: time="2025-01-14T13:22:19.096232765Z" level=info msg="StopPodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\""
Jan 14 13:22:19.096422 containerd[1714]: time="2025-01-14T13:22:19.096402968Z" level=info msg="TearDown network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" successfully"
Jan 14 13:22:19.096518 containerd[1714]: time="2025-01-14T13:22:19.096502770Z" level=info msg="StopPodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" returns successfully"
Jan 14 13:22:19.097288 containerd[1714]: time="2025-01-14T13:22:19.096861776Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\""
Jan 14 13:22:19.097288 containerd[1714]: time="2025-01-14T13:22:19.096961077Z" level=info msg="TearDown network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" successfully"
Jan 14 13:22:19.097288 containerd[1714]: time="2025-01-14T13:22:19.096975377Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" returns successfully"
Jan 14 13:22:19.097792 containerd[1714]: time="2025-01-14T13:22:19.097739290Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\""
Jan 14 13:22:19.098035 containerd[1714]: time="2025-01-14T13:22:19.097936293Z" level=info msg="TearDown network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" successfully"
Jan 14 13:22:19.098035 containerd[1714]: time="2025-01-14T13:22:19.098007394Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" returns successfully"
Jan 14 13:22:19.098576 containerd[1714]: time="2025-01-14T13:22:19.098549403Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\""
Jan 14 13:22:19.099166 containerd[1714]: time="2025-01-14T13:22:19.098732206Z" level=info msg="TearDown network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" successfully"
Jan 14 13:22:19.099166 containerd[1714]: time="2025-01-14T13:22:19.098757106Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" returns successfully"
Jan 14 13:22:19.099765 containerd[1714]: time="2025-01-14T13:22:19.099376416Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\""
Jan 14 13:22:19.099765 containerd[1714]: time="2025-01-14T13:22:19.099463418Z" level=info msg="TearDown network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" successfully"
Jan 14 13:22:19.099765 containerd[1714]: time="2025-01-14T13:22:19.099478418Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" returns successfully"
Jan 14 13:22:19.100299 containerd[1714]: time="2025-01-14T13:22:19.100264931Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\""
Jan 14 13:22:19.100765 containerd[1714]: time="2025-01-14T13:22:19.100363232Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully"
Jan 14 13:22:19.100765 containerd[1714]: time="2025-01-14T13:22:19.100379833Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully"
Jan 14 13:22:19.101790 containerd[1714]: time="2025-01-14T13:22:19.101757555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:8,}"
Jan 14 13:22:19.103897 kubelet[2416]: I0114 13:22:19.103808 2416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-2kw87" podStartSLOduration=3.524651866 podStartE2EDuration="22.103743387s" podCreationTimestamp="2025-01-14 13:21:57 +0000 UTC" firstStartedPulling="2025-01-14 13:22:00.286298613 +0000 UTC m=+4.477970823" lastFinishedPulling="2025-01-14 13:22:18.865390134 +0000 UTC m=+23.057062344" observedRunningTime="2025-01-14 13:22:19.103031176 +0000 UTC m=+23.294703486" watchObservedRunningTime="2025-01-14 13:22:19.103743387 +0000 UTC m=+23.295415597"
Jan 14 13:22:19.228486 containerd[1714]: time="2025-01-14T13:22:19.228225806Z" level=error msg="Failed to destroy network for sandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:19.228982 containerd[1714]: time="2025-01-14T13:22:19.228573812Z" level=error msg="encountered an error cleaning up failed sandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:19.228982 containerd[1714]: time="2025-01-14T13:22:19.228668113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:19.230258 kubelet[2416]: E0114 13:22:19.229990 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:19.230258 kubelet[2416]: E0114 13:22:19.230064 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:19.230258 kubelet[2416]: E0114 13:22:19.230093 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7ql46"
Jan 14 13:22:19.230436 kubelet[2416]: E0114 13:22:19.230162 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-7ql46_default(1899a804-b6e3-499b-b1f7-287fc28347a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-7ql46" podUID="1899a804-b6e3-499b-b1f7-287fc28347a1"
Jan 14 13:22:19.234878 containerd[1714]: time="2025-01-14T13:22:19.234844013Z" level=error msg="Failed to destroy network for sandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:19.235148 containerd[1714]: time="2025-01-14T13:22:19.235108018Z" level=error msg="encountered an error cleaning up failed sandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:19.235241 containerd[1714]: time="2025-01-14T13:22:19.235171919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:19.235404 kubelet[2416]: E0114 13:22:19.235375 2416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:22:19.235476 kubelet[2416]: E0114 13:22:19.235431 2416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:19.235476 kubelet[2416]: E0114 13:22:19.235459 2416 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl8qh"
Jan 14 13:22:19.235568 kubelet[2416]: E0114 13:22:19.235521 2416 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hl8qh_calico-system(cb9275d6-51d8-4705-9177-49dadf876371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hl8qh" podUID="cb9275d6-51d8-4705-9177-49dadf876371"
Jan 14 13:22:19.334547 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 14 13:22:19.334689 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jan 14 13:22:19.580998 systemd[1]: run-netns-cni\x2d6b5c60c2\x2df24a\x2d395f\x2dc22e\x2db44c96b9daa5.mount: Deactivated successfully.
Jan 14 13:22:19.581100 systemd[1]: run-netns-cni\x2d9e13c90c\x2dde3c\x2d2308\x2d048f\x2d52a5a037f55e.mount: Deactivated successfully.
Jan 14 13:22:19.891383 kubelet[2416]: E0114 13:22:19.891223 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:20.098769 kubelet[2416]: I0114 13:22:20.097918 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee"
Jan 14 13:22:20.099468 containerd[1714]: time="2025-01-14T13:22:20.099014029Z" level=info msg="StopPodSandbox for \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\""
Jan 14 13:22:20.099468 containerd[1714]: time="2025-01-14T13:22:20.099293534Z" level=info msg="Ensure that sandbox da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee in task-service has been cleanup successfully"
Jan 14 13:22:20.102602 containerd[1714]: time="2025-01-14T13:22:20.100119347Z" level=info msg="TearDown network for sandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\" successfully"
Jan 14 13:22:20.102602 containerd[1714]: time="2025-01-14T13:22:20.100150848Z" level=info msg="StopPodSandbox for \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\" returns successfully"
Jan 14 13:22:20.103257 containerd[1714]: time="2025-01-14T13:22:20.103103396Z" level=info msg="StopPodSandbox for \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\""
Jan 14 13:22:20.103257 containerd[1714]: time="2025-01-14T13:22:20.103195797Z" level=info msg="TearDown network for sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\" successfully"
Jan 14 13:22:20.103257 containerd[1714]: time="2025-01-14T13:22:20.103211598Z" level=info msg="StopPodSandbox for \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\" returns successfully"
Jan 14 13:22:20.103886 containerd[1714]: time="2025-01-14T13:22:20.103583704Z" level=info msg="StopPodSandbox for \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\""
Jan 14 13:22:20.103886 containerd[1714]: time="2025-01-14T13:22:20.103692605Z" level=info msg="TearDown network for sandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\" successfully"
Jan 14 13:22:20.103886 containerd[1714]: time="2025-01-14T13:22:20.103710006Z" level=info msg="StopPodSandbox for \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\" returns successfully"
Jan 14 13:22:20.104183 containerd[1714]: time="2025-01-14T13:22:20.104131312Z" level=info msg="StopPodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\""
Jan 14 13:22:20.104443 containerd[1714]: time="2025-01-14T13:22:20.104326316Z" level=info msg="TearDown network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" successfully"
Jan 14 13:22:20.104443 containerd[1714]: time="2025-01-14T13:22:20.104344316Z" level=info msg="StopPodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" returns successfully"
Jan 14 13:22:20.106533 kubelet[2416]: I0114 13:22:20.104681 2416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7"
Jan 14 13:22:20.105642 systemd[1]: run-netns-cni\x2d73a50694\x2d3bfe\x2d9cd6\x2d228f\x2d2ea2be155cbf.mount: Deactivated successfully.
Jan 14 13:22:20.107478 containerd[1714]: time="2025-01-14T13:22:20.106968158Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\"" Jan 14 13:22:20.107478 containerd[1714]: time="2025-01-14T13:22:20.107063560Z" level=info msg="TearDown network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" successfully" Jan 14 13:22:20.107478 containerd[1714]: time="2025-01-14T13:22:20.107077760Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" returns successfully" Jan 14 13:22:20.107478 containerd[1714]: time="2025-01-14T13:22:20.107147461Z" level=info msg="StopPodSandbox for \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\"" Jan 14 13:22:20.107478 containerd[1714]: time="2025-01-14T13:22:20.107359365Z" level=info msg="Ensure that sandbox 6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7 in task-service has been cleanup successfully" Jan 14 13:22:20.108108 containerd[1714]: time="2025-01-14T13:22:20.107913574Z" level=info msg="TearDown network for sandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\" successfully" Jan 14 13:22:20.108108 containerd[1714]: time="2025-01-14T13:22:20.107950674Z" level=info msg="StopPodSandbox for \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\" returns successfully" Jan 14 13:22:20.108566 containerd[1714]: time="2025-01-14T13:22:20.108372481Z" level=info msg="StopPodSandbox for \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\"" Jan 14 13:22:20.108566 containerd[1714]: time="2025-01-14T13:22:20.108490183Z" level=info msg="TearDown network for sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\" successfully" Jan 14 13:22:20.108566 containerd[1714]: time="2025-01-14T13:22:20.108506283Z" level=info msg="StopPodSandbox for \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\" 
returns successfully" Jan 14 13:22:20.109938 containerd[1714]: time="2025-01-14T13:22:20.108822789Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\"" Jan 14 13:22:20.111689 containerd[1714]: time="2025-01-14T13:22:20.110193911Z" level=info msg="TearDown network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" successfully" Jan 14 13:22:20.111689 containerd[1714]: time="2025-01-14T13:22:20.110213511Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" returns successfully" Jan 14 13:22:20.111689 containerd[1714]: time="2025-01-14T13:22:20.110329113Z" level=info msg="StopPodSandbox for \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\"" Jan 14 13:22:20.111689 containerd[1714]: time="2025-01-14T13:22:20.110407314Z" level=info msg="TearDown network for sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\" successfully" Jan 14 13:22:20.111689 containerd[1714]: time="2025-01-14T13:22:20.110421814Z" level=info msg="StopPodSandbox for \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\" returns successfully" Jan 14 13:22:20.111689 containerd[1714]: time="2025-01-14T13:22:20.110555617Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\"" Jan 14 13:22:20.111689 containerd[1714]: time="2025-01-14T13:22:20.111683635Z" level=info msg="TearDown network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" successfully" Jan 14 13:22:20.112001 containerd[1714]: time="2025-01-14T13:22:20.111703135Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" returns successfully" Jan 14 13:22:20.112219 containerd[1714]: time="2025-01-14T13:22:20.112196743Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\"" Jan 
14 13:22:20.112426 systemd[1]: run-netns-cni\x2d20172eee\x2de4bb\x2d15c3\x2dd3a4\x2dfbb4f3ce6c99.mount: Deactivated successfully. Jan 14 13:22:20.113079 containerd[1714]: time="2025-01-14T13:22:20.112680051Z" level=info msg="StopPodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\"" Jan 14 13:22:20.113137 containerd[1714]: time="2025-01-14T13:22:20.113042757Z" level=info msg="TearDown network for sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" successfully" Jan 14 13:22:20.113137 containerd[1714]: time="2025-01-14T13:22:20.113104258Z" level=info msg="StopPodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" returns successfully" Jan 14 13:22:20.116131 containerd[1714]: time="2025-01-14T13:22:20.113895371Z" level=info msg="TearDown network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" successfully" Jan 14 13:22:20.116131 containerd[1714]: time="2025-01-14T13:22:20.113917771Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" returns successfully" Jan 14 13:22:20.116131 containerd[1714]: time="2025-01-14T13:22:20.114499781Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\"" Jan 14 13:22:20.116131 containerd[1714]: time="2025-01-14T13:22:20.114737084Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\"" Jan 14 13:22:20.116131 containerd[1714]: time="2025-01-14T13:22:20.114830786Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully" Jan 14 13:22:20.116131 containerd[1714]: time="2025-01-14T13:22:20.114850186Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully" Jan 14 13:22:20.116131 containerd[1714]: time="2025-01-14T13:22:20.114834786Z" 
level=info msg="TearDown network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" successfully" Jan 14 13:22:20.116131 containerd[1714]: time="2025-01-14T13:22:20.114920487Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" returns successfully" Jan 14 13:22:20.116131 containerd[1714]: time="2025-01-14T13:22:20.115732601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:9,}" Jan 14 13:22:20.117509 containerd[1714]: time="2025-01-14T13:22:20.117475029Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\"" Jan 14 13:22:20.118177 containerd[1714]: time="2025-01-14T13:22:20.118149840Z" level=info msg="TearDown network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" successfully" Jan 14 13:22:20.118300 containerd[1714]: time="2025-01-14T13:22:20.118280342Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" returns successfully" Jan 14 13:22:20.122426 containerd[1714]: time="2025-01-14T13:22:20.122389709Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\"" Jan 14 13:22:20.122739 containerd[1714]: time="2025-01-14T13:22:20.122717814Z" level=info msg="TearDown network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" successfully" Jan 14 13:22:20.122856 containerd[1714]: time="2025-01-14T13:22:20.122839016Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" returns successfully" Jan 14 13:22:20.123557 containerd[1714]: time="2025-01-14T13:22:20.123532427Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:7,}" Jan 14 13:22:20.306722 systemd-networkd[1456]: calie2c0bc7f24c: Link UP Jan 14 13:22:20.306948 systemd-networkd[1456]: calie2c0bc7f24c: Gained carrier Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.214 [INFO][3648] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.224 [INFO][3648] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.31-k8s-csi--node--driver--hl8qh-eth0 csi-node-driver- calico-system cb9275d6-51d8-4705-9177-49dadf876371 1154 0 2025-01-14 13:21:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.4.31 csi-node-driver-hl8qh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie2c0bc7f24c [] []}} ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Namespace="calico-system" Pod="csi-node-driver-hl8qh" WorkloadEndpoint="10.200.4.31-k8s-csi--node--driver--hl8qh-" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.225 [INFO][3648] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Namespace="calico-system" Pod="csi-node-driver-hl8qh" WorkloadEndpoint="10.200.4.31-k8s-csi--node--driver--hl8qh-eth0" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.261 [INFO][3668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" 
HandleID="k8s-pod-network.8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Workload="10.200.4.31-k8s-csi--node--driver--hl8qh-eth0" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.271 [INFO][3668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" HandleID="k8s-pod-network.8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Workload="10.200.4.31-k8s-csi--node--driver--hl8qh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319930), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.4.31", "pod":"csi-node-driver-hl8qh", "timestamp":"2025-01-14 13:22:20.261256361 +0000 UTC"}, Hostname:"10.200.4.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.271 [INFO][3668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.271 [INFO][3668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.271 [INFO][3668] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.31' Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.272 [INFO][3668] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" host="10.200.4.31" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.275 [INFO][3668] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.31" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.278 [INFO][3668] ipam/ipam.go 489: Trying affinity for 192.168.22.0/26 host="10.200.4.31" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.280 [INFO][3668] ipam/ipam.go 155: Attempting to load block cidr=192.168.22.0/26 host="10.200.4.31" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.282 [INFO][3668] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="10.200.4.31" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.282 [INFO][3668] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" host="10.200.4.31" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.283 [INFO][3668] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.287 [INFO][3668] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" host="10.200.4.31" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.295 [INFO][3668] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.22.1/26] block=192.168.22.0/26 
handle="k8s-pod-network.8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" host="10.200.4.31" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.295 [INFO][3668] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.1/26] handle="k8s-pod-network.8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" host="10.200.4.31" Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.296 [INFO][3668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:22:20.321542 containerd[1714]: 2025-01-14 13:22:20.296 [INFO][3668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.1/26] IPv6=[] ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" HandleID="k8s-pod-network.8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Workload="10.200.4.31-k8s-csi--node--driver--hl8qh-eth0" Jan 14 13:22:20.322519 containerd[1714]: 2025-01-14 13:22:20.298 [INFO][3648] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Namespace="calico-system" Pod="csi-node-driver-hl8qh" WorkloadEndpoint="10.200.4.31-k8s-csi--node--driver--hl8qh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.31-k8s-csi--node--driver--hl8qh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb9275d6-51d8-4705-9177-49dadf876371", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.31", ContainerID:"", Pod:"csi-node-driver-hl8qh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2c0bc7f24c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:22:20.322519 containerd[1714]: 2025-01-14 13:22:20.298 [INFO][3648] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.22.1/32] ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Namespace="calico-system" Pod="csi-node-driver-hl8qh" WorkloadEndpoint="10.200.4.31-k8s-csi--node--driver--hl8qh-eth0" Jan 14 13:22:20.322519 containerd[1714]: 2025-01-14 13:22:20.298 [INFO][3648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2c0bc7f24c ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Namespace="calico-system" Pod="csi-node-driver-hl8qh" WorkloadEndpoint="10.200.4.31-k8s-csi--node--driver--hl8qh-eth0" Jan 14 13:22:20.322519 containerd[1714]: 2025-01-14 13:22:20.305 [INFO][3648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Namespace="calico-system" Pod="csi-node-driver-hl8qh" WorkloadEndpoint="10.200.4.31-k8s-csi--node--driver--hl8qh-eth0" Jan 14 13:22:20.322519 containerd[1714]: 2025-01-14 13:22:20.306 [INFO][3648] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Namespace="calico-system" 
Pod="csi-node-driver-hl8qh" WorkloadEndpoint="10.200.4.31-k8s-csi--node--driver--hl8qh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.31-k8s-csi--node--driver--hl8qh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb9275d6-51d8-4705-9177-49dadf876371", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.31", ContainerID:"8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d", Pod:"csi-node-driver-hl8qh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2c0bc7f24c", MAC:"f6:cf:d5:44:f3:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:22:20.322519 containerd[1714]: 2025-01-14 13:22:20.320 [INFO][3648] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d" Namespace="calico-system" Pod="csi-node-driver-hl8qh" WorkloadEndpoint="10.200.4.31-k8s-csi--node--driver--hl8qh-eth0" Jan 14 13:22:20.347938 
systemd-networkd[1456]: cali5c0d29fbb35: Link UP Jan 14 13:22:20.349752 containerd[1714]: time="2025-01-14T13:22:20.347843965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:22:20.349752 containerd[1714]: time="2025-01-14T13:22:20.347913166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:22:20.349752 containerd[1714]: time="2025-01-14T13:22:20.347934667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:20.349752 containerd[1714]: time="2025-01-14T13:22:20.348028068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:20.348697 systemd-networkd[1456]: cali5c0d29fbb35: Gained carrier Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.207 [INFO][3638] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.221 [INFO][3638] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0 nginx-deployment-6d5f899847- default 1899a804-b6e3-499b-b1f7-287fc28347a1 1238 0 2025-01-14 13:22:13 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.31 nginx-deployment-6d5f899847-7ql46 eth0 default [] [] [kns.default ksa.default.default] cali5c0d29fbb35 [] []}} ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Namespace="default" Pod="nginx-deployment-6d5f899847-7ql46" WorkloadEndpoint="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-" Jan 14 13:22:20.361799 
containerd[1714]: 2025-01-14 13:22:20.221 [INFO][3638] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Namespace="default" Pod="nginx-deployment-6d5f899847-7ql46" WorkloadEndpoint="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.262 [INFO][3664] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" HandleID="k8s-pod-network.d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Workload="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.273 [INFO][3664] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" HandleID="k8s-pod-network.d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Workload="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051970), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.31", "pod":"nginx-deployment-6d5f899847-7ql46", "timestamp":"2025-01-14 13:22:20.262186476 +0000 UTC"}, Hostname:"10.200.4.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.273 [INFO][3664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.296 [INFO][3664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.296 [INFO][3664] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.31' Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.299 [INFO][3664] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" host="10.200.4.31" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.310 [INFO][3664] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.31" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.319 [INFO][3664] ipam/ipam.go 489: Trying affinity for 192.168.22.0/26 host="10.200.4.31" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.323 [INFO][3664] ipam/ipam.go 155: Attempting to load block cidr=192.168.22.0/26 host="10.200.4.31" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.325 [INFO][3664] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="10.200.4.31" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.325 [INFO][3664] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" host="10.200.4.31" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.326 [INFO][3664] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7 Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.332 [INFO][3664] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" host="10.200.4.31" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.337 [INFO][3664] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.22.2/26] block=192.168.22.0/26 
handle="k8s-pod-network.d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" host="10.200.4.31" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.337 [INFO][3664] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.2/26] handle="k8s-pod-network.d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" host="10.200.4.31" Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.337 [INFO][3664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:22:20.361799 containerd[1714]: 2025-01-14 13:22:20.337 [INFO][3664] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.2/26] IPv6=[] ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" HandleID="k8s-pod-network.d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Workload="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0" Jan 14 13:22:20.362816 containerd[1714]: 2025-01-14 13:22:20.340 [INFO][3638] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Namespace="default" Pod="nginx-deployment-6d5f899847-7ql46" WorkloadEndpoint="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"1899a804-b6e3-499b-b1f7-287fc28347a1", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.31", ContainerID:"", Pod:"nginx-deployment-6d5f899847-7ql46", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5c0d29fbb35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:22:20.362816 containerd[1714]: 2025-01-14 13:22:20.341 [INFO][3638] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.22.2/32] ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Namespace="default" Pod="nginx-deployment-6d5f899847-7ql46" WorkloadEndpoint="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0" Jan 14 13:22:20.362816 containerd[1714]: 2025-01-14 13:22:20.341 [INFO][3638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c0d29fbb35 ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Namespace="default" Pod="nginx-deployment-6d5f899847-7ql46" WorkloadEndpoint="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0" Jan 14 13:22:20.362816 containerd[1714]: 2025-01-14 13:22:20.348 [INFO][3638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Namespace="default" Pod="nginx-deployment-6d5f899847-7ql46" WorkloadEndpoint="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0" Jan 14 13:22:20.362816 containerd[1714]: 2025-01-14 13:22:20.349 [INFO][3638] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Namespace="default" Pod="nginx-deployment-6d5f899847-7ql46" 
WorkloadEndpoint="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"1899a804-b6e3-499b-b1f7-287fc28347a1", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.31", ContainerID:"d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7", Pod:"nginx-deployment-6d5f899847-7ql46", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5c0d29fbb35", MAC:"4a:57:eb:49:c9:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:22:20.362816 containerd[1714]: 2025-01-14 13:22:20.358 [INFO][3638] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7" Namespace="default" Pod="nginx-deployment-6d5f899847-7ql46" WorkloadEndpoint="10.200.4.31-k8s-nginx--deployment--6d5f899847--7ql46-eth0" Jan 14 13:22:20.377874 systemd[1]: Started cri-containerd-8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d.scope - libcontainer container 
8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d. Jan 14 13:22:20.392484 containerd[1714]: time="2025-01-14T13:22:20.392397488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:22:20.392724 containerd[1714]: time="2025-01-14T13:22:20.392683192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:22:20.392871 containerd[1714]: time="2025-01-14T13:22:20.392844495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:20.393123 containerd[1714]: time="2025-01-14T13:22:20.393079399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:20.414450 containerd[1714]: time="2025-01-14T13:22:20.414408245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl8qh,Uid:cb9275d6-51d8-4705-9177-49dadf876371,Namespace:calico-system,Attempt:9,} returns sandbox id \"8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d\"" Jan 14 13:22:20.416134 containerd[1714]: time="2025-01-14T13:22:20.416105472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 14 13:22:20.416764 systemd[1]: Started cri-containerd-d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7.scope - libcontainer container d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7. 
Jan 14 13:22:20.454538 containerd[1714]: time="2025-01-14T13:22:20.454309892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7ql46,Uid:1899a804-b6e3-499b-b1f7-287fc28347a1,Namespace:default,Attempt:7,} returns sandbox id \"d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7\"" Jan 14 13:22:20.839756 kernel: bpftool[3880]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 14 13:22:20.892407 kubelet[2416]: E0114 13:22:20.892352 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:21.128958 systemd-networkd[1456]: vxlan.calico: Link UP Jan 14 13:22:21.129235 systemd-networkd[1456]: vxlan.calico: Gained carrier Jan 14 13:22:21.340686 systemd-networkd[1456]: calie2c0bc7f24c: Gained IPv6LL Jan 14 13:22:21.468797 systemd-networkd[1456]: cali5c0d29fbb35: Gained IPv6LL Jan 14 13:22:21.892971 kubelet[2416]: E0114 13:22:21.892896 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:21.973666 containerd[1714]: time="2025-01-14T13:22:21.973622634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:21.982390 containerd[1714]: time="2025-01-14T13:22:21.981690165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 14 13:22:21.982390 containerd[1714]: time="2025-01-14T13:22:21.982114672Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:21.988226 containerd[1714]: time="2025-01-14T13:22:21.988163170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:21.989291 containerd[1714]: time="2025-01-14T13:22:21.988786280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.572627107s" Jan 14 13:22:21.989291 containerd[1714]: time="2025-01-14T13:22:21.988818980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 14 13:22:21.990043 containerd[1714]: time="2025-01-14T13:22:21.989806896Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 14 13:22:21.990628 containerd[1714]: time="2025-01-14T13:22:21.990589609Z" level=info msg="CreateContainer within sandbox \"8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 14 13:22:22.020008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604034961.mount: Deactivated successfully. 
Jan 14 13:22:22.033706 containerd[1714]: time="2025-01-14T13:22:22.033664708Z" level=info msg="CreateContainer within sandbox \"8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"78518bb68902707cd8179abcbad7a25e6e38684352637c9fb49908e0d26115e3\"" Jan 14 13:22:22.034238 containerd[1714]: time="2025-01-14T13:22:22.034203816Z" level=info msg="StartContainer for \"78518bb68902707cd8179abcbad7a25e6e38684352637c9fb49908e0d26115e3\"" Jan 14 13:22:22.070966 systemd[1]: Started cri-containerd-78518bb68902707cd8179abcbad7a25e6e38684352637c9fb49908e0d26115e3.scope - libcontainer container 78518bb68902707cd8179abcbad7a25e6e38684352637c9fb49908e0d26115e3. Jan 14 13:22:22.101742 containerd[1714]: time="2025-01-14T13:22:22.101259204Z" level=info msg="StartContainer for \"78518bb68902707cd8179abcbad7a25e6e38684352637c9fb49908e0d26115e3\" returns successfully" Jan 14 13:22:22.894068 kubelet[2416]: E0114 13:22:22.894008 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:23.134458 systemd-networkd[1456]: vxlan.calico: Gained IPv6LL Jan 14 13:22:23.895199 kubelet[2416]: E0114 13:22:23.895144 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:24.764631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253951967.mount: Deactivated successfully. 
Jan 14 13:22:24.896228 kubelet[2416]: E0114 13:22:24.896183 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:25.896733 kubelet[2416]: E0114 13:22:25.896682 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:25.974107 containerd[1714]: time="2025-01-14T13:22:25.974051927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:25.976646 containerd[1714]: time="2025-01-14T13:22:25.976478249Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 14 13:22:25.980438 containerd[1714]: time="2025-01-14T13:22:25.979375375Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:25.984418 containerd[1714]: time="2025-01-14T13:22:25.984383921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:25.986441 containerd[1714]: time="2025-01-14T13:22:25.985346330Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 3.995509832s" Jan 14 13:22:25.986441 containerd[1714]: time="2025-01-14T13:22:25.985382930Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 14 13:22:25.986776 containerd[1714]: 
time="2025-01-14T13:22:25.986755142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 14 13:22:25.987210 containerd[1714]: time="2025-01-14T13:22:25.987183446Z" level=info msg="CreateContainer within sandbox \"d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 14 13:22:26.023780 containerd[1714]: time="2025-01-14T13:22:26.023741379Z" level=info msg="CreateContainer within sandbox \"d2a3aff4694a5f8fb2eb115cc603d5d0af26cc3eb2d964b6ff0a2b3533cd4ac7\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fe48086bde7e1fb37d4815f97306f5500c968e409e43e8e019a6989613f29acb\"" Jan 14 13:22:26.024281 containerd[1714]: time="2025-01-14T13:22:26.024202183Z" level=info msg="StartContainer for \"fe48086bde7e1fb37d4815f97306f5500c968e409e43e8e019a6989613f29acb\"" Jan 14 13:22:26.051776 systemd[1]: run-containerd-runc-k8s.io-fe48086bde7e1fb37d4815f97306f5500c968e409e43e8e019a6989613f29acb-runc.4alwjQ.mount: Deactivated successfully. Jan 14 13:22:26.061772 systemd[1]: Started cri-containerd-fe48086bde7e1fb37d4815f97306f5500c968e409e43e8e019a6989613f29acb.scope - libcontainer container fe48086bde7e1fb37d4815f97306f5500c968e409e43e8e019a6989613f29acb. 
Jan 14 13:22:26.087565 containerd[1714]: time="2025-01-14T13:22:26.087469459Z" level=info msg="StartContainer for \"fe48086bde7e1fb37d4815f97306f5500c968e409e43e8e019a6989613f29acb\" returns successfully" Jan 14 13:22:26.896993 kubelet[2416]: E0114 13:22:26.896846 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:27.897874 kubelet[2416]: E0114 13:22:27.897818 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:28.898267 kubelet[2416]: E0114 13:22:28.898215 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:29.087009 containerd[1714]: time="2025-01-14T13:22:29.086954221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:29.088752 containerd[1714]: time="2025-01-14T13:22:29.088699238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 14 13:22:29.090808 containerd[1714]: time="2025-01-14T13:22:29.090756559Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:29.099691 containerd[1714]: time="2025-01-14T13:22:29.098999842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.112120599s" Jan 14 13:22:29.099691 containerd[1714]: 
time="2025-01-14T13:22:29.099042743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 14 13:22:29.099691 containerd[1714]: time="2025-01-14T13:22:29.099444447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:29.101595 containerd[1714]: time="2025-01-14T13:22:29.101565468Z" level=info msg="CreateContainer within sandbox \"8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 14 13:22:29.146740 containerd[1714]: time="2025-01-14T13:22:29.146685422Z" level=info msg="CreateContainer within sandbox \"8963d0e7f1db4acb51368bbb38d1749a359579042fcb5efcfce2b033a8cadb9d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bbbc67590aad4112e5384ef640fa3dfaabd179aa793be481436a16d0d2c7c7c1\"" Jan 14 13:22:29.147262 containerd[1714]: time="2025-01-14T13:22:29.147226728Z" level=info msg="StartContainer for \"bbbc67590aad4112e5384ef640fa3dfaabd179aa793be481436a16d0d2c7c7c1\"" Jan 14 13:22:29.185754 systemd[1]: Started cri-containerd-bbbc67590aad4112e5384ef640fa3dfaabd179aa793be481436a16d0d2c7c7c1.scope - libcontainer container bbbc67590aad4112e5384ef640fa3dfaabd179aa793be481436a16d0d2c7c7c1. 
Jan 14 13:22:29.215894 containerd[1714]: time="2025-01-14T13:22:29.215848319Z" level=info msg="StartContainer for \"bbbc67590aad4112e5384ef640fa3dfaabd179aa793be481436a16d0d2c7c7c1\" returns successfully" Jan 14 13:22:29.898483 kubelet[2416]: E0114 13:22:29.898419 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:29.985735 kubelet[2416]: I0114 13:22:29.985690 2416 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 14 13:22:29.985735 kubelet[2416]: I0114 13:22:29.985734 2416 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 14 13:22:30.208572 kubelet[2416]: I0114 13:22:30.208442 2416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-hl8qh" podStartSLOduration=24.524179832 podStartE2EDuration="33.208401317s" podCreationTimestamp="2025-01-14 13:21:57 +0000 UTC" firstStartedPulling="2025-01-14 13:22:20.415841468 +0000 UTC m=+24.607513678" lastFinishedPulling="2025-01-14 13:22:29.100062953 +0000 UTC m=+33.291735163" observedRunningTime="2025-01-14 13:22:30.206805501 +0000 UTC m=+34.398477811" watchObservedRunningTime="2025-01-14 13:22:30.208401317 +0000 UTC m=+34.400073627" Jan 14 13:22:30.208818 kubelet[2416]: I0114 13:22:30.208585 2416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-7ql46" podStartSLOduration=11.678647803 podStartE2EDuration="17.208557119s" podCreationTimestamp="2025-01-14 13:22:13 +0000 UTC" firstStartedPulling="2025-01-14 13:22:20.455848317 +0000 UTC m=+24.647520527" lastFinishedPulling="2025-01-14 13:22:25.985757633 +0000 UTC m=+30.177429843" observedRunningTime="2025-01-14 13:22:26.163599452 +0000 UTC m=+30.355271662" 
watchObservedRunningTime="2025-01-14 13:22:30.208557119 +0000 UTC m=+34.400229429" Jan 14 13:22:30.899264 kubelet[2416]: E0114 13:22:30.899200 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:31.245189 kubelet[2416]: I0114 13:22:31.244947 2416 topology_manager.go:215] "Topology Admit Handler" podUID="6ed93a4b-e7a4-48ab-8831-ae1c7cb19ad4" podNamespace="default" podName="nfs-server-provisioner-0" Jan 14 13:22:31.250343 systemd[1]: Created slice kubepods-besteffort-pod6ed93a4b_e7a4_48ab_8831_ae1c7cb19ad4.slice - libcontainer container kubepods-besteffort-pod6ed93a4b_e7a4_48ab_8831_ae1c7cb19ad4.slice. Jan 14 13:22:31.414903 kubelet[2416]: I0114 13:22:31.414810 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbbzs\" (UniqueName: \"kubernetes.io/projected/6ed93a4b-e7a4-48ab-8831-ae1c7cb19ad4-kube-api-access-dbbzs\") pod \"nfs-server-provisioner-0\" (UID: \"6ed93a4b-e7a4-48ab-8831-ae1c7cb19ad4\") " pod="default/nfs-server-provisioner-0" Jan 14 13:22:31.414903 kubelet[2416]: I0114 13:22:31.414897 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6ed93a4b-e7a4-48ab-8831-ae1c7cb19ad4-data\") pod \"nfs-server-provisioner-0\" (UID: \"6ed93a4b-e7a4-48ab-8831-ae1c7cb19ad4\") " pod="default/nfs-server-provisioner-0" Jan 14 13:22:31.554416 containerd[1714]: time="2025-01-14T13:22:31.554378975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6ed93a4b-e7a4-48ab-8831-ae1c7cb19ad4,Namespace:default,Attempt:0,}" Jan 14 13:22:31.694794 systemd-networkd[1456]: cali60e51b789ff: Link UP Jan 14 13:22:31.695048 systemd-networkd[1456]: cali60e51b789ff: Gained carrier Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.626 [INFO][4158] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.31-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 6ed93a4b-e7a4-48ab-8831-ae1c7cb19ad4 1353 0 2025-01-14 13:22:31 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.4.31 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.31-k8s-nfs--server--provisioner--0-" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.626 [INFO][4158] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.31-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.651 [INFO][4168] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" HandleID="k8s-pod-network.b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Workload="10.200.4.31-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.659 
[INFO][4168] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" HandleID="k8s-pod-network.b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Workload="10.200.4.31-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fc1f0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.31", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-14 13:22:31.650998448 +0000 UTC"}, Hostname:"10.200.4.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.660 [INFO][4168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.660 [INFO][4168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.660 [INFO][4168] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.31' Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.661 [INFO][4168] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" host="10.200.4.31" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.664 [INFO][4168] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.31" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.671 [INFO][4168] ipam/ipam.go 489: Trying affinity for 192.168.22.0/26 host="10.200.4.31" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.673 [INFO][4168] ipam/ipam.go 155: Attempting to load block cidr=192.168.22.0/26 host="10.200.4.31" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.674 [INFO][4168] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="10.200.4.31" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.674 [INFO][4168] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" host="10.200.4.31" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.676 [INFO][4168] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1 Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.684 [INFO][4168] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" host="10.200.4.31" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.689 [INFO][4168] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.22.3/26] block=192.168.22.0/26 
handle="k8s-pod-network.b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" host="10.200.4.31" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.689 [INFO][4168] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.3/26] handle="k8s-pod-network.b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" host="10.200.4.31" Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.689 [INFO][4168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:22:31.713397 containerd[1714]: 2025-01-14 13:22:31.689 [INFO][4168] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.3/26] IPv6=[] ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" HandleID="k8s-pod-network.b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Workload="10.200.4.31-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:22:31.714363 containerd[1714]: 2025-01-14 13:22:31.691 [INFO][4158] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.31-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.31-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6ed93a4b-e7a4-48ab-8831-ae1c7cb19ad4", ResourceVersion:"1353", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 22, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.31", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:22:31.714363 containerd[1714]: 2025-01-14 13:22:31.691 [INFO][4158] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.22.3/32] ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.31-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:22:31.714363 containerd[1714]: 2025-01-14 13:22:31.691 [INFO][4158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.31-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:22:31.714363 containerd[1714]: 2025-01-14 13:22:31.693 [INFO][4158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.31-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:22:31.714689 containerd[1714]: 2025-01-14 13:22:31.693 [INFO][4158] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.31-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.31-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6ed93a4b-e7a4-48ab-8831-ae1c7cb19ad4", ResourceVersion:"1353", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 22, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.31", ContainerID:"b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"96:43:31:a3:e8:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:22:31.714689 containerd[1714]: 2025-01-14 13:22:31.712 [INFO][4158] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.31-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:22:31.741678 containerd[1714]: time="2025-01-14T13:22:31.741499960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:22:31.741678 containerd[1714]: time="2025-01-14T13:22:31.741570560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:22:31.742036 containerd[1714]: time="2025-01-14T13:22:31.741620661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:31.742036 containerd[1714]: time="2025-01-14T13:22:31.741916664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:31.770777 systemd[1]: Started cri-containerd-b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1.scope - libcontainer container b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1. Jan 14 13:22:31.810116 containerd[1714]: time="2025-01-14T13:22:31.810009050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6ed93a4b-e7a4-48ab-8831-ae1c7cb19ad4,Namespace:default,Attempt:0,} returns sandbox id \"b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1\"" Jan 14 13:22:31.812762 containerd[1714]: time="2025-01-14T13:22:31.812503675Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 14 13:22:31.900193 kubelet[2416]: E0114 13:22:31.900135 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:32.901344 kubelet[2416]: E0114 13:22:32.901285 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:33.500779 systemd-networkd[1456]: cali60e51b789ff: Gained IPv6LL Jan 14 13:22:33.902113 kubelet[2416]: E0114 13:22:33.902068 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 14 13:22:34.387733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215374579.mount: Deactivated successfully. Jan 14 13:22:34.902748 kubelet[2416]: E0114 13:22:34.902703 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:35.903882 kubelet[2416]: E0114 13:22:35.903786 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:36.875773 kubelet[2416]: E0114 13:22:36.875717 2416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:36.905025 kubelet[2416]: E0114 13:22:36.904965 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:37.906142 kubelet[2416]: E0114 13:22:37.906045 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:38.906788 kubelet[2416]: E0114 13:22:38.906691 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:39.907634 kubelet[2416]: E0114 13:22:39.907562 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:40.908926 kubelet[2416]: E0114 13:22:40.908681 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:41.156548 containerd[1714]: time="2025-01-14T13:22:41.156483348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:41.158315 containerd[1714]: time="2025-01-14T13:22:41.158252998Z" level=info msg="stop pulling image 
registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 14 13:22:41.162177 containerd[1714]: time="2025-01-14T13:22:41.162051506Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:41.166538 containerd[1714]: time="2025-01-14T13:22:41.166501632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:41.167565 containerd[1714]: time="2025-01-14T13:22:41.167417558Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 9.354876783s" Jan 14 13:22:41.167565 containerd[1714]: time="2025-01-14T13:22:41.167455159Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 14 13:22:41.169691 containerd[1714]: time="2025-01-14T13:22:41.169662522Z" level=info msg="CreateContainer within sandbox \"b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 14 13:22:41.197656 containerd[1714]: time="2025-01-14T13:22:41.197598713Z" level=info msg="CreateContainer within sandbox \"b6f227b2824b7bba0b63a163598692d52c94b68c4bf9d143df17e340ccf3a8e1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id 
\"b7b42e1917e64a9940d7257b7bfb284581eb53b74f738fafad1b7601753ed004\"" Jan 14 13:22:41.198117 containerd[1714]: time="2025-01-14T13:22:41.198091327Z" level=info msg="StartContainer for \"b7b42e1917e64a9940d7257b7bfb284581eb53b74f738fafad1b7601753ed004\"" Jan 14 13:22:41.231991 systemd[1]: Started cri-containerd-b7b42e1917e64a9940d7257b7bfb284581eb53b74f738fafad1b7601753ed004.scope - libcontainer container b7b42e1917e64a9940d7257b7bfb284581eb53b74f738fafad1b7601753ed004. Jan 14 13:22:41.264177 containerd[1714]: time="2025-01-14T13:22:41.262426150Z" level=info msg="StartContainer for \"b7b42e1917e64a9940d7257b7bfb284581eb53b74f738fafad1b7601753ed004\" returns successfully" Jan 14 13:22:41.909708 kubelet[2416]: E0114 13:22:41.909592 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:42.234503 kubelet[2416]: I0114 13:22:42.234325 2416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.8786846430000002 podStartE2EDuration="11.234286339s" podCreationTimestamp="2025-01-14 13:22:31 +0000 UTC" firstStartedPulling="2025-01-14 13:22:31.812115671 +0000 UTC m=+36.003787881" lastFinishedPulling="2025-01-14 13:22:41.167717367 +0000 UTC m=+45.359389577" observedRunningTime="2025-01-14 13:22:42.233970836 +0000 UTC m=+46.425643046" watchObservedRunningTime="2025-01-14 13:22:42.234286339 +0000 UTC m=+46.425958549" Jan 14 13:22:42.910850 kubelet[2416]: E0114 13:22:42.910789 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:43.911822 kubelet[2416]: E0114 13:22:43.911763 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:44.912151 kubelet[2416]: E0114 13:22:44.912089 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 14 13:22:45.912681 kubelet[2416]: E0114 13:22:45.912626 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:46.913697 kubelet[2416]: E0114 13:22:46.913637 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:47.914681 kubelet[2416]: E0114 13:22:47.914605 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:48.915993 kubelet[2416]: E0114 13:22:48.915864 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:49.916097 kubelet[2416]: E0114 13:22:49.916047 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:50.916696 kubelet[2416]: E0114 13:22:50.916642 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:51.917845 kubelet[2416]: E0114 13:22:51.917787 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:52.918810 kubelet[2416]: E0114 13:22:52.918746 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:53.919253 kubelet[2416]: E0114 13:22:53.919190 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:54.919957 kubelet[2416]: E0114 13:22:54.919902 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:55.920442 kubelet[2416]: E0114 13:22:55.920386 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 14 13:22:56.876757 kubelet[2416]: E0114 13:22:56.876698 2416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:56.902192 containerd[1714]: time="2025-01-14T13:22:56.902070045Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\"" Jan 14 13:22:56.902914 containerd[1714]: time="2025-01-14T13:22:56.902201047Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully" Jan 14 13:22:56.902914 containerd[1714]: time="2025-01-14T13:22:56.902254048Z" level=info msg="StopPodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully" Jan 14 13:22:56.902914 containerd[1714]: time="2025-01-14T13:22:56.902687857Z" level=info msg="RemovePodSandbox for \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\"" Jan 14 13:22:56.902914 containerd[1714]: time="2025-01-14T13:22:56.902719957Z" level=info msg="Forcibly stopping sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\"" Jan 14 13:22:56.902914 containerd[1714]: time="2025-01-14T13:22:56.902800959Z" level=info msg="TearDown network for sandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" successfully" Jan 14 13:22:56.912933 containerd[1714]: time="2025-01-14T13:22:56.912822954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.913209 containerd[1714]: time="2025-01-14T13:22:56.912951756Z" level=info msg="RemovePodSandbox \"6383bc3bcc48c19506522e9b1e4290c4eff372099d6c8dea6239c7c11635c36f\" returns successfully" Jan 14 13:22:56.913755 containerd[1714]: time="2025-01-14T13:22:56.913641870Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\"" Jan 14 13:22:56.914009 containerd[1714]: time="2025-01-14T13:22:56.913768172Z" level=info msg="TearDown network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" successfully" Jan 14 13:22:56.914009 containerd[1714]: time="2025-01-14T13:22:56.913787473Z" level=info msg="StopPodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" returns successfully" Jan 14 13:22:56.914250 containerd[1714]: time="2025-01-14T13:22:56.914168080Z" level=info msg="RemovePodSandbox for \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\"" Jan 14 13:22:56.914330 containerd[1714]: time="2025-01-14T13:22:56.914231381Z" level=info msg="Forcibly stopping sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\"" Jan 14 13:22:56.914419 containerd[1714]: time="2025-01-14T13:22:56.914356984Z" level=info msg="TearDown network for sandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" successfully" Jan 14 13:22:56.921556 kubelet[2416]: E0114 13:22:56.921492 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:56.922037 containerd[1714]: time="2025-01-14T13:22:56.922009433Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.922134 containerd[1714]: time="2025-01-14T13:22:56.922054134Z" level=info msg="RemovePodSandbox \"5ae0b190cda8f7c34d6c7deba519875c3a421376c96f47f5f6efe5f3d2696f27\" returns successfully" Jan 14 13:22:56.922369 containerd[1714]: time="2025-01-14T13:22:56.922344139Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\"" Jan 14 13:22:56.922456 containerd[1714]: time="2025-01-14T13:22:56.922433141Z" level=info msg="TearDown network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" successfully" Jan 14 13:22:56.922456 containerd[1714]: time="2025-01-14T13:22:56.922448141Z" level=info msg="StopPodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" returns successfully" Jan 14 13:22:56.922821 containerd[1714]: time="2025-01-14T13:22:56.922751247Z" level=info msg="RemovePodSandbox for \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\"" Jan 14 13:22:56.922821 containerd[1714]: time="2025-01-14T13:22:56.922778848Z" level=info msg="Forcibly stopping sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\"" Jan 14 13:22:56.923001 containerd[1714]: time="2025-01-14T13:22:56.922857649Z" level=info msg="TearDown network for sandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" successfully" Jan 14 13:22:56.930270 containerd[1714]: time="2025-01-14T13:22:56.930169291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.930547 containerd[1714]: time="2025-01-14T13:22:56.930288494Z" level=info msg="RemovePodSandbox \"835245c00733621152c5cb6ba4231b2b7c44b1e910499d2dffe7df1211e07979\" returns successfully" Jan 14 13:22:56.930839 containerd[1714]: time="2025-01-14T13:22:56.930761303Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\"" Jan 14 13:22:56.930964 containerd[1714]: time="2025-01-14T13:22:56.930854505Z" level=info msg="TearDown network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" successfully" Jan 14 13:22:56.930964 containerd[1714]: time="2025-01-14T13:22:56.930869605Z" level=info msg="StopPodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" returns successfully" Jan 14 13:22:56.931185 containerd[1714]: time="2025-01-14T13:22:56.931146210Z" level=info msg="RemovePodSandbox for \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\"" Jan 14 13:22:56.931185 containerd[1714]: time="2025-01-14T13:22:56.931171511Z" level=info msg="Forcibly stopping sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\"" Jan 14 13:22:56.931278 containerd[1714]: time="2025-01-14T13:22:56.931246112Z" level=info msg="TearDown network for sandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" successfully" Jan 14 13:22:56.938595 containerd[1714]: time="2025-01-14T13:22:56.938305950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.938595 containerd[1714]: time="2025-01-14T13:22:56.938354651Z" level=info msg="RemovePodSandbox \"2b64033eb1262349ec946628d2e1a020770f2d0e817db7b6537198ac3ab98930\" returns successfully" Jan 14 13:22:56.938949 containerd[1714]: time="2025-01-14T13:22:56.938901061Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\"" Jan 14 13:22:56.939063 containerd[1714]: time="2025-01-14T13:22:56.939034464Z" level=info msg="TearDown network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" successfully" Jan 14 13:22:56.939063 containerd[1714]: time="2025-01-14T13:22:56.939056164Z" level=info msg="StopPodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" returns successfully" Jan 14 13:22:56.939357 containerd[1714]: time="2025-01-14T13:22:56.939335570Z" level=info msg="RemovePodSandbox for \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\"" Jan 14 13:22:56.939477 containerd[1714]: time="2025-01-14T13:22:56.939449172Z" level=info msg="Forcibly stopping sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\"" Jan 14 13:22:56.939572 containerd[1714]: time="2025-01-14T13:22:56.939530774Z" level=info msg="TearDown network for sandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" successfully" Jan 14 13:22:56.946871 containerd[1714]: time="2025-01-14T13:22:56.946844016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.946962 containerd[1714]: time="2025-01-14T13:22:56.946883117Z" level=info msg="RemovePodSandbox \"69c97a8de8d4c4e333faeeaffd44789ba1b8e80806ac465018f34b4fd8a77145\" returns successfully" Jan 14 13:22:56.947211 containerd[1714]: time="2025-01-14T13:22:56.947176922Z" level=info msg="StopPodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\"" Jan 14 13:22:56.947282 containerd[1714]: time="2025-01-14T13:22:56.947264224Z" level=info msg="TearDown network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" successfully" Jan 14 13:22:56.947324 containerd[1714]: time="2025-01-14T13:22:56.947278724Z" level=info msg="StopPodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" returns successfully" Jan 14 13:22:56.947690 containerd[1714]: time="2025-01-14T13:22:56.947590930Z" level=info msg="RemovePodSandbox for \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\"" Jan 14 13:22:56.947690 containerd[1714]: time="2025-01-14T13:22:56.947631231Z" level=info msg="Forcibly stopping sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\"" Jan 14 13:22:56.947838 containerd[1714]: time="2025-01-14T13:22:56.947709033Z" level=info msg="TearDown network for sandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" successfully" Jan 14 13:22:56.953776 containerd[1714]: time="2025-01-14T13:22:56.953737050Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.953860 containerd[1714]: time="2025-01-14T13:22:56.953781751Z" level=info msg="RemovePodSandbox \"7c6fefc80499fca5b10bfdb2077859837d4a794481431f9f0a90fa376300d977\" returns successfully" Jan 14 13:22:56.954184 containerd[1714]: time="2025-01-14T13:22:56.954141758Z" level=info msg="StopPodSandbox for \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\"" Jan 14 13:22:56.954299 containerd[1714]: time="2025-01-14T13:22:56.954238760Z" level=info msg="TearDown network for sandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\" successfully" Jan 14 13:22:56.954365 containerd[1714]: time="2025-01-14T13:22:56.954294861Z" level=info msg="StopPodSandbox for \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\" returns successfully" Jan 14 13:22:56.954578 containerd[1714]: time="2025-01-14T13:22:56.954553066Z" level=info msg="RemovePodSandbox for \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\"" Jan 14 13:22:56.954656 containerd[1714]: time="2025-01-14T13:22:56.954585266Z" level=info msg="Forcibly stopping sandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\"" Jan 14 13:22:56.954716 containerd[1714]: time="2025-01-14T13:22:56.954679068Z" level=info msg="TearDown network for sandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\" successfully" Jan 14 13:22:56.960425 containerd[1714]: time="2025-01-14T13:22:56.960398880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.960544 containerd[1714]: time="2025-01-14T13:22:56.960436880Z" level=info msg="RemovePodSandbox \"f20bb5d66d0d799707722983809b757e326a321184cee664c0612c23a875df90\" returns successfully" Jan 14 13:22:56.960760 containerd[1714]: time="2025-01-14T13:22:56.960734686Z" level=info msg="StopPodSandbox for \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\"" Jan 14 13:22:56.960872 containerd[1714]: time="2025-01-14T13:22:56.960825588Z" level=info msg="TearDown network for sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\" successfully" Jan 14 13:22:56.960872 containerd[1714]: time="2025-01-14T13:22:56.960843488Z" level=info msg="StopPodSandbox for \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\" returns successfully" Jan 14 13:22:56.961113 containerd[1714]: time="2025-01-14T13:22:56.961094193Z" level=info msg="RemovePodSandbox for \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\"" Jan 14 13:22:56.961180 containerd[1714]: time="2025-01-14T13:22:56.961119994Z" level=info msg="Forcibly stopping sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\"" Jan 14 13:22:56.961322 containerd[1714]: time="2025-01-14T13:22:56.961227096Z" level=info msg="TearDown network for sandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\" successfully" Jan 14 13:22:56.970307 containerd[1714]: time="2025-01-14T13:22:56.970119769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.970307 containerd[1714]: time="2025-01-14T13:22:56.970232271Z" level=info msg="RemovePodSandbox \"17adf99fd45314ddfff514ca856e9ef87a61b8dee624bd84e65e81519bf6109b\" returns successfully" Jan 14 13:22:56.970765 containerd[1714]: time="2025-01-14T13:22:56.970744881Z" level=info msg="StopPodSandbox for \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\"" Jan 14 13:22:56.970850 containerd[1714]: time="2025-01-14T13:22:56.970833183Z" level=info msg="TearDown network for sandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\" successfully" Jan 14 13:22:56.970899 containerd[1714]: time="2025-01-14T13:22:56.970847583Z" level=info msg="StopPodSandbox for \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\" returns successfully" Jan 14 13:22:56.971126 containerd[1714]: time="2025-01-14T13:22:56.971098388Z" level=info msg="RemovePodSandbox for \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\"" Jan 14 13:22:56.971126 containerd[1714]: time="2025-01-14T13:22:56.971122688Z" level=info msg="Forcibly stopping sandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\"" Jan 14 13:22:56.971237 containerd[1714]: time="2025-01-14T13:22:56.971194290Z" level=info msg="TearDown network for sandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\" successfully" Jan 14 13:22:56.978821 containerd[1714]: time="2025-01-14T13:22:56.978697636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.979085 containerd[1714]: time="2025-01-14T13:22:56.978813538Z" level=info msg="RemovePodSandbox \"da645a185986d19a66290e0d97d5c6fb69e7cc71f21dfbcee7bf99426a50e0ee\" returns successfully" Jan 14 13:22:56.979507 containerd[1714]: time="2025-01-14T13:22:56.979407549Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\"" Jan 14 13:22:56.979507 containerd[1714]: time="2025-01-14T13:22:56.979490751Z" level=info msg="TearDown network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" successfully" Jan 14 13:22:56.979507 containerd[1714]: time="2025-01-14T13:22:56.979505251Z" level=info msg="StopPodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" returns successfully" Jan 14 13:22:56.979836 containerd[1714]: time="2025-01-14T13:22:56.979812157Z" level=info msg="RemovePodSandbox for \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\"" Jan 14 13:22:56.979912 containerd[1714]: time="2025-01-14T13:22:56.979842058Z" level=info msg="Forcibly stopping sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\"" Jan 14 13:22:56.979964 containerd[1714]: time="2025-01-14T13:22:56.979908759Z" level=info msg="TearDown network for sandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" successfully" Jan 14 13:22:56.986168 containerd[1714]: time="2025-01-14T13:22:56.986074279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.986168 containerd[1714]: time="2025-01-14T13:22:56.986114180Z" level=info msg="RemovePodSandbox \"d7bb7130b04fcdc1d8dc3593b39a423a377d4f405f7f55367d3362f6163d9c00\" returns successfully" Jan 14 13:22:56.986510 containerd[1714]: time="2025-01-14T13:22:56.986485987Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\"" Jan 14 13:22:56.986585 containerd[1714]: time="2025-01-14T13:22:56.986568089Z" level=info msg="TearDown network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" successfully" Jan 14 13:22:56.986644 containerd[1714]: time="2025-01-14T13:22:56.986582289Z" level=info msg="StopPodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" returns successfully" Jan 14 13:22:56.987088 containerd[1714]: time="2025-01-14T13:22:56.987020898Z" level=info msg="RemovePodSandbox for \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\"" Jan 14 13:22:56.987088 containerd[1714]: time="2025-01-14T13:22:56.987046198Z" level=info msg="Forcibly stopping sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\"" Jan 14 13:22:56.987206 containerd[1714]: time="2025-01-14T13:22:56.987116799Z" level=info msg="TearDown network for sandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" successfully" Jan 14 13:22:56.994342 containerd[1714]: time="2025-01-14T13:22:56.994316440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:56.994424 containerd[1714]: time="2025-01-14T13:22:56.994353940Z" level=info msg="RemovePodSandbox \"fa6d2a1ac5726135150d369cb4b42081e6582ca267b0140e4630118f915b5522\" returns successfully" Jan 14 13:22:56.994656 containerd[1714]: time="2025-01-14T13:22:56.994629746Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\"" Jan 14 13:22:56.994793 containerd[1714]: time="2025-01-14T13:22:56.994738848Z" level=info msg="TearDown network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" successfully" Jan 14 13:22:56.994793 containerd[1714]: time="2025-01-14T13:22:56.994757048Z" level=info msg="StopPodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" returns successfully" Jan 14 13:22:56.995112 containerd[1714]: time="2025-01-14T13:22:56.995034053Z" level=info msg="RemovePodSandbox for \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\"" Jan 14 13:22:56.995112 containerd[1714]: time="2025-01-14T13:22:56.995060754Z" level=info msg="Forcibly stopping sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\"" Jan 14 13:22:56.995238 containerd[1714]: time="2025-01-14T13:22:56.995139156Z" level=info msg="TearDown network for sandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" successfully" Jan 14 13:22:57.002188 containerd[1714]: time="2025-01-14T13:22:57.002065590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:22:57.002188 containerd[1714]: time="2025-01-14T13:22:57.002166992Z" level=info msg="RemovePodSandbox \"3efb98b3c8cb7859f9858b18969240a93177349c58f29a03a9767f7c142b4166\" returns successfully"
Jan 14 13:22:57.002589 containerd[1714]: time="2025-01-14T13:22:57.002487999Z" level=info msg="StopPodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\""
Jan 14 13:22:57.002589 containerd[1714]: time="2025-01-14T13:22:57.002582600Z" level=info msg="TearDown network for sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" successfully"
Jan 14 13:22:57.002753 containerd[1714]: time="2025-01-14T13:22:57.002598201Z" level=info msg="StopPodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" returns successfully"
Jan 14 13:22:57.002888 containerd[1714]: time="2025-01-14T13:22:57.002864406Z" level=info msg="RemovePodSandbox for \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\""
Jan 14 13:22:57.002946 containerd[1714]: time="2025-01-14T13:22:57.002896806Z" level=info msg="Forcibly stopping sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\""
Jan 14 13:22:57.003050 containerd[1714]: time="2025-01-14T13:22:57.002966108Z" level=info msg="TearDown network for sandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" successfully"
Jan 14 13:22:57.009471 containerd[1714]: time="2025-01-14T13:22:57.009442034Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:22:57.009551 containerd[1714]: time="2025-01-14T13:22:57.009481535Z" level=info msg="RemovePodSandbox \"cb6d490621f28117005fc8ab51aea1ef2d9b40bd97f80fb7faf4e3572b31fe87\" returns successfully"
Jan 14 13:22:57.009861 containerd[1714]: time="2025-01-14T13:22:57.009770140Z" level=info msg="StopPodSandbox for \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\""
Jan 14 13:22:57.009937 containerd[1714]: time="2025-01-14T13:22:57.009863642Z" level=info msg="TearDown network for sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\" successfully"
Jan 14 13:22:57.009937 containerd[1714]: time="2025-01-14T13:22:57.009879042Z" level=info msg="StopPodSandbox for \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\" returns successfully"
Jan 14 13:22:57.010159 containerd[1714]: time="2025-01-14T13:22:57.010134347Z" level=info msg="RemovePodSandbox for \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\""
Jan 14 13:22:57.010218 containerd[1714]: time="2025-01-14T13:22:57.010166848Z" level=info msg="Forcibly stopping sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\""
Jan 14 13:22:57.010274 containerd[1714]: time="2025-01-14T13:22:57.010238549Z" level=info msg="TearDown network for sandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\" successfully"
Jan 14 13:22:57.016863 containerd[1714]: time="2025-01-14T13:22:57.016835878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:22:57.016956 containerd[1714]: time="2025-01-14T13:22:57.016872578Z" level=info msg="RemovePodSandbox \"c1c459fa72427c57eefe24e9c8c091b3e7f94cea35ebd91b9a4570d58c2f2abf\" returns successfully"
Jan 14 13:22:57.017199 containerd[1714]: time="2025-01-14T13:22:57.017151884Z" level=info msg="StopPodSandbox for \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\""
Jan 14 13:22:57.017257 containerd[1714]: time="2025-01-14T13:22:57.017238586Z" level=info msg="TearDown network for sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\" successfully"
Jan 14 13:22:57.017317 containerd[1714]: time="2025-01-14T13:22:57.017253586Z" level=info msg="StopPodSandbox for \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\" returns successfully"
Jan 14 13:22:57.017526 containerd[1714]: time="2025-01-14T13:22:57.017499491Z" level=info msg="RemovePodSandbox for \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\""
Jan 14 13:22:57.017589 containerd[1714]: time="2025-01-14T13:22:57.017526691Z" level=info msg="Forcibly stopping sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\""
Jan 14 13:22:57.017753 containerd[1714]: time="2025-01-14T13:22:57.017599593Z" level=info msg="TearDown network for sandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\" successfully"
Jan 14 13:22:57.025431 containerd[1714]: time="2025-01-14T13:22:57.025396144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:22:57.025515 containerd[1714]: time="2025-01-14T13:22:57.025433545Z" level=info msg="RemovePodSandbox \"50a7232c4dc424c60b774ccb05d2e4be283635f4c1f87031477df43bd8ed28e9\" returns successfully"
Jan 14 13:22:57.025824 containerd[1714]: time="2025-01-14T13:22:57.025730551Z" level=info msg="StopPodSandbox for \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\""
Jan 14 13:22:57.025906 containerd[1714]: time="2025-01-14T13:22:57.025826053Z" level=info msg="TearDown network for sandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\" successfully"
Jan 14 13:22:57.025906 containerd[1714]: time="2025-01-14T13:22:57.025841053Z" level=info msg="StopPodSandbox for \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\" returns successfully"
Jan 14 13:22:57.026156 containerd[1714]: time="2025-01-14T13:22:57.026126358Z" level=info msg="RemovePodSandbox for \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\""
Jan 14 13:22:57.026210 containerd[1714]: time="2025-01-14T13:22:57.026157059Z" level=info msg="Forcibly stopping sandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\""
Jan 14 13:22:57.026257 containerd[1714]: time="2025-01-14T13:22:57.026227360Z" level=info msg="TearDown network for sandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\" successfully"
Jan 14 13:22:57.032315 containerd[1714]: time="2025-01-14T13:22:57.032289478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:22:57.032451 containerd[1714]: time="2025-01-14T13:22:57.032324479Z" level=info msg="RemovePodSandbox \"6dea37a9494fa233563500de123988362b5e54a093d6d19bea70c2b8d7da35d7\" returns successfully"
Jan 14 13:22:57.922044 kubelet[2416]: E0114 13:22:57.921998 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:58.922796 kubelet[2416]: E0114 13:22:58.922728 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:59.923755 kubelet[2416]: E0114 13:22:59.923702 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:00.924154 kubelet[2416]: E0114 13:23:00.924087 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:01.924779 kubelet[2416]: E0114 13:23:01.924721 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:02.925152 kubelet[2416]: E0114 13:23:02.925098 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:03.926139 kubelet[2416]: E0114 13:23:03.926079 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:04.926786 kubelet[2416]: E0114 13:23:04.926722 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:05.927904 kubelet[2416]: E0114 13:23:05.927846 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:05.985521 kubelet[2416]: I0114 13:23:05.985476 2416 topology_manager.go:215] "Topology Admit Handler" podUID="6f26c3e9-e670-4f4c-abcd-ebe2939dafdf" podNamespace="default" podName="test-pod-1"
Jan 14 13:23:05.991139 systemd[1]: Created slice kubepods-besteffort-pod6f26c3e9_e670_4f4c_abcd_ebe2939dafdf.slice - libcontainer container kubepods-besteffort-pod6f26c3e9_e670_4f4c_abcd_ebe2939dafdf.slice.
Jan 14 13:23:06.101832 kubelet[2416]: I0114 13:23:06.101787 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7928feb-3221-4a88-a162-d51b02790ad6\" (UniqueName: \"kubernetes.io/nfs/6f26c3e9-e670-4f4c-abcd-ebe2939dafdf-pvc-c7928feb-3221-4a88-a162-d51b02790ad6\") pod \"test-pod-1\" (UID: \"6f26c3e9-e670-4f4c-abcd-ebe2939dafdf\") " pod="default/test-pod-1"
Jan 14 13:23:06.102072 kubelet[2416]: I0114 13:23:06.101849 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdss4\" (UniqueName: \"kubernetes.io/projected/6f26c3e9-e670-4f4c-abcd-ebe2939dafdf-kube-api-access-zdss4\") pod \"test-pod-1\" (UID: \"6f26c3e9-e670-4f4c-abcd-ebe2939dafdf\") " pod="default/test-pod-1"
Jan 14 13:23:06.302646 kernel: FS-Cache: Loaded
Jan 14 13:23:06.406803 kernel: RPC: Registered named UNIX socket transport module.
Jan 14 13:23:06.406942 kernel: RPC: Registered udp transport module.
Jan 14 13:23:06.406968 kernel: RPC: Registered tcp transport module.
Jan 14 13:23:06.409882 kernel: RPC: Registered tcp-with-tls transport module.
Jan 14 13:23:06.409962 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 14 13:23:06.773221 kernel: NFS: Registering the id_resolver key type
Jan 14 13:23:06.773345 kernel: Key type id_resolver registered
Jan 14 13:23:06.773369 kernel: Key type id_legacy registered
Jan 14 13:23:06.868804 nfsidmap[4390]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.0-a-950c255954'
Jan 14 13:23:06.882021 nfsidmap[4391]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.0-a-950c255954'
Jan 14 13:23:06.928330 kubelet[2416]: E0114 13:23:06.928285 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:07.194402 containerd[1714]: time="2025-01-14T13:23:07.194246233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6f26c3e9-e670-4f4c-abcd-ebe2939dafdf,Namespace:default,Attempt:0,}"
Jan 14 13:23:07.338191 systemd-networkd[1456]: cali5ec59c6bf6e: Link UP
Jan 14 13:23:07.338970 systemd-networkd[1456]: cali5ec59c6bf6e: Gained carrier
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.263 [INFO][4393] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.31-k8s-test--pod--1-eth0 default 6f26c3e9-e670-4f4c-abcd-ebe2939dafdf 1460 0 2025-01-14 13:22:32 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.31 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.31-k8s-test--pod--1-"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.263 [INFO][4393] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.31-k8s-test--pod--1-eth0"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.292 [INFO][4405] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" HandleID="k8s-pod-network.8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Workload="10.200.4.31-k8s-test--pod--1-eth0"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.305 [INFO][4405] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" HandleID="k8s-pod-network.8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Workload="10.200.4.31-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051cc0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.31", "pod":"test-pod-1", "timestamp":"2025-01-14 13:23:07.292792703 +0000 UTC"}, Hostname:"10.200.4.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.305 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.305 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.305 [INFO][4405] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.31'
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.306 [INFO][4405] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" host="10.200.4.31"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.310 [INFO][4405] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.31"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.313 [INFO][4405] ipam/ipam.go 489: Trying affinity for 192.168.22.0/26 host="10.200.4.31"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.315 [INFO][4405] ipam/ipam.go 155: Attempting to load block cidr=192.168.22.0/26 host="10.200.4.31"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.317 [INFO][4405] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="10.200.4.31"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.317 [INFO][4405] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" host="10.200.4.31"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.318 [INFO][4405] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.325 [INFO][4405] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" host="10.200.4.31"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.332 [INFO][4405] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.22.4/26] block=192.168.22.0/26 handle="k8s-pod-network.8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" host="10.200.4.31"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.332 [INFO][4405] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.4/26] handle="k8s-pod-network.8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" host="10.200.4.31"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.332 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.332 [INFO][4405] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.4/26] IPv6=[] ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" HandleID="k8s-pod-network.8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Workload="10.200.4.31-k8s-test--pod--1-eth0"
Jan 14 13:23:07.350119 containerd[1714]: 2025-01-14 13:23:07.334 [INFO][4393] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.31-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.31-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"6f26c3e9-e670-4f4c-abcd-ebe2939dafdf", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 22, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.31", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:23:07.353821 containerd[1714]: 2025-01-14 13:23:07.334 [INFO][4393] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.22.4/32] ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.31-k8s-test--pod--1-eth0"
Jan 14 13:23:07.353821 containerd[1714]: 2025-01-14 13:23:07.334 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.31-k8s-test--pod--1-eth0"
Jan 14 13:23:07.353821 containerd[1714]: 2025-01-14 13:23:07.340 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.31-k8s-test--pod--1-eth0"
Jan 14 13:23:07.353821 containerd[1714]: 2025-01-14 13:23:07.340 [INFO][4393] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.31-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.31-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"6f26c3e9-e670-4f4c-abcd-ebe2939dafdf", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 22, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.31", ContainerID:"8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ae:5e:bd:6c:86:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:23:07.353821 containerd[1714]: 2025-01-14 13:23:07.348 [INFO][4393] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.31-k8s-test--pod--1-eth0"
Jan 14 13:23:07.379269 containerd[1714]: time="2025-01-14T13:23:07.378895195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:23:07.379269 containerd[1714]: time="2025-01-14T13:23:07.378971599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:23:07.379269 containerd[1714]: time="2025-01-14T13:23:07.378987799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:23:07.379269 containerd[1714]: time="2025-01-14T13:23:07.379076003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:23:07.406771 systemd[1]: Started cri-containerd-8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914.scope - libcontainer container 8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914.
Jan 14 13:23:07.445496 containerd[1714]: time="2025-01-14T13:23:07.445171468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6f26c3e9-e670-4f4c-abcd-ebe2939dafdf,Namespace:default,Attempt:0,} returns sandbox id \"8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914\""
Jan 14 13:23:07.447173 containerd[1714]: time="2025-01-14T13:23:07.447095857Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 14 13:23:07.928935 kubelet[2416]: E0114 13:23:07.928867 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:08.063123 containerd[1714]: time="2025-01-14T13:23:08.063064619Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:23:08.066861 containerd[1714]: time="2025-01-14T13:23:08.065583936Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 14 13:23:08.068188 containerd[1714]: time="2025-01-14T13:23:08.068151355Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 621.018196ms"
Jan 14 13:23:08.068188 containerd[1714]: time="2025-01-14T13:23:08.068186857Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 14 13:23:08.072381 containerd[1714]: time="2025-01-14T13:23:08.072203343Z" level=info msg="CreateContainer within sandbox \"8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 14 13:23:08.103165 containerd[1714]: time="2025-01-14T13:23:08.103073475Z" level=info msg="CreateContainer within sandbox \"8071260d0ee718f857f7f927d0b098c06c204c1135a7f6c4ab7b0143eb336914\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5f9cb96d72e2dacf64064495b184a93ce23beea01118185a969767f41356f9d8\""
Jan 14 13:23:08.103925 containerd[1714]: time="2025-01-14T13:23:08.103694303Z" level=info msg="StartContainer for \"5f9cb96d72e2dacf64064495b184a93ce23beea01118185a969767f41356f9d8\""
Jan 14 13:23:08.137788 systemd[1]: Started cri-containerd-5f9cb96d72e2dacf64064495b184a93ce23beea01118185a969767f41356f9d8.scope - libcontainer container 5f9cb96d72e2dacf64064495b184a93ce23beea01118185a969767f41356f9d8.
Jan 14 13:23:08.165971 containerd[1714]: time="2025-01-14T13:23:08.165252158Z" level=info msg="StartContainer for \"5f9cb96d72e2dacf64064495b184a93ce23beea01118185a969767f41356f9d8\" returns successfully"
Jan 14 13:23:08.287897 kubelet[2416]: I0114 13:23:08.287854 2416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=35.666153114 podStartE2EDuration="36.287818641s" podCreationTimestamp="2025-01-14 13:22:32 +0000 UTC" firstStartedPulling="2025-01-14 13:23:07.446792443 +0000 UTC m=+71.638464753" lastFinishedPulling="2025-01-14 13:23:08.06845807 +0000 UTC m=+72.260130280" observedRunningTime="2025-01-14 13:23:08.287553929 +0000 UTC m=+72.479226139" watchObservedRunningTime="2025-01-14 13:23:08.287818641 +0000 UTC m=+72.479490951"
Jan 14 13:23:08.636864 systemd-networkd[1456]: cali5ec59c6bf6e: Gained IPv6LL
Jan 14 13:23:08.932725 kubelet[2416]: E0114 13:23:08.930699 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:09.931287 kubelet[2416]: E0114 13:23:09.931226 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:10.932258 kubelet[2416]: E0114 13:23:10.932201 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:11.932845 kubelet[2416]: E0114 13:23:11.932794 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:12.933934 kubelet[2416]: E0114 13:23:12.933878 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:13.935071 kubelet[2416]: E0114 13:23:13.934979 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:14.935700 kubelet[2416]: E0114 13:23:14.935640 2416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"