Jan 14 13:21:20.115958 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 14 13:21:20.115997 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:21:20.116013 kernel: BIOS-provided physical RAM map:
Jan 14 13:21:20.116024 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:21:20.116035 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 13:21:20.116046 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 13:21:20.116059 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 14 13:21:20.116074 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 13:21:20.116086 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 13:21:20.116098 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 13:21:20.116110 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 13:21:20.116122 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 13:21:20.116132 kernel: NX (Execute Disable) protection: active
Jan 14 13:21:20.116145 kernel: APIC: Static calls initialized
Jan 14 13:21:20.116162 kernel: efi: EFI v2.7 by Microsoft
Jan 14 13:21:20.116176 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018
Jan 14 13:21:20.116205 kernel: random: crng init done
Jan 14 13:21:20.116218 kernel: secureboot: Secure boot disabled
Jan 14 13:21:20.116231 kernel: SMBIOS 3.1.0 present.
Jan 14 13:21:20.116244 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 13:21:20.116257 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 13:21:20.116270 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 13:21:20.116283 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 13:21:20.116296 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 13:21:20.116312 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 13:21:20.116325 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 13:21:20.116338 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:21:20.116351 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:21:20.116365 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 13:21:20.116379 kernel: tsc: Detected 2593.907 MHz processor
Jan 14 13:21:20.116392 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:21:20.116406 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:21:20.116419 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 13:21:20.116436 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:21:20.116448 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 13:21:20.116461 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 13:21:20.116474 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 13:21:20.116488 kernel: Using GB pages for direct mapping
Jan 14 13:21:20.116501 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:21:20.116514 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 13:21:20.116534 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:21:20.116551 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:21:20.116564 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 14 13:21:20.116578 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 13:21:20.116592 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:21:20.116606 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:21:20.116621 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:21:20.116638 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:21:20.116653 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:21:20.116667 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:21:20.116681 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:21:20.116695 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 13:21:20.116709 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 13:21:20.116724 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 13:21:20.116739 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 13:21:20.116753 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 13:21:20.116770 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 13:21:20.116784 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 13:21:20.116798 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 13:21:20.116812 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 13:21:20.116826 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 13:21:20.116840 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 13:21:20.116854 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 13:21:20.116869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 13:21:20.116885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 13:21:20.116900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 13:21:20.116914 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 13:21:20.116928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 13:21:20.116942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 13:21:20.116956 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 13:21:20.116970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 13:21:20.116984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 13:21:20.116998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 13:21:20.117015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 13:21:20.117028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 13:21:20.117042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 13:21:20.117056 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 13:21:20.117070 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 13:21:20.117084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 13:21:20.117101 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 13:21:20.117116 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 13:21:20.117130 kernel: Zone ranges:
Jan 14 13:21:20.117147 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:21:20.117160 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 13:21:20.117175 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:21:20.117248 kernel: Movable zone start for each node
Jan 14 13:21:20.117261 kernel: Early memory node ranges
Jan 14 13:21:20.117275 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:21:20.117290 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 13:21:20.117304 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 13:21:20.117318 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:21:20.117335 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 13:21:20.117350 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:21:20.117365 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:21:20.117379 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 13:21:20.117392 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 13:21:20.117406 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 13:21:20.117420 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:21:20.117434 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:21:20.117448 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:21:20.117465 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 13:21:20.117479 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 13:21:20.117492 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 13:21:20.117503 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 13:21:20.117515 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:21:20.117529 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 13:21:20.117544 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 13:21:20.117560 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 13:21:20.117575 kernel: pcpu-alloc: [0] 0 1
Jan 14 13:21:20.117598 kernel: Hyper-V: PV spinlocks enabled
Jan 14 13:21:20.117615 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:21:20.117634 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:21:20.117653 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:21:20.117667 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 13:21:20.117681 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:21:20.117694 kernel: Fallback order for Node 0: 0
Jan 14 13:21:20.117707 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 14 13:21:20.117724 kernel: Policy zone: Normal
Jan 14 13:21:20.117748 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:21:20.117762 kernel: software IO TLB: area num 2.
Jan 14 13:21:20.117780 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved)
Jan 14 13:21:20.117794 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:21:20.117808 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 14 13:21:20.117822 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 13:21:20.117836 kernel: Dynamic Preempt: voluntary
Jan 14 13:21:20.117850 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:21:20.117865 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:21:20.117879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:21:20.117897 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:21:20.117911 kernel: Rude variant of Tasks RCU enabled.
Jan 14 13:21:20.117925 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:21:20.117939 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:21:20.117953 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:21:20.117970 kernel: Using NULL legacy PIC
Jan 14 13:21:20.117983 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 13:21:20.117997 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:21:20.118011 kernel: Console: colour dummy device 80x25
Jan 14 13:21:20.118025 kernel: printk: console [tty1] enabled
Jan 14 13:21:20.118039 kernel: printk: console [ttyS0] enabled
Jan 14 13:21:20.118053 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 13:21:20.118067 kernel: ACPI: Core revision 20230628
Jan 14 13:21:20.118081 kernel: Failed to register legacy timer interrupt
Jan 14 13:21:20.118095 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:21:20.118112 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:21:20.118126 kernel: Hyper-V: Using IPI hypercalls
Jan 14 13:21:20.118140 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 13:21:20.118154 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 13:21:20.118168 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 13:21:20.118181 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 13:21:20.118208 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 13:21:20.118222 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 13:21:20.118236 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Jan 14 13:21:20.118254 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 13:21:20.118268 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 13:21:20.118283 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:21:20.118297 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:21:20.118310 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 13:21:20.118324 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 13:21:20.118338 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 13:21:20.118352 kernel: RETBleed: Vulnerable
Jan 14 13:21:20.118366 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:21:20.118379 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:21:20.118396 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:21:20.118410 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 13:21:20.118423 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:21:20.118437 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:21:20.118451 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:21:20.118465 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 13:21:20.118479 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 13:21:20.118492 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 13:21:20.118506 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 13:21:20.118520 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 14 13:21:20.118534 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 14 13:21:20.118550 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 13:21:20.118564 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 13:21:20.118578 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:21:20.118592 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:21:20.118606 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:21:20.118619 kernel: landlock: Up and running.
Jan 14 13:21:20.118633 kernel: SELinux: Initializing.
Jan 14 13:21:20.118647 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:21:20.118661 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:21:20.118675 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 13:21:20.118689 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:21:20.118706 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:21:20.118721 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:21:20.118735 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 13:21:20.118749 kernel: signal: max sigframe size: 3632
Jan 14 13:21:20.118762 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:21:20.118776 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:21:20.118790 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:21:20.118804 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:21:20.118818 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:21:20.118835 kernel: .... node #0, CPUs: #1
Jan 14 13:21:20.118849 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 13:21:20.118864 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 13:21:20.118878 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:21:20.118892 kernel: smpboot: Max logical packages: 1
Jan 14 13:21:20.118906 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 14 13:21:20.118920 kernel: devtmpfs: initialized
Jan 14 13:21:20.118934 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:21:20.118951 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:21:20.118965 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:21:20.118979 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:21:20.118993 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:21:20.119008 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:21:20.119021 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:21:20.119035 kernel: audit: type=2000 audit(1736860878.029:1): state=initialized audit_enabled=0 res=1
Jan 14 13:21:20.119049 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:21:20.119063 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:21:20.119080 kernel: cpuidle: using governor menu
Jan 14 13:21:20.119094 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:21:20.119107 kernel: dca service started, version 1.12.1
Jan 14 13:21:20.119121 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:21:20.119135 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:21:20.119149 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:21:20.119164 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:21:20.119178 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:21:20.124823 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:21:20.124853 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:21:20.124867 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:21:20.124881 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:21:20.124894 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:21:20.124908 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:21:20.124922 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:21:20.124936 kernel: ACPI: Interpreter enabled
Jan 14 13:21:20.124951 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:21:20.124967 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:21:20.124983 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:21:20.124996 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:21:20.125010 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:21:20.125024 kernel: iommu: Default domain type: Translated
Jan 14 13:21:20.125038 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:21:20.125054 kernel: efivars: Registered efivars operations
Jan 14 13:21:20.125069 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:21:20.125084 kernel: PCI: System does not support PCI
Jan 14 13:21:20.125096 kernel: vgaarb: loaded
Jan 14 13:21:20.125113 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:21:20.125125 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:21:20.125137 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:21:20.125151 kernel: pnp: PnP ACPI init
Jan 14 13:21:20.125166 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:21:20.125181 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:21:20.125207 kernel: NET: Registered PF_INET protocol family
Jan 14 13:21:20.125228 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:21:20.125241 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:21:20.129117 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:21:20.129142 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:21:20.129160 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:21:20.129174 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:21:20.129203 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:21:20.129219 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:21:20.129232 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:21:20.129245 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:21:20.129259 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:21:20.129279 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:21:20.129293 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 13:21:20.129306 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:21:20.129320 kernel: Initialise system trusted keyrings
Jan 14 13:21:20.129333 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:21:20.129347 kernel: Key type asymmetric registered
Jan 14 13:21:20.129360 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:21:20.129373 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:21:20.129388 kernel: io scheduler mq-deadline registered
Jan 14 13:21:20.129405 kernel: io scheduler kyber registered
Jan 14 13:21:20.129418 kernel: io scheduler bfq registered
Jan 14 13:21:20.129433 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:21:20.129447 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:21:20.129461 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:21:20.129475 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:21:20.129489 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:21:20.129699 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:21:20.129821 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:21:19 UTC (1736860879)
Jan 14 13:21:20.129932 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:21:20.129949 kernel: intel_pstate: CPU model not supported
Jan 14 13:21:20.129962 kernel: efifb: probing for efifb
Jan 14 13:21:20.129976 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:21:20.129990 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:21:20.130004 kernel: efifb: scrolling: redraw
Jan 14 13:21:20.130017 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:21:20.130031 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:21:20.130048 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:21:20.130062 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:21:20.130076 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:21:20.130090 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:21:20.130104 kernel: Segment Routing with IPv6
Jan 14 13:21:20.130117 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:21:20.130131 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:21:20.130145 kernel: Key type dns_resolver registered
Jan 14 13:21:20.130159 kernel: IPI shorthand broadcast: enabled
Jan 14 13:21:20.130177 kernel: sched_clock: Marking stable (962003600, 56487400)->(1273384300, -254893300)
Jan 14 13:21:20.130218 kernel: registered taskstats version 1
Jan 14 13:21:20.130232 kernel: Loading compiled-in X.509 certificates
Jan 14 13:21:20.130246 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 14 13:21:20.130260 kernel: Key type .fscrypt registered
Jan 14 13:21:20.130273 kernel: Key type fscrypt-provisioning registered
Jan 14 13:21:20.130287 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:21:20.130301 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:21:20.130318 kernel: ima: No architecture policies found
Jan 14 13:21:20.130332 kernel: clk: Disabling unused clocks
Jan 14 13:21:20.130346 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 14 13:21:20.130360 kernel: Write protecting the kernel read-only data: 36864k
Jan 14 13:21:20.130373 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 14 13:21:20.130387 kernel: Run /init as init process
Jan 14 13:21:20.130400 kernel: with arguments:
Jan 14 13:21:20.130414 kernel: /init
Jan 14 13:21:20.130427 kernel: with environment:
Jan 14 13:21:20.130440 kernel: HOME=/
Jan 14 13:21:20.130455 kernel: TERM=linux
Jan 14 13:21:20.130469 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:21:20.130486 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:21:20.130502 systemd[1]: Detected virtualization microsoft.
Jan 14 13:21:20.130516 systemd[1]: Detected architecture x86-64.
Jan 14 13:21:20.130530 systemd[1]: Running in initrd.
Jan 14 13:21:20.130544 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:21:20.130560 systemd[1]: Hostname set to .
Jan 14 13:21:20.130575 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:21:20.130589 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:21:20.130604 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:21:20.130618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:21:20.130634 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:21:20.130649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:21:20.130663 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:21:20.130680 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:21:20.130696 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:21:20.130711 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:21:20.130725 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:21:20.130739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:21:20.130753 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:21:20.130768 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:21:20.130785 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:21:20.130799 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:21:20.130813 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:21:20.130828 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:21:20.130842 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:21:20.130856 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:21:20.130871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:21:20.130885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:21:20.130900 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:21:20.130917 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:21:20.130931 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:21:20.130945 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:21:20.130959 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:21:20.130973 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:21:20.130988 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:21:20.131002 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:21:20.131017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:21:20.131060 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:21:20.131093 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:21:20.131108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:21:20.131123 systemd-journald[177]: Journal started
Jan 14 13:21:20.131160 systemd-journald[177]: Runtime Journal (/run/log/journal/cbd3cb8f6df74f63a66d486c82e7985e) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:21:20.104143 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:21:20.140232 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:21:20.143674 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:21:20.150105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:21:20.162074 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:21:20.162273 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:21:20.170145 kernel: Bridge firewalling registered
Jan 14 13:21:20.170484 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:21:20.172428 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:21:20.173422 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:21:20.187746 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:21:20.192542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:21:20.196226 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:21:20.200969 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:21:20.201968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:21:20.226447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:21:20.231406 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:21:20.244731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:21:20.253373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:21:20.270370 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:21:20.276597 dracut-cmdline[214]: dracut-dracut-053
Jan 14 13:21:20.276597 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:21:20.327025 systemd-resolved[215]: Positive Trust Anchors:
Jan 14 13:21:20.330123 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:21:20.330203 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:21:20.355983 systemd-resolved[215]: Defaulting to hostname 'linux'.
Jan 14 13:21:20.359596 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:21:20.365426 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:21:20.377209 kernel: SCSI subsystem initialized
Jan 14 13:21:20.387204 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 13:21:20.398209 kernel: iscsi: registered transport (tcp)
Jan 14 13:21:20.420762 kernel: iscsi: registered transport (qla4xxx)
Jan 14 13:21:20.420864 kernel: QLogic iSCSI HBA Driver
Jan 14 13:21:20.457144 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:21:20.466383 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 13:21:20.495501 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 13:21:20.495606 kernel: device-mapper: uevent: version 1.0.3
Jan 14 13:21:20.500214 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 13:21:20.540219 kernel: raid6: avx512x4 gen() 18021 MB/s
Jan 14 13:21:20.559201 kernel: raid6: avx512x2 gen() 18382 MB/s
Jan 14 13:21:20.578198 kernel: raid6: avx512x1 gen() 18162 MB/s
Jan 14 13:21:20.598203 kernel: raid6: avx2x4 gen() 18389 MB/s
Jan 14 13:21:20.617201 kernel: raid6: avx2x2 gen() 18322 MB/s
Jan 14 13:21:20.637239 kernel: raid6: avx2x1 gen() 13265 MB/s
Jan 14 13:21:20.637286 kernel: raid6: using algorithm avx2x4 gen() 18389 MB/s
Jan 14 13:21:20.658715 kernel: raid6: .... xor() 6547 MB/s, rmw enabled
Jan 14 13:21:20.658760 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 13:21:20.682216 kernel: xor: automatically using best checksumming function avx
Jan 14 13:21:20.829217 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 13:21:20.838836 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:21:20.848473 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:21:20.861476 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 14 13:21:20.868095 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:21:20.885356 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 13:21:20.898759 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 14 13:21:20.925623 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:21:20.933458 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:21:20.972732 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:21:20.989847 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 13:21:21.023034 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:21:21.030105 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:21:21.036742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:21:21.042689 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:21:21.053374 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 13:21:21.067451 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 13:21:21.071936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:21:21.075301 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:21:21.082159 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:21:21.085320 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:21:21.085496 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:21:21.088654 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:21:21.112370 kernel: hv_vmbus: Vmbus version:5.2
Jan 14 13:21:21.106640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:21:21.109807 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:21:21.120988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:21:21.122587 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:21:21.133653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:21:21.158212 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 14 13:21:21.158270 kernel: AES CTR mode by8 optimization enabled
Jan 14 13:21:21.163389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:21:21.174362 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:21:21.190499 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 13:21:21.195339 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 13:21:21.195386 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 14 13:21:21.204200 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 14 13:21:21.215433 kernel: PTP clock support registered
Jan 14 13:21:21.215484 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 13:21:21.221235 kernel: scsi host1: storvsc_host_t
Jan 14 13:21:21.221282 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 13:21:21.227564 kernel: scsi host0: storvsc_host_t
Jan 14 13:21:21.227645 kernel: hv_vmbus: registering driver hv_utils
Jan 14 13:21:21.233376 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 14 13:21:21.234905 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 13:21:21.240558 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 13:21:21.240600 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 13:21:21.681811 systemd-resolved[215]: Clock change detected. Flushing caches.
Jan 14 13:21:21.687149 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 14 13:21:21.692416 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:21:21.694589 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 13:21:21.694616 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 13:21:21.711846 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 13:21:21.720842 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 14 13:21:21.727793 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 14 13:21:21.743308 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 14 13:21:21.750474 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 13:21:21.761323 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 13:21:21.761346 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 14 13:21:21.761528 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 14 13:21:21.761704 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 13:21:21.761910 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 14 13:21:21.762077 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 13:21:21.762242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:21:21.762262 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 14 13:21:21.926431 kernel: hv_netvsc 000d3a7f-53c0-000d-3a7f-53c0000d3a7f eth0: VF slot 1 added
Jan 14 13:21:21.934841 kernel: hv_vmbus: registering driver hv_pci
Jan 14 13:21:21.938790 kernel: hv_pci e597a863-7a61-490b-b082-a1f8275076e8: PCI VMBus probing: Using version 0x10004
Jan 14 13:21:21.998446 kernel: hv_pci e597a863-7a61-490b-b082-a1f8275076e8: PCI host bridge to bus 7a61:00
Jan 14 13:21:21.998705 kernel: pci_bus 7a61:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 14 13:21:21.998924 kernel: pci_bus 7a61:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 13:21:21.999078 kernel: pci 7a61:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 14 13:21:21.999287 kernel: pci 7a61:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:21:21.999462 kernel: pci 7a61:00:02.0: enabling Extended Tags
Jan 14 13:21:21.999641 kernel: pci 7a61:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7a61:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 14 13:21:21.999837 kernel: pci_bus 7a61:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 13:21:21.999986 kernel: pci 7a61:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:21:22.178255 kernel: mlx5_core 7a61:00:02.0: enabling device (0000 -> 0002)
Jan 14 13:21:22.423105 kernel: mlx5_core 7a61:00:02.0: firmware version: 14.30.5000
Jan 14 13:21:22.423887 kernel: hv_netvsc 000d3a7f-53c0-000d-3a7f-53c0000d3a7f eth0: VF registering: eth1
Jan 14 13:21:22.424077 kernel: mlx5_core 7a61:00:02.0 eth1: joined to eth0
Jan 14 13:21:22.424312 kernel: mlx5_core 7a61:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 14 13:21:22.349630 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 13:21:22.438819 kernel: mlx5_core 7a61:00:02.0 enP31329s1: renamed from eth1
Jan 14 13:21:22.449316 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (455)
Jan 14 13:21:22.476523 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 13:21:22.492070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:21:22.521791 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (450)
Jan 14 13:21:22.540434 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 13:21:22.547503 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 13:21:22.566320 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 13:21:22.587794 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:21:23.607060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:21:23.607546 disk-uuid[603]: The operation has completed successfully.
Jan 14 13:21:23.696066 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 13:21:23.696222 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 13:21:23.734942 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 13:21:23.740982 sh[689]: Success
Jan 14 13:21:23.773953 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 14 13:21:23.979545 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:21:23.990839 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 13:21:24.001987 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 13:21:24.019795 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 14 13:21:24.019845 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:21:24.025832 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 13:21:24.028647 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 13:21:24.031275 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 13:21:24.376420 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 13:21:24.379506 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 13:21:24.388026 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 13:21:24.394689 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 13:21:24.415035 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:21:24.423478 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:21:24.423554 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:21:24.444802 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:21:24.459398 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 13:21:24.462055 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:21:24.470258 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 13:21:24.483142 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 13:21:24.489160 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:21:24.497578 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:21:24.533621 systemd-networkd[873]: lo: Link UP
Jan 14 13:21:24.533631 systemd-networkd[873]: lo: Gained carrier
Jan 14 13:21:24.535818 systemd-networkd[873]: Enumeration completed
Jan 14 13:21:24.536087 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:21:24.538821 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:21:24.538824 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:21:24.540769 systemd[1]: Reached target network.target - Network.
Jan 14 13:21:24.600813 kernel: mlx5_core 7a61:00:02.0 enP31329s1: Link up
Jan 14 13:21:24.634817 kernel: hv_netvsc 000d3a7f-53c0-000d-3a7f-53c0000d3a7f eth0: Data path switched to VF: enP31329s1
Jan 14 13:21:24.635417 systemd-networkd[873]: enP31329s1: Link UP
Jan 14 13:21:24.635558 systemd-networkd[873]: eth0: Link UP
Jan 14 13:21:24.635727 systemd-networkd[873]: eth0: Gained carrier
Jan 14 13:21:24.635738 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:21:24.642081 systemd-networkd[873]: enP31329s1: Gained carrier
Jan 14 13:21:24.670836 systemd-networkd[873]: eth0: DHCPv4 address 10.200.4.33/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 14 13:21:25.683150 ignition[868]: Ignition 2.20.0
Jan 14 13:21:25.683162 ignition[868]: Stage: fetch-offline
Jan 14 13:21:25.683226 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:25.683237 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:25.683358 ignition[868]: parsed url from cmdline: ""
Jan 14 13:21:25.683363 ignition[868]: no config URL provided
Jan 14 13:21:25.683370 ignition[868]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:21:25.683380 ignition[868]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:21:25.683388 ignition[868]: failed to fetch config: resource requires networking
Jan 14 13:21:25.685364 ignition[868]: Ignition finished successfully
Jan 14 13:21:25.704198 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:21:25.713967 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 13:21:25.731726 ignition[881]: Ignition 2.20.0
Jan 14 13:21:25.731738 ignition[881]: Stage: fetch
Jan 14 13:21:25.731986 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:25.732000 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:25.732121 ignition[881]: parsed url from cmdline: ""
Jan 14 13:21:25.732125 ignition[881]: no config URL provided
Jan 14 13:21:25.732129 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:21:25.732136 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:21:25.732160 ignition[881]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 13:21:25.813433 ignition[881]: GET result: OK
Jan 14 13:21:25.813560 ignition[881]: config has been read from IMDS userdata
Jan 14 13:21:25.813583 ignition[881]: parsing config with SHA512: 008bdab6cdbe33ac97034464d2fb078713cb74c8290842423a498a4550e8dfa371ae65b267241d1e24edd35d31aee6f56b282530beb1486205d8fe757ff97b01
Jan 14 13:21:25.819569 unknown[881]: fetched base config from "system"
Jan 14 13:21:25.820022 ignition[881]: fetch: fetch complete
Jan 14 13:21:25.819577 unknown[881]: fetched base config from "system"
Jan 14 13:21:25.820028 ignition[881]: fetch: fetch passed
Jan 14 13:21:25.819584 unknown[881]: fetched user config from "azure"
Jan 14 13:21:25.820086 ignition[881]: Ignition finished successfully
Jan 14 13:21:25.823056 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 13:21:25.832935 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 13:21:25.847249 ignition[887]: Ignition 2.20.0
Jan 14 13:21:25.850481 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 13:21:25.847255 ignition[887]: Stage: kargs
Jan 14 13:21:25.847506 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:25.847515 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:25.848936 ignition[887]: kargs: kargs passed
Jan 14 13:21:25.862006 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 13:21:25.848985 ignition[887]: Ignition finished successfully
Jan 14 13:21:25.878547 ignition[893]: Ignition 2.20.0
Jan 14 13:21:25.878557 ignition[893]: Stage: disks
Jan 14 13:21:25.880620 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 13:21:25.878821 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:25.878834 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:25.879696 ignition[893]: disks: disks passed
Jan 14 13:21:25.879740 ignition[893]: Ignition finished successfully
Jan 14 13:21:25.894796 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 13:21:25.898095 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 13:21:25.903920 systemd-networkd[873]: enP31329s1: Gained IPv6LL
Jan 14 13:21:25.908810 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:21:25.913889 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:21:25.916436 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:21:25.927920 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 13:21:26.014570 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 14 13:21:26.019509 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 13:21:26.032969 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 13:21:26.124094 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 14 13:21:26.124704 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:21:26.128173 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:21:26.178963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:21:26.185319 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:21:26.194978 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 13:21:26.196706 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (912)
Jan 14 13:21:26.197200 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:21:26.197712 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:21:26.210139 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:21:26.210193 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:21:26.210208 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:21:26.216795 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:21:26.243004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:21:26.249695 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:21:26.260940 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:21:26.286147 systemd-networkd[873]: eth0: Gained IPv6LL
Jan 14 13:21:27.050173 coreos-metadata[914]: Jan 14 13:21:27.050 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:21:27.056447 coreos-metadata[914]: Jan 14 13:21:27.056 INFO Fetch successful
Jan 14 13:21:27.059151 coreos-metadata[914]: Jan 14 13:21:27.056 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:21:27.069481 coreos-metadata[914]: Jan 14 13:21:27.069 INFO Fetch successful
Jan 14 13:21:27.087635 coreos-metadata[914]: Jan 14 13:21:27.087 INFO wrote hostname ci-4152.2.0-a-4236615464 to /sysroot/etc/hostname
Jan 14 13:21:27.090221 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:21:27.095790 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 13:21:27.119606 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory
Jan 14 13:21:27.125520 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 13:21:27.130462 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 13:21:28.139848 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:21:28.157991 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:21:28.163955 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:21:28.176872 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:21:28.182987 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:21:28.205362 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:21:28.212593 ignition[1036]: INFO : Ignition 2.20.0
Jan 14 13:21:28.212593 ignition[1036]: INFO : Stage: mount
Jan 14 13:21:28.219845 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:28.219845 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:28.219845 ignition[1036]: INFO : mount: mount passed
Jan 14 13:21:28.219845 ignition[1036]: INFO : Ignition finished successfully
Jan 14 13:21:28.216912 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:21:28.235146 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:21:28.245006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:21:28.263450 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1047)
Jan 14 13:21:28.263525 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:21:28.266657 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:21:28.269257 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:21:28.274798 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:21:28.276649 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:21:28.298162 ignition[1064]: INFO : Ignition 2.20.0
Jan 14 13:21:28.298162 ignition[1064]: INFO : Stage: files
Jan 14 13:21:28.302544 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:28.302544 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:28.302544 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 13:21:28.315186 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 13:21:28.315186 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:21:28.379293 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:21:28.384504 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 13:21:28.384504 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:21:28.381326 unknown[1064]: wrote ssh authorized keys file for user: core
Jan 14 13:21:28.398856 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 14 13:21:28.403737 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 14 13:21:28.403737 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 13:21:28.403737 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:21:28.425214 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:21:28.430539 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:21:28.430539 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:28.430539 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:28.430539 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:28.430539 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 14 13:21:28.716079 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Jan 14 13:21:29.353845 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:29.353845 ignition[1064]: INFO : files: op(8): [started] processing unit "containerd.service"
Jan 14 13:21:29.370814 ignition[1064]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 14 13:21:29.376560 ignition[1064]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 14 13:21:29.376560 ignition[1064]: INFO : files: op(8): [finished] processing unit "containerd.service"
Jan 14 13:21:29.376560 ignition[1064]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:21:29.376560 ignition[1064]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:21:29.376560 ignition[1064]: INFO : files: files passed
Jan 14 13:21:29.376560 ignition[1064]: INFO : Ignition finished successfully
Jan 14 13:21:29.373250 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:21:29.403951 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:21:29.410334 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:21:29.416609 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:21:29.418878 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:21:29.429856 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:21:29.429856 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:21:29.440911 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:21:29.434847 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:21:29.441393 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:21:29.463051 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:21:29.490731 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:21:29.490886 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:21:29.499951 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:21:29.502590 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:21:29.507593 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:21:29.514998 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:21:29.532289 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:21:29.542932 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:21:29.555020 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:21:29.555183 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:21:29.555725 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:21:29.556802 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:21:29.556942 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:21:29.557608 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:21:29.558053 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:21:29.558457 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:21:29.558876 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:21:29.559274 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:21:29.559693 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:21:29.560102 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:21:29.560536 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:21:29.560950 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:21:29.561349 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:21:29.561767 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:21:29.561904 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:21:29.562643 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:21:29.563487 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:21:29.563857 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 13:21:29.600999 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:21:29.606868 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:21:29.607028 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:21:29.662628 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:21:29.662854 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:21:29.673526 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:21:29.673675 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:21:29.681299 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 13:21:29.681478 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:21:29.700035 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:21:29.702426 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:21:29.704760 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:21:29.708968 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:21:29.719904 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:21:29.720204 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 14 13:21:29.725459 ignition[1117]: INFO : Ignition 2.20.0 Jan 14 13:21:29.725459 ignition[1117]: INFO : Stage: umount Jan 14 13:21:29.725459 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:29.725459 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:29.740955 ignition[1117]: INFO : umount: umount passed Jan 14 13:21:29.740955 ignition[1117]: INFO : Ignition finished successfully Jan 14 13:21:29.732583 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 13:21:29.732914 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:21:29.748888 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 13:21:29.749002 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:21:29.762299 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:21:29.762429 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:21:29.768000 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:21:29.768118 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:21:29.773658 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:21:29.776048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:21:29.781036 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:21:29.781089 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:21:29.785941 systemd[1]: Stopped target network.target - Network. Jan 14 13:21:29.788106 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:21:29.788159 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:21:29.798633 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:21:29.813226 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 14 13:21:29.816110 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:21:29.823183 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:21:29.825672 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:21:29.833071 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:21:29.833136 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:21:29.837749 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:21:29.837805 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:21:29.842727 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:21:29.842809 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:21:29.847328 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:21:29.847384 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:21:29.852564 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:21:29.857422 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:21:29.867823 systemd-networkd[873]: eth0: DHCPv6 lease lost Jan 14 13:21:29.875945 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:21:29.878712 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:21:29.881214 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:21:29.884862 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:21:29.884947 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:21:29.898907 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:21:29.906785 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:21:29.906861 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 14 13:21:29.916583 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:21:29.920179 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:21:29.920300 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:21:29.942091 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:21:29.944590 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:21:29.952605 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:21:29.952672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:21:29.957862 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:21:29.957906 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:21:29.968293 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:21:29.968356 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:21:29.973735 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:21:29.973794 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:21:29.984065 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:21:29.984129 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:21:29.996230 kernel: hv_netvsc 000d3a7f-53c0-000d-3a7f-53c0000d3a7f eth0: Data path switched from VF: enP31329s1 Jan 14 13:21:29.998987 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:21:30.004754 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:21:30.004873 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:21:30.013518 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jan 14 13:21:30.013576 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:21:30.024427 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:21:30.024491 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:21:30.034711 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:21:30.035170 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:21:30.040636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:21:30.040692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:30.049159 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:21:30.049256 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:21:30.059808 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:21:30.063170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:21:30.112613 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:21:30.112792 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:21:30.123658 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:21:30.126505 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:21:30.126573 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:21:30.140975 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:21:30.329226 systemd[1]: Switching root. 
Jan 14 13:21:30.361176 systemd-journald[177]: Journal stopped Jan 14 13:21:20.116218 kernel: secureboot: Secure boot disabled Jan 14 13:21:20.116231 kernel: SMBIOS 3.1.0 present. Jan 14 13:21:20.116244 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 14 13:21:20.116257 kernel: Hypervisor detected: Microsoft Hyper-V Jan 14 13:21:20.116270 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 14 13:21:20.116283 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jan 14 13:21:20.116296 kernel: Hyper-V: Nested features: 0x1e0101 Jan 14 13:21:20.116312 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 14 13:21:20.116325 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 14 13:21:20.116338 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:21:20.116351 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:21:20.116365 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 14 13:21:20.116379 kernel: tsc: Detected 2593.907 MHz processor Jan 14 13:21:20.116392 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 13:21:20.116406 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 13:21:20.116419 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 14 13:21:20.116436 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 13:21:20.116448 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 13:21:20.116461 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 14 13:21:20.116474 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 14 13:21:20.116488 kernel: Using GB pages for direct mapping Jan 14 13:21:20.116501 kernel: ACPI: Early table checksum verification disabled Jan 14 13:21:20.116514 kernel: ACPI: 
RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 14 13:21:20.116534 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.116551 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.116564 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 14 13:21:20.116578 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 13:21:20.116592 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.116606 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.116621 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.116638 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.116653 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.116667 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.116681 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:21:20.116695 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 13:21:20.116709 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 14 13:21:20.116724 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 13:21:20.116739 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14 13:21:20.116753 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 13:21:20.116770 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 13:21:20.116784 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 13:21:20.116798 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Jan 14 13:21:20.116812 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 13:21:20.116826 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 14 13:21:20.116840 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 14 13:21:20.116854 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 14 13:21:20.116869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 13:21:20.116885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 14 13:21:20.116900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 14 13:21:20.116914 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 13:21:20.116928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 13:21:20.116942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 13:21:20.116956 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 13:21:20.116970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 13:21:20.116984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 13:21:20.116998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 13:21:20.117015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 13:21:20.117028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 13:21:20.117042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 14 13:21:20.117056 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 14 13:21:20.117070 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 14 13:21:20.117084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 14 13:21:20.117101 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 14 13:21:20.117116 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 14 13:21:20.117130 kernel: Zone ranges: Jan 14 13:21:20.117147 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 13:21:20.117160 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 13:21:20.117175 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:21:20.117248 kernel: Movable zone start for each node Jan 14 13:21:20.117261 kernel: Early memory node ranges Jan 14 13:21:20.117275 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 13:21:20.117290 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 14 13:21:20.117304 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 13:21:20.117318 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:21:20.117335 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 13:21:20.117350 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 13:21:20.117365 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 13:21:20.117379 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 14 13:21:20.117392 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 13:21:20.117406 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 13:21:20.117420 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 14 13:21:20.117434 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 13:21:20.117448 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 13:21:20.117465 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 13:21:20.117479 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 14 13:21:20.117492 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 13:21:20.117503 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 13:21:20.117515 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 13:21:20.117529 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 13:21:20.117544 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 14 13:21:20.117560 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 14 13:21:20.117575 kernel: pcpu-alloc: [0] 0 1 Jan 14 13:21:20.117598 kernel: Hyper-V: PV spinlocks enabled Jan 14 13:21:20.117615 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 13:21:20.117634 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:21:20.117653 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 14 13:21:20.117667 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 13:21:20.117681 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 13:21:20.117694 kernel: Fallback order for Node 0: 0 Jan 14 13:21:20.117707 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 14 13:21:20.117724 kernel: Policy zone: Normal Jan 14 13:21:20.117748 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 13:21:20.117762 kernel: software IO TLB: area num 2. 
Jan 14 13:21:20.117780 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved) Jan 14 13:21:20.117794 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 13:21:20.117808 kernel: ftrace: allocating 37920 entries in 149 pages Jan 14 13:21:20.117822 kernel: ftrace: allocated 149 pages with 4 groups Jan 14 13:21:20.117836 kernel: Dynamic Preempt: voluntary Jan 14 13:21:20.117850 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 13:21:20.117865 kernel: rcu: RCU event tracing is enabled. Jan 14 13:21:20.117879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 13:21:20.117897 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 13:21:20.117911 kernel: Rude variant of Tasks RCU enabled. Jan 14 13:21:20.117925 kernel: Tracing variant of Tasks RCU enabled. Jan 14 13:21:20.117939 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 13:21:20.117953 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 13:21:20.117970 kernel: Using NULL legacy PIC Jan 14 13:21:20.117983 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 13:21:20.117997 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 14 13:21:20.118011 kernel: Console: colour dummy device 80x25 Jan 14 13:21:20.118025 kernel: printk: console [tty1] enabled Jan 14 13:21:20.118039 kernel: printk: console [ttyS0] enabled Jan 14 13:21:20.118053 kernel: printk: bootconsole [earlyser0] disabled Jan 14 13:21:20.118067 kernel: ACPI: Core revision 20230628 Jan 14 13:21:20.118081 kernel: Failed to register legacy timer interrupt Jan 14 13:21:20.118095 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 13:21:20.118112 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 13:21:20.118126 kernel: Hyper-V: Using IPI hypercalls Jan 14 13:21:20.118140 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 13:21:20.118154 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 13:21:20.118168 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 13:21:20.118181 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 13:21:20.118208 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 13:21:20.118222 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 13:21:20.118236 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jan 14 13:21:20.118254 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 14 13:21:20.118268 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 14 13:21:20.118283 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 13:21:20.118297 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 13:21:20.118310 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 14 13:21:20.118324 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 14 13:21:20.118338 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 14 13:21:20.118352 kernel: RETBleed: Vulnerable Jan 14 13:21:20.118366 kernel: Speculative Store Bypass: Vulnerable Jan 14 13:21:20.118379 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:21:20.118396 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:21:20.118410 kernel: GDS: Unknown: Dependent on hypervisor status Jan 14 13:21:20.118423 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 13:21:20.118437 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 13:21:20.118451 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 13:21:20.118465 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 13:21:20.118479 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 13:21:20.118492 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 13:21:20.118506 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 13:21:20.118520 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 13:21:20.118534 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 13:21:20.118550 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 13:21:20.118564 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 14 13:21:20.118578 kernel: Freeing SMP alternatives memory: 32K Jan 14 13:21:20.118592 kernel: pid_max: default: 32768 minimum: 301 Jan 14 13:21:20.118606 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 14 13:21:20.118619 kernel: landlock: Up and running. Jan 14 13:21:20.118633 kernel: SELinux: Initializing. 
Jan 14 13:21:20.118647 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:21:20.118661 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:21:20.118675 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 14 13:21:20.118689 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:21:20.118706 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:21:20.118721 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:21:20.118735 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 14 13:21:20.118749 kernel: signal: max sigframe size: 3632 Jan 14 13:21:20.118762 kernel: rcu: Hierarchical SRCU implementation. Jan 14 13:21:20.118776 kernel: rcu: Max phase no-delay instances is 400. Jan 14 13:21:20.118790 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 13:21:20.118804 kernel: smp: Bringing up secondary CPUs ... Jan 14 13:21:20.118818 kernel: smpboot: x86: Booting SMP configuration: Jan 14 13:21:20.118835 kernel: .... node #0, CPUs: #1 Jan 14 13:21:20.118849 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 14 13:21:20.118864 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 14 13:21:20.118878 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 13:21:20.118892 kernel: smpboot: Max logical packages: 1 Jan 14 13:21:20.118906 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 14 13:21:20.118920 kernel: devtmpfs: initialized Jan 14 13:21:20.118934 kernel: x86/mm: Memory block size: 128MB Jan 14 13:21:20.118951 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 13:21:20.118965 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 13:21:20.118979 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 13:21:20.118993 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 13:21:20.119008 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 13:21:20.119021 kernel: audit: initializing netlink subsys (disabled) Jan 14 13:21:20.119035 kernel: audit: type=2000 audit(1736860878.029:1): state=initialized audit_enabled=0 res=1 Jan 14 13:21:20.119049 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 13:21:20.119063 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 13:21:20.119080 kernel: cpuidle: using governor menu Jan 14 13:21:20.119094 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 13:21:20.119107 kernel: dca service started, version 1.12.1 Jan 14 13:21:20.119121 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 14 13:21:20.119135 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 13:21:20.119149 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 13:21:20.119164 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 13:21:20.119178 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 13:21:20.124823 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 13:21:20.124853 kernel: ACPI: Added _OSI(Module Device) Jan 14 13:21:20.124867 kernel: ACPI: Added _OSI(Processor Device) Jan 14 13:21:20.124881 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 14 13:21:20.124894 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 13:21:20.124908 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 13:21:20.124922 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 14 13:21:20.124936 kernel: ACPI: Interpreter enabled Jan 14 13:21:20.124951 kernel: ACPI: PM: (supports S0 S5) Jan 14 13:21:20.124967 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 13:21:20.124983 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 13:21:20.124996 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 13:21:20.125010 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 14 13:21:20.125024 kernel: iommu: Default domain type: Translated Jan 14 13:21:20.125038 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 13:21:20.125054 kernel: efivars: Registered efivars operations Jan 14 13:21:20.125069 kernel: PCI: Using ACPI for IRQ routing Jan 14 13:21:20.125084 kernel: PCI: System does not support PCI Jan 14 13:21:20.125096 kernel: vgaarb: loaded Jan 14 13:21:20.125113 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 14 13:21:20.125125 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 13:21:20.125137 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 13:21:20.125151 kernel: pnp: PnP ACPI init Jan 14 13:21:20.125166 
kernel: pnp: PnP ACPI: found 3 devices Jan 14 13:21:20.125181 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 13:21:20.125207 kernel: NET: Registered PF_INET protocol family Jan 14 13:21:20.125228 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 13:21:20.125241 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 13:21:20.129117 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 13:21:20.129142 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 13:21:20.129160 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 13:21:20.129174 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 13:21:20.129203 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:21:20.129219 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:21:20.129232 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 13:21:20.129245 kernel: NET: Registered PF_XDP protocol family Jan 14 13:21:20.129259 kernel: PCI: CLS 0 bytes, default 64 Jan 14 13:21:20.129279 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 13:21:20.129293 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jan 14 13:21:20.129306 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 13:21:20.129320 kernel: Initialise system trusted keyrings Jan 14 13:21:20.129333 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 13:21:20.129347 kernel: Key type asymmetric registered Jan 14 13:21:20.129360 kernel: Asymmetric key parser 'x509' registered Jan 14 13:21:20.129373 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 14 13:21:20.129388 kernel: io scheduler mq-deadline 
registered Jan 14 13:21:20.129405 kernel: io scheduler kyber registered Jan 14 13:21:20.129418 kernel: io scheduler bfq registered Jan 14 13:21:20.129433 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 13:21:20.129447 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 13:21:20.129461 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 13:21:20.129475 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 13:21:20.129489 kernel: i8042: PNP: No PS/2 controller found. Jan 14 13:21:20.129699 kernel: rtc_cmos 00:02: registered as rtc0 Jan 14 13:21:20.129821 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:21:19 UTC (1736860879) Jan 14 13:21:20.129932 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 14 13:21:20.129949 kernel: intel_pstate: CPU model not supported Jan 14 13:21:20.129962 kernel: efifb: probing for efifb Jan 14 13:21:20.129976 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 13:21:20.129990 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 13:21:20.130004 kernel: efifb: scrolling: redraw Jan 14 13:21:20.130017 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 13:21:20.130031 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:21:20.130048 kernel: fb0: EFI VGA frame buffer device Jan 14 13:21:20.130062 kernel: pstore: Using crash dump compression: deflate Jan 14 13:21:20.130076 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 13:21:20.130090 kernel: NET: Registered PF_INET6 protocol family Jan 14 13:21:20.130104 kernel: Segment Routing with IPv6 Jan 14 13:21:20.130117 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 13:21:20.130131 kernel: NET: Registered PF_PACKET protocol family Jan 14 13:21:20.130145 kernel: Key type dns_resolver registered Jan 14 13:21:20.130159 kernel: IPI shorthand broadcast: enabled Jan 14 13:21:20.130177 kernel: 
sched_clock: Marking stable (962003600, 56487400)->(1273384300, -254893300) Jan 14 13:21:20.130218 kernel: registered taskstats version 1 Jan 14 13:21:20.130232 kernel: Loading compiled-in X.509 certificates Jan 14 13:21:20.130246 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 14 13:21:20.130260 kernel: Key type .fscrypt registered Jan 14 13:21:20.130273 kernel: Key type fscrypt-provisioning registered Jan 14 13:21:20.130287 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 13:21:20.130301 kernel: ima: Allocated hash algorithm: sha1 Jan 14 13:21:20.130318 kernel: ima: No architecture policies found Jan 14 13:21:20.130332 kernel: clk: Disabling unused clocks Jan 14 13:21:20.130346 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 14 13:21:20.130360 kernel: Write protecting the kernel read-only data: 36864k Jan 14 13:21:20.130373 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 14 13:21:20.130387 kernel: Run /init as init process Jan 14 13:21:20.130400 kernel: with arguments: Jan 14 13:21:20.130414 kernel: /init Jan 14 13:21:20.130427 kernel: with environment: Jan 14 13:21:20.130440 kernel: HOME=/ Jan 14 13:21:20.130455 kernel: TERM=linux Jan 14 13:21:20.130469 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 14 13:21:20.130486 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:21:20.130502 systemd[1]: Detected virtualization microsoft. Jan 14 13:21:20.130516 systemd[1]: Detected architecture x86-64. Jan 14 13:21:20.130530 systemd[1]: Running in initrd. Jan 14 13:21:20.130544 systemd[1]: No hostname configured, using default hostname. 
Jan 14 13:21:20.130560 systemd[1]: Hostname set to . Jan 14 13:21:20.130575 systemd[1]: Initializing machine ID from random generator. Jan 14 13:21:20.130589 systemd[1]: Queued start job for default target initrd.target. Jan 14 13:21:20.130604 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:21:20.130618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:21:20.130634 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 13:21:20.130649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:21:20.130663 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 13:21:20.130680 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 13:21:20.130696 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 14 13:21:20.130711 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 14 13:21:20.130725 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:21:20.130739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:21:20.130753 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:21:20.130768 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:21:20.130785 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:21:20.130799 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:21:20.130813 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:21:20.130828 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 14 13:21:20.130842 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 13:21:20.130856 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 14 13:21:20.130871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:21:20.130885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:21:20.130900 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:21:20.130917 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:21:20.130931 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 13:21:20.130945 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:21:20.130959 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 13:21:20.130973 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 13:21:20.130988 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:21:20.131002 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:21:20.131017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:21:20.131060 systemd-journald[177]: Collecting audit messages is disabled. Jan 14 13:21:20.131093 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 13:21:20.131108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:21:20.131123 systemd-journald[177]: Journal started Jan 14 13:21:20.131160 systemd-journald[177]: Runtime Journal (/run/log/journal/cbd3cb8f6df74f63a66d486c82e7985e) is 8.0M, max 158.8M, 150.8M free. Jan 14 13:21:20.104143 systemd-modules-load[178]: Inserted module 'overlay' Jan 14 13:21:20.140232 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:21:20.143674 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 14 13:21:20.150105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:20.162074 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 13:21:20.162273 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:21:20.170145 kernel: Bridge firewalling registered Jan 14 13:21:20.170484 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 14 13:21:20.172428 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:21:20.173422 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:21:20.187746 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:21:20.192542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:21:20.196226 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:21:20.200969 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:21:20.201968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:21:20.226447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:21:20.231406 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:21:20.244731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:21:20.253373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 13:21:20.270370 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 14 13:21:20.276597 dracut-cmdline[214]: dracut-dracut-053 Jan 14 13:21:20.276597 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:21:20.327025 systemd-resolved[215]: Positive Trust Anchors: Jan 14 13:21:20.330123 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:21:20.330203 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:21:20.355983 systemd-resolved[215]: Defaulting to hostname 'linux'. Jan 14 13:21:20.359596 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:21:20.365426 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:21:20.377209 kernel: SCSI subsystem initialized Jan 14 13:21:20.387204 kernel: Loading iSCSI transport class v2.0-870. 
Jan 14 13:21:20.398209 kernel: iscsi: registered transport (tcp) Jan 14 13:21:20.420762 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:21:20.420864 kernel: QLogic iSCSI HBA Driver Jan 14 13:21:20.457144 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:21:20.466383 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:21:20.495501 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:21:20.495606 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:21:20.500214 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 13:21:20.540219 kernel: raid6: avx512x4 gen() 18021 MB/s Jan 14 13:21:20.559201 kernel: raid6: avx512x2 gen() 18382 MB/s Jan 14 13:21:20.578198 kernel: raid6: avx512x1 gen() 18162 MB/s Jan 14 13:21:20.598203 kernel: raid6: avx2x4 gen() 18389 MB/s Jan 14 13:21:20.617201 kernel: raid6: avx2x2 gen() 18322 MB/s Jan 14 13:21:20.637239 kernel: raid6: avx2x1 gen() 13265 MB/s Jan 14 13:21:20.637286 kernel: raid6: using algorithm avx2x4 gen() 18389 MB/s Jan 14 13:21:20.658715 kernel: raid6: .... xor() 6547 MB/s, rmw enabled Jan 14 13:21:20.658760 kernel: raid6: using avx512x2 recovery algorithm Jan 14 13:21:20.682216 kernel: xor: automatically using best checksumming function avx Jan 14 13:21:20.829217 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:21:20.838836 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:21:20.848473 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:21:20.861476 systemd-udevd[398]: Using default interface naming scheme 'v255'. Jan 14 13:21:20.868095 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:21:20.885356 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 14 13:21:20.898759 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jan 14 13:21:20.925623 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:21:20.933458 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:21:20.972732 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:21:20.989847 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:21:21.023034 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:21:21.030105 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:21:21.036742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:21:21.042689 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:21:21.053374 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:21:21.067451 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:21:21.071936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:21:21.075301 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:21:21.082159 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:21:21.085320 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:21:21.085496 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:21.088654 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:21:21.112370 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 13:21:21.106640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:21:21.109807 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 14 13:21:21.120988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:21:21.122587 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:21.133653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:21:21.158212 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 13:21:21.158270 kernel: AES CTR mode by8 optimization enabled Jan 14 13:21:21.163389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:21:21.174362 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:21:21.190499 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:21:21.195339 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:21:21.195386 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 13:21:21.204200 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 14 13:21:21.215433 kernel: PTP clock support registered Jan 14 13:21:21.215484 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:21:21.221235 kernel: scsi host1: storvsc_host_t Jan 14 13:21:21.221282 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:21:21.227564 kernel: scsi host0: storvsc_host_t Jan 14 13:21:21.227645 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:21:21.233376 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:21:21.234905 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:21:21.240558 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:21:21.240600 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:21:21.681811 systemd-resolved[215]: Clock change detected. Flushing caches. 
Jan 14 13:21:21.687149 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:21:21.692416 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:21:21.694589 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:21:21.694616 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:21:21.711846 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:21:21.720842 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 14 13:21:21.727793 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:21:21.743308 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 13:21:21.750474 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:21:21.761323 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:21:21.761346 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:21:21.761528 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 13:21:21.761704 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:21:21.761910 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:21:21.762077 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:21:21.762242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:21:21.762262 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 13:21:21.926431 kernel: hv_netvsc 000d3a7f-53c0-000d-3a7f-53c0000d3a7f eth0: VF slot 1 added Jan 14 13:21:21.934841 kernel: hv_vmbus: registering driver hv_pci Jan 14 13:21:21.938790 kernel: hv_pci e597a863-7a61-490b-b082-a1f8275076e8: PCI VMBus probing: Using version 0x10004 Jan 14 13:21:21.998446 kernel: hv_pci e597a863-7a61-490b-b082-a1f8275076e8: PCI host bridge to bus 7a61:00 Jan 14 13:21:21.998705 kernel: pci_bus 7a61:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 13:21:21.998924 kernel: pci_bus 7a61:00: 
No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:21:21.999078 kernel: pci 7a61:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 13:21:21.999287 kernel: pci 7a61:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:21:21.999462 kernel: pci 7a61:00:02.0: enabling Extended Tags Jan 14 13:21:21.999641 kernel: pci 7a61:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7a61:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 13:21:21.999837 kernel: pci_bus 7a61:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:21:21.999986 kernel: pci 7a61:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:21:22.178255 kernel: mlx5_core 7a61:00:02.0: enabling device (0000 -> 0002) Jan 14 13:21:22.423105 kernel: mlx5_core 7a61:00:02.0: firmware version: 14.30.5000 Jan 14 13:21:22.423887 kernel: hv_netvsc 000d3a7f-53c0-000d-3a7f-53c0000d3a7f eth0: VF registering: eth1 Jan 14 13:21:22.424077 kernel: mlx5_core 7a61:00:02.0 eth1: joined to eth0 Jan 14 13:21:22.424312 kernel: mlx5_core 7a61:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 13:21:22.349630 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 13:21:22.438819 kernel: mlx5_core 7a61:00:02.0 enP31329s1: renamed from eth1 Jan 14 13:21:22.449316 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (455) Jan 14 13:21:22.476523 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 13:21:22.492070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:21:22.521791 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (450) Jan 14 13:21:22.540434 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
Jan 14 13:21:22.547503 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:21:22.566320 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:21:22.587794 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:21:23.607060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:21:23.607546 disk-uuid[603]: The operation has completed successfully. Jan 14 13:21:23.696066 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:21:23.696222 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:21:23.734942 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 14 13:21:23.740982 sh[689]: Success Jan 14 13:21:23.773953 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 13:21:23.979545 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:21:23.990839 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 14 13:21:24.001987 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 14 13:21:24.019795 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 14 13:21:24.019845 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:21:24.025832 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:21:24.028647 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:21:24.031275 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:21:24.376420 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:21:24.379506 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:21:24.388026 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 14 13:21:24.394689 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:21:24.415035 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:24.423478 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:21:24.423554 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:21:24.444802 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:21:24.459398 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 14 13:21:24.462055 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:24.470258 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:21:24.483142 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:21:24.489160 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:21:24.497578 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:21:24.533621 systemd-networkd[873]: lo: Link UP Jan 14 13:21:24.533631 systemd-networkd[873]: lo: Gained carrier Jan 14 13:21:24.535818 systemd-networkd[873]: Enumeration completed Jan 14 13:21:24.536087 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:21:24.538821 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:21:24.538824 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:21:24.540769 systemd[1]: Reached target network.target - Network. 
Jan 14 13:21:24.600813 kernel: mlx5_core 7a61:00:02.0 enP31329s1: Link up Jan 14 13:21:24.634817 kernel: hv_netvsc 000d3a7f-53c0-000d-3a7f-53c0000d3a7f eth0: Data path switched to VF: enP31329s1 Jan 14 13:21:24.635417 systemd-networkd[873]: enP31329s1: Link UP Jan 14 13:21:24.635558 systemd-networkd[873]: eth0: Link UP Jan 14 13:21:24.635727 systemd-networkd[873]: eth0: Gained carrier Jan 14 13:21:24.635738 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:21:24.642081 systemd-networkd[873]: enP31329s1: Gained carrier Jan 14 13:21:24.670836 systemd-networkd[873]: eth0: DHCPv4 address 10.200.4.33/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:21:25.683150 ignition[868]: Ignition 2.20.0 Jan 14 13:21:25.683162 ignition[868]: Stage: fetch-offline Jan 14 13:21:25.683226 ignition[868]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:25.683237 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:25.683358 ignition[868]: parsed url from cmdline: "" Jan 14 13:21:25.683363 ignition[868]: no config URL provided Jan 14 13:21:25.683370 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:21:25.683380 ignition[868]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:21:25.683388 ignition[868]: failed to fetch config: resource requires networking Jan 14 13:21:25.685364 ignition[868]: Ignition finished successfully Jan 14 13:21:25.704198 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:21:25.713967 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:21:25.731726 ignition[881]: Ignition 2.20.0 Jan 14 13:21:25.731738 ignition[881]: Stage: fetch Jan 14 13:21:25.731986 ignition[881]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:25.732000 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:25.732121 ignition[881]: parsed url from cmdline: "" Jan 14 13:21:25.732125 ignition[881]: no config URL provided Jan 14 13:21:25.732129 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:21:25.732136 ignition[881]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:21:25.732160 ignition[881]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:21:25.813433 ignition[881]: GET result: OK Jan 14 13:21:25.813560 ignition[881]: config has been read from IMDS userdata Jan 14 13:21:25.813583 ignition[881]: parsing config with SHA512: 008bdab6cdbe33ac97034464d2fb078713cb74c8290842423a498a4550e8dfa371ae65b267241d1e24edd35d31aee6f56b282530beb1486205d8fe757ff97b01 Jan 14 13:21:25.819569 unknown[881]: fetched base config from "system" Jan 14 13:21:25.820022 ignition[881]: fetch: fetch complete Jan 14 13:21:25.819577 unknown[881]: fetched base config from "system" Jan 14 13:21:25.820028 ignition[881]: fetch: fetch passed Jan 14 13:21:25.819584 unknown[881]: fetched user config from "azure" Jan 14 13:21:25.820086 ignition[881]: Ignition finished successfully Jan 14 13:21:25.823056 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:21:25.832935 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 13:21:25.847249 ignition[887]: Ignition 2.20.0 Jan 14 13:21:25.850481 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 14 13:21:25.847255 ignition[887]: Stage: kargs Jan 14 13:21:25.847506 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:25.847515 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:25.848936 ignition[887]: kargs: kargs passed Jan 14 13:21:25.862006 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:21:25.848985 ignition[887]: Ignition finished successfully Jan 14 13:21:25.878547 ignition[893]: Ignition 2.20.0 Jan 14 13:21:25.878557 ignition[893]: Stage: disks Jan 14 13:21:25.880620 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:21:25.878821 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:21:25.878834 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:21:25.879696 ignition[893]: disks: disks passed Jan 14 13:21:25.879740 ignition[893]: Ignition finished successfully Jan 14 13:21:25.894796 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:21:25.898095 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:21:25.903920 systemd-networkd[873]: enP31329s1: Gained IPv6LL Jan 14 13:21:25.908810 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:21:25.913889 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:21:25.916436 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:21:25.927920 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:21:26.014570 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:21:26.019509 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:21:26.032969 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 14 13:21:26.124094 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 14 13:21:26.124704 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 13:21:26.128173 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:21:26.178963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:21:26.185319 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:21:26.194978 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 13:21:26.196706 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (912) Jan 14 13:21:26.197200 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:21:26.197712 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:21:26.210139 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:26.210193 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:21:26.210208 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:21:26.216795 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:21:26.243004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:21:26.249695 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 13:21:26.260940 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 14 13:21:26.286147 systemd-networkd[873]: eth0: Gained IPv6LL Jan 14 13:21:27.050173 coreos-metadata[914]: Jan 14 13:21:27.050 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:21:27.056447 coreos-metadata[914]: Jan 14 13:21:27.056 INFO Fetch successful Jan 14 13:21:27.059151 coreos-metadata[914]: Jan 14 13:21:27.056 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:21:27.069481 coreos-metadata[914]: Jan 14 13:21:27.069 INFO Fetch successful Jan 14 13:21:27.087635 coreos-metadata[914]: Jan 14 13:21:27.087 INFO wrote hostname ci-4152.2.0-a-4236615464 to /sysroot/etc/hostname Jan 14 13:21:27.090221 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:21:27.095790 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 13:21:27.119606 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory Jan 14 13:21:27.125520 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 13:21:27.130462 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 13:21:28.139848 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:21:28.157991 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:21:28.163955 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:21:28.176872 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 13:21:28.182987 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:21:28.205362 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 14 13:21:28.212593 ignition[1036]: INFO : Ignition 2.20.0
Jan 14 13:21:28.212593 ignition[1036]: INFO : Stage: mount
Jan 14 13:21:28.219845 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:28.219845 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:28.219845 ignition[1036]: INFO : mount: mount passed
Jan 14 13:21:28.219845 ignition[1036]: INFO : Ignition finished successfully
Jan 14 13:21:28.216912 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:21:28.235146 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:21:28.245006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:21:28.263450 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1047)
Jan 14 13:21:28.263525 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:21:28.266657 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:21:28.269257 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:21:28.274798 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:21:28.276649 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:21:28.298162 ignition[1064]: INFO : Ignition 2.20.0
Jan 14 13:21:28.298162 ignition[1064]: INFO : Stage: files
Jan 14 13:21:28.302544 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:28.302544 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:28.302544 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 13:21:28.315186 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 13:21:28.315186 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:21:28.379293 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:21:28.384504 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 13:21:28.384504 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:21:28.381326 unknown[1064]: wrote ssh authorized keys file for user: core
Jan 14 13:21:28.398856 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 14 13:21:28.403737 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 14 13:21:28.403737 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 13:21:28.403737 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:21:28.425214 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:21:28.430539 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:21:28.430539 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:28.430539 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:28.430539 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:28.430539 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 14 13:21:28.716079 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Jan 14 13:21:29.353845 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:21:29.353845 ignition[1064]: INFO : files: op(8): [started] processing unit "containerd.service"
Jan 14 13:21:29.370814 ignition[1064]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 14 13:21:29.376560 ignition[1064]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 14 13:21:29.376560 ignition[1064]: INFO : files: op(8): [finished] processing unit "containerd.service"
Jan 14 13:21:29.376560 ignition[1064]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:21:29.376560 ignition[1064]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:21:29.376560 ignition[1064]: INFO : files: files passed
Jan 14 13:21:29.376560 ignition[1064]: INFO : Ignition finished successfully
Jan 14 13:21:29.373250 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:21:29.403951 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:21:29.410334 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:21:29.416609 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:21:29.418878 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:21:29.429856 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:21:29.429856 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:21:29.440911 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:21:29.434847 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:21:29.441393 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:21:29.463051 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:21:29.490731 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:21:29.490886 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:21:29.499951 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:21:29.502590 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:21:29.507593 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:21:29.514998 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:21:29.532289 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:21:29.542932 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:21:29.555020 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:21:29.555183 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:21:29.555725 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:21:29.556802 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:21:29.556942 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:21:29.557608 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:21:29.558053 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:21:29.558457 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:21:29.558876 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:21:29.559274 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:21:29.559693 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:21:29.560102 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:21:29.560536 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:21:29.560950 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:21:29.561349 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:21:29.561767 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:21:29.561904 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:21:29.562643 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:21:29.563487 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:21:29.563857 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 13:21:29.600999 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:21:29.606868 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 13:21:29.607028 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:21:29.662628 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 13:21:29.662854 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:21:29.673526 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:21:29.673675 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:21:29.681299 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 14 13:21:29.681478 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:21:29.700035 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:21:29.702426 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:21:29.704760 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:21:29.708968 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:21:29.719904 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:21:29.720204 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:21:29.725459 ignition[1117]: INFO : Ignition 2.20.0
Jan 14 13:21:29.725459 ignition[1117]: INFO : Stage: umount
Jan 14 13:21:29.725459 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:21:29.725459 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:21:29.740955 ignition[1117]: INFO : umount: umount passed
Jan 14 13:21:29.740955 ignition[1117]: INFO : Ignition finished successfully
Jan 14 13:21:29.732583 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:21:29.732914 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:21:29.748888 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:21:29.749002 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:21:29.762299 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:21:29.762429 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:21:29.768000 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:21:29.768118 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:21:29.773658 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 13:21:29.776048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 13:21:29.781036 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 13:21:29.781089 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 13:21:29.785941 systemd[1]: Stopped target network.target - Network.
Jan 14 13:21:29.788106 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:21:29.788159 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:21:29.798633 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:21:29.813226 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:21:29.816110 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:21:29.823183 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:21:29.825672 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:21:29.833071 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:21:29.833136 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:21:29.837749 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:21:29.837805 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:21:29.842727 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 13:21:29.842809 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 13:21:29.847328 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 13:21:29.847384 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 13:21:29.852564 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 13:21:29.857422 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 13:21:29.867823 systemd-networkd[873]: eth0: DHCPv6 lease lost
Jan 14 13:21:29.875945 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 13:21:29.878712 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 13:21:29.881214 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 13:21:29.884862 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 13:21:29.884947 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:21:29.898907 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 13:21:29.906785 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 13:21:29.906861 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:21:29.916583 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:21:29.920179 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 13:21:29.920300 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 13:21:29.942091 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 13:21:29.944590 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:21:29.952605 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 13:21:29.952672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:21:29.957862 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 13:21:29.957906 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:21:29.968293 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 13:21:29.968356 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:21:29.973735 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 13:21:29.973794 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:21:29.984065 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:21:29.984129 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:21:29.996230 kernel: hv_netvsc 000d3a7f-53c0-000d-3a7f-53c0000d3a7f eth0: Data path switched from VF: enP31329s1
Jan 14 13:21:29.998987 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 13:21:30.004754 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:21:30.004873 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:21:30.013518 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 13:21:30.013576 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:21:30.024427 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 13:21:30.024491 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:21:30.034711 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:21:30.035170 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:21:30.040636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:21:30.040692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:21:30.049159 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 13:21:30.049256 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 13:21:30.059808 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 13:21:30.063170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 13:21:30.112613 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 13:21:30.112792 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 13:21:30.123658 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 13:21:30.126505 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 13:21:30.126573 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 13:21:30.140975 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 13:21:30.329226 systemd[1]: Switching root.
Jan 14 13:21:30.361176 systemd-journald[177]: Journal stopped
Jan 14 13:21:36.096322 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 14 13:21:36.096354 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 13:21:36.096366 kernel: SELinux: policy capability open_perms=1
Jan 14 13:21:36.096376 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 13:21:36.096384 kernel: SELinux: policy capability always_check_network=0
Jan 14 13:21:36.096395 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 13:21:36.096404 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 13:21:36.096417 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 13:21:36.096425 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 13:21:36.096435 kernel: audit: type=1403 audit(1736860892.645:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 14 13:21:36.096447 systemd[1]: Successfully loaded SELinux policy in 211.368ms.
Jan 14 13:21:36.096459 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.711ms.
Jan 14 13:21:36.096470 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:21:36.096480 systemd[1]: Detected virtualization microsoft.
Jan 14 13:21:36.096494 systemd[1]: Detected architecture x86-64.
Jan 14 13:21:36.096507 systemd[1]: Detected first boot.
Jan 14 13:21:36.096517 systemd[1]: Hostname set to .
Jan 14 13:21:36.096529 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:21:36.096538 zram_generator::config[1177]: No configuration found.
Jan 14 13:21:36.096553 systemd[1]: Populated /etc with preset unit settings.
Jan 14 13:21:36.096563 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 13:21:36.096575 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 14 13:21:36.096586 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 13:21:36.096597 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 13:21:36.096608 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 13:21:36.096620 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 13:21:36.096633 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 13:21:36.096645 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 13:21:36.096655 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 13:21:36.096667 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 13:21:36.096678 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:21:36.096690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:21:36.096700 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 13:21:36.096716 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 13:21:36.096728 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 13:21:36.096739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:21:36.096749 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 13:21:36.096761 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:21:36.096779 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 13:21:36.096790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:21:36.096806 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:21:36.096817 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:21:36.096831 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:21:36.096844 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 13:21:36.096854 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 13:21:36.096867 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:21:36.096877 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:21:36.096890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:21:36.096900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:21:36.096914 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:21:36.096925 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 13:21:36.096938 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 13:21:36.096950 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 13:21:36.096962 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 13:21:36.096977 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:36.096988 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 13:21:36.097000 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 13:21:36.097012 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 13:21:36.097024 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 13:21:36.097035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:21:36.097047 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:21:36.097058 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 13:21:36.097072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:21:36.097085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:21:36.097095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:21:36.097108 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 13:21:36.097118 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:21:36.097131 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 13:21:36.097142 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 14 13:21:36.097156 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 14 13:21:36.097169 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:21:36.097181 kernel: fuse: init (API version 7.39)
Jan 14 13:21:36.097190 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:21:36.097201 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 13:21:36.097215 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 13:21:36.097225 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:21:36.097250 systemd-journald[1276]: Collecting audit messages is disabled.
Jan 14 13:21:36.097273 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:36.097284 systemd-journald[1276]: Journal started
Jan 14 13:21:36.097305 systemd-journald[1276]: Runtime Journal (/run/log/journal/a9c17ac0ebfd442fb89031bd2855d043) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:21:36.117796 kernel: loop: module loaded
Jan 14 13:21:36.117870 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:21:36.124331 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 13:21:36.127692 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 13:21:36.130802 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 13:21:36.133315 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 13:21:36.136486 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 13:21:36.139579 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 13:21:36.142516 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 13:21:36.145992 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:21:36.149589 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 13:21:36.149896 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 13:21:36.154528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:21:36.154735 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:21:36.158426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:21:36.158636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:21:36.162617 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 13:21:36.162861 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 13:21:36.166469 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:21:36.166818 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:21:36.170395 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:21:36.177798 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 13:21:36.181580 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 13:21:36.237913 kernel: ACPI: bus type drm_connector registered
Jan 14 13:21:36.233507 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 13:21:36.245002 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 13:21:36.250823 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 13:21:36.254686 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 13:21:36.263014 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 13:21:36.273015 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 13:21:36.276503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:21:36.286311 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 13:21:36.291227 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:21:36.292683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:21:36.303462 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:21:36.316109 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:21:36.319051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:21:36.323362 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:21:36.329751 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 13:21:36.335037 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 13:21:36.343322 systemd-journald[1276]: Time spent on flushing to /var/log/journal/a9c17ac0ebfd442fb89031bd2855d043 is 32.707ms for 930 entries.
Jan 14 13:21:36.343322 systemd-journald[1276]: System Journal (/var/log/journal/a9c17ac0ebfd442fb89031bd2855d043) is 8.0M, max 2.6G, 2.6G free.
Jan 14 13:21:36.403725 systemd-journald[1276]: Received client request to flush runtime journal.
Jan 14 13:21:36.354718 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 13:21:36.361764 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 13:21:36.376571 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 14 13:21:36.401741 udevadm[1344]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 14 13:21:36.405235 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 13:21:36.491869 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jan 14 13:21:36.491923 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jan 14 13:21:36.493146 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:21:36.506678 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:21:36.522108 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 13:21:36.881364 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 13:21:36.891556 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:21:36.909057 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
Jan 14 13:21:36.909081 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
Jan 14 13:21:36.913490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:21:38.018240 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 13:21:38.031053 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:21:38.057805 systemd-udevd[1364]: Using default interface naming scheme 'v255'.
Jan 14 13:21:38.392340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:21:38.406954 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:21:38.462564 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 14 13:21:38.491186 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 13:21:38.576717 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 13:21:38.645309 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 13:21:38.650228 kernel: hv_vmbus: registering driver hyperv_fb
Jan 14 13:21:38.650315 kernel: hv_vmbus: registering driver hv_balloon
Jan 14 13:21:38.656592 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 14 13:21:38.664421 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 14 13:21:38.664502 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 14 13:21:38.669812 kernel: Console: switching to colour dummy device 80x25
Jan 14 13:21:38.675045 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:21:38.703199 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:21:38.737883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:21:38.738820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:21:38.823135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:21:38.925377 systemd-networkd[1371]: lo: Link UP
Jan 14 13:21:38.926698 systemd-networkd[1371]: lo: Gained carrier
Jan 14 13:21:38.929984 systemd-networkd[1371]: Enumeration completed
Jan 14 13:21:38.930246 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:21:38.932226 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:21:38.934803 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:21:38.940862 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 13:21:38.955823 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1377)
Jan 14 13:21:38.996870 kernel: mlx5_core 7a61:00:02.0 enP31329s1: Link up
Jan 14 13:21:39.016800 kernel: hv_netvsc 000d3a7f-53c0-000d-3a7f-53c0000d3a7f eth0: Data path switched to VF: enP31329s1
Jan 14 13:21:39.029548 systemd-networkd[1371]: enP31329s1: Link UP
Jan 14 13:21:39.030366 systemd-networkd[1371]: eth0: Link UP
Jan 14 13:21:39.032810 systemd-networkd[1371]: eth0: Gained carrier
Jan 14 13:21:39.035829 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:21:39.041959 systemd-networkd[1371]: enP31329s1: Gained carrier
Jan 14 13:21:39.057156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:21:39.081897 systemd-networkd[1371]: eth0: DHCPv4 address 10.200.4.33/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 14 13:21:39.104795 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 14 13:21:39.139087 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 14 13:21:39.145000 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 14 13:21:39.278856 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:21:39.314192 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 14 13:21:39.322237 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:21:39.331930 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 14 13:21:39.341128 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:21:39.364452 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 14 13:21:39.372715 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 13:21:39.373604 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 13:21:39.374240 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:21:39.376493 systemd[1]: Reached target machines.target - Containers.
Jan 14 13:21:39.381596 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 14 13:21:39.391102 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 13:21:39.394717 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 13:21:39.397858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:21:39.399237 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 13:21:39.406080 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 14 13:21:39.419959 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 13:21:39.423953 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 13:21:39.452821 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 13:21:39.493804 kernel: loop0: detected capacity change from 0 to 211296
Jan 14 13:21:39.546976 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 13:21:39.549726 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 14 13:21:39.577833 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 13:21:39.612481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:21:39.622796 kernel: loop1: detected capacity change from 0 to 28272
Jan 14 13:21:40.034808 kernel: loop2: detected capacity change from 0 to 140992
Jan 14 13:21:40.365973 systemd-networkd[1371]: enP31329s1: Gained IPv6LL
Jan 14 13:21:40.550803 kernel: loop3: detected capacity change from 0 to 138184
Jan 14 13:21:40.685927 systemd-networkd[1371]: eth0: Gained IPv6LL
Jan 14 13:21:40.689203 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 13:21:40.958809 kernel: loop4: detected capacity change from 0 to 211296
Jan 14 13:21:40.981823 kernel: loop5: detected capacity change from 0 to 28272
Jan 14 13:21:40.990828 kernel: loop6: detected capacity change from 0 to 140992
Jan 14 13:21:41.003799 kernel: loop7: detected capacity change from 0 to 138184
Jan 14 13:21:41.019307 (sd-merge)[1508]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 14 13:21:41.019926 (sd-merge)[1508]: Merged extensions into '/usr'.
Jan 14 13:21:41.023584 systemd[1]: Reloading requested from client PID 1488 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 13:21:41.023598 systemd[1]: Reloading...
Jan 14 13:21:41.078810 zram_generator::config[1531]: No configuration found.
Jan 14 13:21:41.245361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:21:41.321867 systemd[1]: Reloading finished in 297 ms.
Jan 14 13:21:41.343026 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 13:21:41.356957 systemd[1]: Starting ensure-sysext.service...
Jan 14 13:21:41.361978 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:21:41.371050 systemd[1]: Reloading requested from client PID 1599 ('systemctl') (unit ensure-sysext.service)...
Jan 14 13:21:41.371071 systemd[1]: Reloading...
Jan 14 13:21:41.394075 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 13:21:41.394585 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 14 13:21:41.396260 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 14 13:21:41.396691 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Jan 14 13:21:41.396789 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Jan 14 13:21:41.417259 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:21:41.417430 systemd-tmpfiles[1600]: Skipping /boot
Jan 14 13:21:41.431392 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:21:41.431846 systemd-tmpfiles[1600]: Skipping /boot
Jan 14 13:21:41.453864 zram_generator::config[1629]: No configuration found.
Jan 14 13:21:41.608977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:21:41.686658 systemd[1]: Reloading finished in 315 ms.
Jan 14 13:21:41.712433 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:21:41.726901 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:21:41.762928 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 13:21:41.770084 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 13:21:41.777001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:21:41.787123 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 13:21:41.801520 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:41.802882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:21:41.811381 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:21:41.819416 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:21:41.826049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:21:41.830295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:21:41.831924 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:41.833833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:21:41.834045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:21:41.847571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:21:41.849113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:21:41.855213 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:21:41.857030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:21:41.877304 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 13:21:41.887704 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:41.890166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:21:41.899066 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:21:41.903912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:21:41.916518 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:21:41.922195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:21:41.922382 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:41.925072 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 13:21:41.928972 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:21:41.929111 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:21:41.932390 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:21:41.932528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:21:41.936641 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:21:41.936876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:21:41.954221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:41.954618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:21:41.958576 systemd-resolved[1700]: Positive Trust Anchors:
Jan 14 13:21:41.958764 systemd-resolved[1700]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:21:41.958841 systemd-resolved[1700]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:21:41.960117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:21:41.967035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:21:41.972048 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:21:41.985198 systemd-resolved[1700]: Using system hostname 'ci-4152.2.0-a-4236615464'.
Jan 14 13:21:41.987478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:21:41.994432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:21:41.994677 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 13:21:41.997729 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:21:41.999192 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:21:42.003001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:21:42.003821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:21:42.005313 augenrules[1753]: No rules
Jan 14 13:21:42.007607 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:21:42.007957 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:21:42.011205 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:21:42.011431 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:21:42.014668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:21:42.014932 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:21:42.018653 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:21:42.018876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:21:42.024666 systemd[1]: Finished ensure-sysext.service.
Jan 14 13:21:42.032269 systemd[1]: Reached target network.target - Network.
Jan 14 13:21:42.034834 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 13:21:42.037745 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:21:42.040814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:21:42.040890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:21:42.616205 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 13:21:42.620457 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 13:21:45.931462 ldconfig[1485]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 13:21:45.943565 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 13:21:45.952958 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 13:21:45.980228 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 13:21:45.984171 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:21:45.987639 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 13:21:45.991006 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 13:21:45.994723 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 13:21:45.997562 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 13:21:46.000730 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 13:21:46.003868 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 13:21:46.003927 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:21:46.006183 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:21:46.009424 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 13:21:46.014456 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 13:21:46.036730 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 13:21:46.040009 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 13:21:46.045331 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:21:46.047872 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:21:46.050504 systemd[1]: System is tainted: cgroupsv1
Jan 14 13:21:46.050572 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:21:46.050610 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:21:46.081905 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 14 13:21:46.093921 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 13:21:46.111917 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 14 13:21:46.123983 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 13:21:46.130889 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 13:21:46.141188 (chronyd)[1776]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 14 13:21:46.145162 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 13:21:46.149899 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 13:21:46.153585 jq[1784]: false
Jan 14 13:21:46.149968 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 14 13:21:46.153291 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 14 13:21:46.157900 chronyd[1788]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 14 13:21:46.161117 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 14 13:21:46.169890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:21:46.177947 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 13:21:46.178529 KVP[1786]: KVP starting; pid is:1786
Jan 14 13:21:46.182454 chronyd[1788]: Timezone right/UTC failed leap second check, ignoring
Jan 14 13:21:46.182714 chronyd[1788]: Loaded seccomp filter (level 2)
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found loop4
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found loop5
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found loop6
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found loop7
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found sda
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found sda1
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found sda2
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found sda3
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found usr
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found sda4
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found sda6
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found sda7
Jan 14 13:21:46.194901 extend-filesystems[1785]: Found sda9
Jan 14 13:21:46.194901 extend-filesystems[1785]: Checking size of /dev/sda9
Jan 14 13:21:46.215944 kernel: hv_utils: KVP IC version 4.0
Jan 14 13:21:46.191697 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 13:21:46.215151 KVP[1786]: KVP LIC Version: 3.1
Jan 14 13:21:46.203947 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 13:21:46.234978 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 13:21:46.246090 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 13:21:46.254174 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 13:21:46.257031 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 13:21:46.272965 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 13:21:46.280089 systemd[1]: Started chronyd.service - NTP client/server.
Jan 14 13:21:46.281536 jq[1812]: true
Jan 14 13:21:46.290251 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 13:21:46.290568 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 13:21:46.291998 extend-filesystems[1785]: Old size kept for /dev/sda9
Jan 14 13:21:46.294427 extend-filesystems[1785]: Found sr0
Jan 14 13:21:46.314315 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 13:21:46.315382 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 13:21:46.321582 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 13:21:46.322261 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 13:21:46.332582 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 13:21:46.335447 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 13:21:46.335746 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 13:21:46.353203 jq[1827]: true
Jan 14 13:21:46.365264 (ntainerd)[1829]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 14 13:21:46.392625 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1845)
Jan 14 13:21:46.400606 dbus-daemon[1780]: [system] SELinux support is enabled
Jan 14 13:21:46.401906 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 13:21:46.414575 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 13:21:46.414625 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 13:21:46.418199 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 13:21:46.418233 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 13:21:46.439935 update_engine[1809]: I20250114 13:21:46.437736 1809 main.cc:92] Flatcar Update Engine starting
Jan 14 13:21:46.441619 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 13:21:46.445852 update_engine[1809]: I20250114 13:21:46.445647 1809 update_check_scheduler.cc:74] Next update check in 6m24s
Jan 14 13:21:46.448249 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 13:21:46.455951 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 13:21:46.509529 systemd-logind[1807]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 13:21:46.514016 systemd-logind[1807]: New seat seat0.
Jan 14 13:21:46.523093 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 13:21:46.563946 coreos-metadata[1779]: Jan 14 13:21:46.563 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:21:46.570901 coreos-metadata[1779]: Jan 14 13:21:46.570 INFO Fetch successful
Jan 14 13:21:46.570901 coreos-metadata[1779]: Jan 14 13:21:46.570 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 14 13:21:46.576416 coreos-metadata[1779]: Jan 14 13:21:46.576 INFO Fetch successful
Jan 14 13:21:46.576591 coreos-metadata[1779]: Jan 14 13:21:46.576 INFO Fetching http://168.63.129.16/machine/89701126-b14f-4320-8b18-a480b0f061d9/d4916944%2Df597%2D4161%2Dbf9b%2D3d50ebdefc13.%5Fci%2D4152.2.0%2Da%2D4236615464?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 14 13:21:46.578296 coreos-metadata[1779]: Jan 14 13:21:46.578 INFO Fetch successful
Jan 14 13:21:46.580765 coreos-metadata[1779]: Jan 14 13:21:46.578 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:21:46.592169 coreos-metadata[1779]: Jan 14 13:21:46.592 INFO Fetch successful
Jan 14 13:21:46.619699 bash[1889]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 13:21:46.623610 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 13:21:46.629548 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 13:21:46.649305 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 14 13:21:46.656606 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 13:21:46.766943 locksmithd[1882]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 13:21:46.826503 sshd_keygen[1851]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 13:21:46.852604 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 13:21:46.864054 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 13:21:46.873039 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 14 13:21:46.881734 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 13:21:46.882101 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 13:21:46.900271 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 13:21:46.917941 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 14 13:21:46.925675 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 13:21:46.935543 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 13:21:46.941016 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 13:21:46.948134 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 13:21:47.749100 containerd[1829]: time="2025-01-14T13:21:47.749004100Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 14 13:21:47.777618 containerd[1829]: time="2025-01-14T13:21:47.775198100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:47.779528 containerd[1829]: time="2025-01-14T13:21:47.779480400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:21:47.779528 containerd[1829]: time="2025-01-14T13:21:47.779518200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 14 13:21:47.779643 containerd[1829]: time="2025-01-14T13:21:47.779540300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 14 13:21:47.779733 containerd[1829]: time="2025-01-14T13:21:47.779709000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 14 13:21:47.780496 containerd[1829]: time="2025-01-14T13:21:47.779742500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:47.780496 containerd[1829]: time="2025-01-14T13:21:47.779846600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:21:47.780496 containerd[1829]: time="2025-01-14T13:21:47.779864900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:47.780496 containerd[1829]: time="2025-01-14T13:21:47.780119500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:21:47.780496 containerd[1829]: time="2025-01-14T13:21:47.780138100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:47.780496 containerd[1829]: time="2025-01-14T13:21:47.780155000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:21:47.780496 containerd[1829]: time="2025-01-14T13:21:47.780167500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:47.780496 containerd[1829]: time="2025-01-14T13:21:47.780263000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:47.781111 containerd[1829]: time="2025-01-14T13:21:47.781078700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:21:47.781338 containerd[1829]: time="2025-01-14T13:21:47.781307700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:21:47.781395 containerd[1829]: time="2025-01-14T13:21:47.781338100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 14 13:21:47.781463 containerd[1829]: time="2025-01-14T13:21:47.781442700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 14 13:21:47.781643 containerd[1829]: time="2025-01-14T13:21:47.781508300Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 13:21:47.795967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:21:47.800661 containerd[1829]: time="2025-01-14T13:21:47.800612700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 14 13:21:47.800756 containerd[1829]: time="2025-01-14T13:21:47.800697800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 14 13:21:47.800756 containerd[1829]: time="2025-01-14T13:21:47.800724400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 14 13:21:47.800756 containerd[1829]: time="2025-01-14T13:21:47.800746600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 14 13:21:47.800871 containerd[1829]: time="2025-01-14T13:21:47.800767600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 14 13:21:47.801013 containerd[1829]: time="2025-01-14T13:21:47.800986900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 14 13:21:47.801450 containerd[1829]: time="2025-01-14T13:21:47.801418000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 14 13:21:47.801795 containerd[1829]: time="2025-01-14T13:21:47.801567200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 14 13:21:47.801795 containerd[1829]: time="2025-01-14T13:21:47.801595600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 14 13:21:47.801795 containerd[1829]: time="2025-01-14T13:21:47.801626600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 14 13:21:47.801795 containerd[1829]: time="2025-01-14T13:21:47.801651400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 14 13:21:47.801795 containerd[1829]: time="2025-01-14T13:21:47.801673600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 14 13:21:47.801795 containerd[1829]: time="2025-01-14T13:21:47.801699100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 14 13:21:47.801795 containerd[1829]: time="2025-01-14T13:21:47.801721100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"...
type=io.containerd.service.v1 Jan 14 13:21:47.801795 containerd[1829]: time="2025-01-14T13:21:47.801742800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 14 13:21:47.801795 containerd[1829]: time="2025-01-14T13:21:47.801764400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 14 13:21:47.801795 containerd[1829]: time="2025-01-14T13:21:47.801795600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.801812300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.801838300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.801865800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.801885300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.801904300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.801922500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.801942300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.801960800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.801979200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.801996700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.802017400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.802034100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.802058500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.802079300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802143 containerd[1829]: time="2025-01-14T13:21:47.802106200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802139900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802158900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802175600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802231200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802261400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802278100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802295100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802310600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802328100Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802343500Z" level=info msg="NRI interface is disabled by configuration." Jan 14 13:21:47.802640 containerd[1829]: time="2025-01-14T13:21:47.802365200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 14 13:21:47.803064 containerd[1829]: time="2025-01-14T13:21:47.802763800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 14 13:21:47.803064 containerd[1829]: time="2025-01-14T13:21:47.802838400Z" level=info msg="Connect containerd service" Jan 14 13:21:47.803064 containerd[1829]: time="2025-01-14T13:21:47.802901800Z" level=info msg="using legacy CRI server" Jan 14 13:21:47.803064 containerd[1829]: time="2025-01-14T13:21:47.802913900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 13:21:47.803344 containerd[1829]: time="2025-01-14T13:21:47.803068200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 14 13:21:47.803901 containerd[1829]: time="2025-01-14T13:21:47.803736200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:21:47.804250 containerd[1829]: time="2025-01-14T13:21:47.804114600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 13:21:47.804250 containerd[1829]: time="2025-01-14T13:21:47.804159000Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 14 13:21:47.804530 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:21:47.805119 containerd[1829]: time="2025-01-14T13:21:47.804563000Z" level=info msg="Start subscribing containerd event" Jan 14 13:21:47.805119 containerd[1829]: time="2025-01-14T13:21:47.804610600Z" level=info msg="Start recovering state" Jan 14 13:21:47.805119 containerd[1829]: time="2025-01-14T13:21:47.804680400Z" level=info msg="Start event monitor" Jan 14 13:21:47.805119 containerd[1829]: time="2025-01-14T13:21:47.804697200Z" level=info msg="Start snapshots syncer" Jan 14 13:21:47.805119 containerd[1829]: time="2025-01-14T13:21:47.804710500Z" level=info msg="Start cni network conf syncer for default" Jan 14 13:21:47.805119 containerd[1829]: time="2025-01-14T13:21:47.804722600Z" level=info msg="Start streaming server" Jan 14 13:21:47.805119 containerd[1829]: time="2025-01-14T13:21:47.804886000Z" level=info msg="containerd successfully booted in 0.056773s" Jan 14 13:21:47.808124 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 13:21:47.812283 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 13:21:47.815761 systemd[1]: Startup finished in 935ms (firmware) + 32.840s (loader) + 13.392s (kernel) + 15.380s (userspace) = 1min 2.549s. Jan 14 13:21:48.262655 login[1963]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 14 13:21:48.266368 login[1964]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 14 13:21:48.275448 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 13:21:48.282326 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 13:21:48.286521 systemd-logind[1807]: New session 1 of user core. 
Jan 14 13:21:48.290416 systemd-logind[1807]: New session 2 of user core. Jan 14 13:21:48.309027 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 13:21:48.319179 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 13:21:48.329740 (systemd)[1994]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 14 13:21:48.526922 systemd[1994]: Queued start job for default target default.target. Jan 14 13:21:48.527427 systemd[1994]: Created slice app.slice - User Application Slice. Jan 14 13:21:48.527456 systemd[1994]: Reached target paths.target - Paths. Jan 14 13:21:48.527475 systemd[1994]: Reached target timers.target - Timers. Jan 14 13:21:48.533004 systemd[1994]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 13:21:48.543812 systemd[1994]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 13:21:48.545250 systemd[1994]: Reached target sockets.target - Sockets. Jan 14 13:21:48.545407 systemd[1994]: Reached target basic.target - Basic System. Jan 14 13:21:48.545548 systemd[1994]: Reached target default.target - Main User Target. Jan 14 13:21:48.545679 systemd[1994]: Startup finished in 208ms. Jan 14 13:21:48.545959 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 13:21:48.556454 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 13:21:48.557511 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 14 13:21:48.707461 kubelet[1981]: E0114 13:21:48.707377 1981 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:21:48.710316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:21:48.710663 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:21:49.305002 waagent[1960]: 2025-01-14T13:21:49.304886Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 14 13:21:49.308077 waagent[1960]: 2025-01-14T13:21:49.308002Z INFO Daemon Daemon OS: flatcar 4152.2.0 Jan 14 13:21:49.310364 waagent[1960]: 2025-01-14T13:21:49.310311Z INFO Daemon Daemon Python: 3.11.10 Jan 14 13:21:49.312793 waagent[1960]: 2025-01-14T13:21:49.312724Z INFO Daemon Daemon Run daemon Jan 14 13:21:49.315050 waagent[1960]: 2025-01-14T13:21:49.314943Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.0' Jan 14 13:21:49.319157 waagent[1960]: 2025-01-14T13:21:49.317356Z INFO Daemon Daemon Using waagent for provisioning Jan 14 13:21:49.321989 waagent[1960]: 2025-01-14T13:21:49.321941Z INFO Daemon Daemon Activate resource disk Jan 14 13:21:49.327297 waagent[1960]: 2025-01-14T13:21:49.327216Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 14 13:21:49.335697 waagent[1960]: 2025-01-14T13:21:49.335636Z INFO Daemon Daemon Found device: None Jan 14 13:21:49.364388 waagent[1960]: 2025-01-14T13:21:49.335861Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 14 13:21:49.364388 waagent[1960]: 2025-01-14T13:21:49.336403Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] 
unable to detect disk topology, duration=0 Jan 14 13:21:49.364388 waagent[1960]: 2025-01-14T13:21:49.337640Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 13:21:49.364388 waagent[1960]: 2025-01-14T13:21:49.338522Z INFO Daemon Daemon Running default provisioning handler Jan 14 13:21:49.364388 waagent[1960]: 2025-01-14T13:21:49.346678Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 14 13:21:49.364388 waagent[1960]: 2025-01-14T13:21:49.348635Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 14 13:21:49.364388 waagent[1960]: 2025-01-14T13:21:49.349343Z INFO Daemon Daemon cloud-init is enabled: False Jan 14 13:21:49.364388 waagent[1960]: 2025-01-14T13:21:49.349782Z INFO Daemon Daemon Copying ovf-env.xml Jan 14 13:21:49.478756 waagent[1960]: 2025-01-14T13:21:49.474026Z INFO Daemon Daemon Successfully mounted dvd Jan 14 13:21:49.486912 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 14 13:21:49.488822 waagent[1960]: 2025-01-14T13:21:49.488720Z INFO Daemon Daemon Detect protocol endpoint Jan 14 13:21:49.498491 waagent[1960]: 2025-01-14T13:21:49.489107Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 13:21:49.498491 waagent[1960]: 2025-01-14T13:21:49.490310Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 14 13:21:49.498491 waagent[1960]: 2025-01-14T13:21:49.490688Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 14 13:21:49.498491 waagent[1960]: 2025-01-14T13:21:49.491696Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 14 13:21:49.498491 waagent[1960]: 2025-01-14T13:21:49.492512Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 14 13:21:49.534462 waagent[1960]: 2025-01-14T13:21:49.534409Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 14 13:21:49.545523 waagent[1960]: 2025-01-14T13:21:49.534939Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 14 13:21:49.545523 waagent[1960]: 2025-01-14T13:21:49.538280Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 14 13:21:49.644702 waagent[1960]: 2025-01-14T13:21:49.644523Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 14 13:21:49.647817 waagent[1960]: 2025-01-14T13:21:49.647731Z INFO Daemon Daemon Forcing an update of the goal state. Jan 14 13:21:49.654238 waagent[1960]: 2025-01-14T13:21:49.654179Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:21:49.671886 waagent[1960]: 2025-01-14T13:21:49.671834Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.162 Jan 14 13:21:49.689843 waagent[1960]: 2025-01-14T13:21:49.672536Z INFO Daemon Jan 14 13:21:49.689843 waagent[1960]: 2025-01-14T13:21:49.672795Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b2ad54f1-967c-47d9-a43a-31598720bb82 eTag: 17005858834206005187 source: Fabric] Jan 14 13:21:49.689843 waagent[1960]: 2025-01-14T13:21:49.673524Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 14 13:21:49.689843 waagent[1960]: 2025-01-14T13:21:49.674253Z INFO Daemon Jan 14 13:21:49.689843 waagent[1960]: 2025-01-14T13:21:49.675141Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:21:49.689843 waagent[1960]: 2025-01-14T13:21:49.678999Z INFO Daemon Daemon Downloading artifacts profile blob Jan 14 13:21:49.752940 waagent[1960]: 2025-01-14T13:21:49.752853Z INFO Daemon Downloaded certificate {'thumbprint': '868C8F7417C526B1E910F1C5B4D43C1B630E7A35', 'hasPrivateKey': True} Jan 14 13:21:49.759712 waagent[1960]: 2025-01-14T13:21:49.753828Z INFO Daemon Fetch goal state completed Jan 14 13:21:49.761714 waagent[1960]: 2025-01-14T13:21:49.761663Z INFO Daemon Daemon Starting provisioning Jan 14 13:21:49.768792 waagent[1960]: 2025-01-14T13:21:49.761924Z INFO Daemon Daemon Handle ovf-env.xml. Jan 14 13:21:49.768792 waagent[1960]: 2025-01-14T13:21:49.762480Z INFO Daemon Daemon Set hostname [ci-4152.2.0-a-4236615464] Jan 14 13:21:49.795289 waagent[1960]: 2025-01-14T13:21:49.795200Z INFO Daemon Daemon Publish hostname [ci-4152.2.0-a-4236615464] Jan 14 13:21:49.803476 waagent[1960]: 2025-01-14T13:21:49.795752Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 14 13:21:49.803476 waagent[1960]: 2025-01-14T13:21:49.796922Z INFO Daemon Daemon Primary interface is [eth0] Jan 14 13:21:49.825066 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:21:49.825075 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 13:21:49.825126 systemd-networkd[1371]: eth0: DHCP lease lost Jan 14 13:21:49.826387 waagent[1960]: 2025-01-14T13:21:49.826310Z INFO Daemon Daemon Create user account if not exists Jan 14 13:21:49.831822 waagent[1960]: 2025-01-14T13:21:49.826664Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 13:21:49.831822 waagent[1960]: 2025-01-14T13:21:49.827568Z INFO Daemon Daemon Configure sudoer Jan 14 13:21:49.831822 waagent[1960]: 2025-01-14T13:21:49.828899Z INFO Daemon Daemon Configure sshd Jan 14 13:21:49.831822 waagent[1960]: 2025-01-14T13:21:49.829882Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 13:21:49.831822 waagent[1960]: 2025-01-14T13:21:49.830821Z INFO Daemon Daemon Deploy ssh public key. Jan 14 13:21:49.843924 systemd-networkd[1371]: eth0: DHCPv6 lease lost Jan 14 13:21:49.881848 systemd-networkd[1371]: eth0: DHCPv4 address 10.200.4.33/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:21:50.938376 waagent[1960]: 2025-01-14T13:21:50.938297Z INFO Daemon Daemon Provisioning complete Jan 14 13:21:50.947944 waagent[1960]: 2025-01-14T13:21:50.947889Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 13:21:50.957567 waagent[1960]: 2025-01-14T13:21:50.948188Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 14 13:21:50.957567 waagent[1960]: 2025-01-14T13:21:50.948690Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 14 13:21:51.077188 waagent[2050]: 2025-01-14T13:21:51.077073Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 14 13:21:51.077669 waagent[2050]: 2025-01-14T13:21:51.077260Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.0 Jan 14 13:21:51.077669 waagent[2050]: 2025-01-14T13:21:51.077343Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 14 13:21:51.137877 waagent[2050]: 2025-01-14T13:21:51.137750Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 14 13:21:51.138119 waagent[2050]: 2025-01-14T13:21:51.138070Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:21:51.138215 waagent[2050]: 2025-01-14T13:21:51.138169Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:21:51.146200 waagent[2050]: 2025-01-14T13:21:51.146129Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:21:51.151422 waagent[2050]: 2025-01-14T13:21:51.151368Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.162 Jan 14 13:21:51.151897 waagent[2050]: 2025-01-14T13:21:51.151842Z INFO ExtHandler Jan 14 13:21:51.151997 waagent[2050]: 2025-01-14T13:21:51.151936Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0990da83-d62c-439f-88b6-d6ff628f1481 eTag: 17005858834206005187 source: Fabric] Jan 14 13:21:51.152309 waagent[2050]: 2025-01-14T13:21:51.152257Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 14 13:21:51.152890 waagent[2050]: 2025-01-14T13:21:51.152834Z INFO ExtHandler Jan 14 13:21:51.152953 waagent[2050]: 2025-01-14T13:21:51.152922Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:21:51.156293 waagent[2050]: 2025-01-14T13:21:51.156250Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 13:21:51.215309 waagent[2050]: 2025-01-14T13:21:51.215169Z INFO ExtHandler Downloaded certificate {'thumbprint': '868C8F7417C526B1E910F1C5B4D43C1B630E7A35', 'hasPrivateKey': True} Jan 14 13:21:51.215843 waagent[2050]: 2025-01-14T13:21:51.215730Z INFO ExtHandler Fetch goal state completed Jan 14 13:21:51.228748 waagent[2050]: 2025-01-14T13:21:51.228677Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2050 Jan 14 13:21:51.228928 waagent[2050]: 2025-01-14T13:21:51.228878Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 13:21:51.230534 waagent[2050]: 2025-01-14T13:21:51.230473Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 13:21:51.230917 waagent[2050]: 2025-01-14T13:21:51.230865Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 13:21:51.250344 waagent[2050]: 2025-01-14T13:21:51.250301Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 13:21:51.250540 waagent[2050]: 2025-01-14T13:21:51.250494Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 13:21:51.257310 waagent[2050]: 2025-01-14T13:21:51.257263Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 13:21:51.264570 systemd[1]: Reloading requested from client PID 2063 ('systemctl') (unit waagent.service)... Jan 14 13:21:51.264589 systemd[1]: Reloading... 
Jan 14 13:21:51.350804 zram_generator::config[2097]: No configuration found. Jan 14 13:21:51.472604 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:21:51.551880 systemd[1]: Reloading finished in 286 ms. Jan 14 13:21:51.575445 waagent[2050]: 2025-01-14T13:21:51.575008Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 14 13:21:51.583079 systemd[1]: Reloading requested from client PID 2159 ('systemctl') (unit waagent.service)... Jan 14 13:21:51.583094 systemd[1]: Reloading... Jan 14 13:21:51.649843 zram_generator::config[2189]: No configuration found. Jan 14 13:21:51.786234 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:21:51.865685 systemd[1]: Reloading finished in 282 ms. Jan 14 13:21:51.889803 waagent[2050]: 2025-01-14T13:21:51.889365Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 13:21:51.891059 waagent[2050]: 2025-01-14T13:21:51.890101Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 13:21:52.494233 waagent[2050]: 2025-01-14T13:21:52.494113Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 13:21:52.495099 waagent[2050]: 2025-01-14T13:21:52.495022Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 14 13:21:52.496062 waagent[2050]: 2025-01-14T13:21:52.495982Z INFO ExtHandler ExtHandler Starting env monitor service. 
Jan 14 13:21:52.496546 waagent[2050]: 2025-01-14T13:21:52.496472Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 13:21:52.496709 waagent[2050]: 2025-01-14T13:21:52.496643Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:21:52.496877 waagent[2050]: 2025-01-14T13:21:52.496819Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:21:52.497067 waagent[2050]: 2025-01-14T13:21:52.497015Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:21:52.497396 waagent[2050]: 2025-01-14T13:21:52.497344Z INFO EnvHandler ExtHandler Configure routes Jan 14 13:21:52.497552 waagent[2050]: 2025-01-14T13:21:52.497499Z INFO EnvHandler ExtHandler Gateway:None Jan 14 13:21:52.497677 waagent[2050]: 2025-01-14T13:21:52.497593Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:21:52.498052 waagent[2050]: 2025-01-14T13:21:52.497999Z INFO EnvHandler ExtHandler Routes:None Jan 14 13:21:52.500123 waagent[2050]: 2025-01-14T13:21:52.498685Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 13:21:52.500123 waagent[2050]: 2025-01-14T13:21:52.499164Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 14 13:21:52.500123 waagent[2050]: 2025-01-14T13:21:52.499441Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 13:21:52.500123 waagent[2050]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 13:21:52.500123 waagent[2050]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 13:21:52.500123 waagent[2050]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 13:21:52.500123 waagent[2050]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:21:52.500123 waagent[2050]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:21:52.500123 waagent[2050]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:21:52.500123 waagent[2050]: 2025-01-14T13:21:52.499565Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 14 13:21:52.501252 waagent[2050]: 2025-01-14T13:21:52.500704Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 14 13:21:52.501343 waagent[2050]: 2025-01-14T13:21:52.500638Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 13:21:52.501522 waagent[2050]: 2025-01-14T13:21:52.501478Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 13:21:52.508875 waagent[2050]: 2025-01-14T13:21:52.508824Z INFO ExtHandler ExtHandler Jan 14 13:21:52.509970 waagent[2050]: 2025-01-14T13:21:52.509922Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fefbef2d-055f-4610-b28f-0847eb8f004f correlation 5dc187d1-c9bb-4ee9-baf2-a69cacf9e19d created: 2025-01-14T13:20:31.642090Z] Jan 14 13:21:52.510559 waagent[2050]: 2025-01-14T13:21:52.510502Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 14 13:21:52.511428 waagent[2050]: 2025-01-14T13:21:52.511377Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Jan 14 13:21:52.543637 waagent[2050]: 2025-01-14T13:21:52.543567Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1062ACDF-2A97-41DD-83D1-5BE5B018E74F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 14 13:21:52.576451 waagent[2050]: 2025-01-14T13:21:52.576360Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 14 13:21:52.576451 waagent[2050]: Executing ['ip', '-a', '-o', 'link']:
Jan 14 13:21:52.576451 waagent[2050]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 14 13:21:52.576451 waagent[2050]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:7f:53:c0 brd ff:ff:ff:ff:ff:ff
Jan 14 13:21:52.576451 waagent[2050]: 3: enP31329s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:7f:53:c0 brd ff:ff:ff:ff:ff:ff\ altname enP31329p0s2
Jan 14 13:21:52.576451 waagent[2050]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 14 13:21:52.576451 waagent[2050]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 14 13:21:52.576451 waagent[2050]: 2: eth0 inet 10.200.4.33/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 14 13:21:52.576451 waagent[2050]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 14 13:21:52.576451 waagent[2050]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 14 13:21:52.576451 waagent[2050]: 2: eth0 inet6 fe80::20d:3aff:fe7f:53c0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 14 13:21:52.576451 waagent[2050]: 3: enP31329s1 inet6 fe80::20d:3aff:fe7f:53c0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 14 13:21:52.610308 waagent[2050]: 2025-01-14T13:21:52.608528Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jan 14 13:21:52.610308 waagent[2050]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:21:52.610308 waagent[2050]: pkts bytes target prot opt in out source destination
Jan 14 13:21:52.610308 waagent[2050]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:21:52.610308 waagent[2050]: pkts bytes target prot opt in out source destination
Jan 14 13:21:52.610308 waagent[2050]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:21:52.610308 waagent[2050]: pkts bytes target prot opt in out source destination
Jan 14 13:21:52.610308 waagent[2050]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 14 13:21:52.610308 waagent[2050]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 14 13:21:52.610308 waagent[2050]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 14 13:21:52.615041 waagent[2050]: 2025-01-14T13:21:52.614955Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 14 13:21:52.615041 waagent[2050]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:21:52.615041 waagent[2050]: pkts bytes target prot opt in out source destination
Jan 14 13:21:52.615041 waagent[2050]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:21:52.615041 waagent[2050]: pkts bytes target prot opt in out source destination
Jan 14 13:21:52.615041 waagent[2050]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:21:52.615041 waagent[2050]: pkts bytes target prot opt in out source destination
Jan 14 13:21:52.615041 waagent[2050]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 14 13:21:52.615041 waagent[2050]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 14 13:21:52.615041 waagent[2050]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 14 13:21:52.615434 waagent[2050]: 2025-01-14T13:21:52.615339Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 14 13:21:58.909495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 14 13:21:58.916985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:21:59.032958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:21:59.036136 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:21:59.589143 kubelet[2298]: E0114 13:21:59.589076 2298 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:21:59.593806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:21:59.594141 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:22:09.659743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 14 13:22:09.666023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:22:09.773995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:22:09.779331 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:22:09.970259 chronyd[1788]: Selected source PHC0
Jan 14 13:22:10.406072 kubelet[2320]: E0114 13:22:10.405929 2320 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:22:10.409565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:22:10.410070 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:22:20.659591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 14 13:22:20.666350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:22:21.026977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:22:21.033131 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:22:21.291945 kubelet[2341]: E0114 13:22:21.291810 2341 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:22:21.294961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:22:21.295316 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:22:26.806664 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jan 14 13:22:31.409585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 14 13:22:31.416030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:22:31.775004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:22:31.775259 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:22:31.818275 kubelet[2361]: E0114 13:22:31.818215 2361 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:22:31.821205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:22:31.821520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:22:32.085756 update_engine[1809]: I20250114 13:22:32.085536 1809 update_attempter.cc:509] Updating boot flags...
Jan 14 13:22:32.188825 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2386)
Jan 14 13:22:32.323123 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2385)
Jan 14 13:22:39.050701 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 14 13:22:39.063108 systemd[1]: Started sshd@0-10.200.4.33:22-10.200.16.10:58634.service - OpenSSH per-connection server daemon (10.200.16.10:58634).
Jan 14 13:22:39.782208 sshd[2485]: Accepted publickey for core from 10.200.16.10 port 58634 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:39.784135 sshd-session[2485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:39.789002 systemd-logind[1807]: New session 3 of user core.
Jan 14 13:22:39.799029 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 14 13:22:40.308068 systemd[1]: Started sshd@1-10.200.4.33:22-10.200.16.10:58644.service - OpenSSH per-connection server daemon (10.200.16.10:58644).
Jan 14 13:22:40.909576 sshd[2490]: Accepted publickey for core from 10.200.16.10 port 58644 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:40.911407 sshd-session[2490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:40.921330 systemd-logind[1807]: New session 4 of user core.
Jan 14 13:22:40.933076 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 14 13:22:41.339215 sshd[2493]: Connection closed by 10.200.16.10 port 58644
Jan 14 13:22:41.340440 sshd-session[2490]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:41.346514 systemd[1]: sshd@1-10.200.4.33:22-10.200.16.10:58644.service: Deactivated successfully.
Jan 14 13:22:41.350758 systemd-logind[1807]: Session 4 logged out. Waiting for processes to exit.
Jan 14 13:22:41.352055 systemd[1]: session-4.scope: Deactivated successfully.
Jan 14 13:22:41.353877 systemd-logind[1807]: Removed session 4.
Jan 14 13:22:41.442108 systemd[1]: Started sshd@2-10.200.4.33:22-10.200.16.10:58652.service - OpenSSH per-connection server daemon (10.200.16.10:58652).
Jan 14 13:22:41.909608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 14 13:22:41.915031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:22:42.041241 sshd[2498]: Accepted publickey for core from 10.200.16.10 port 58652 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:42.043151 sshd-session[2498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:42.049606 systemd-logind[1807]: New session 5 of user core.
Jan 14 13:22:42.053466 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 13:22:42.204034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:22:42.210123 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:22:42.266726 kubelet[2514]: E0114 13:22:42.266618 2514 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:22:42.269838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:22:42.270167 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:22:42.466563 sshd[2505]: Connection closed by 10.200.16.10 port 58652
Jan 14 13:22:42.467338 sshd-session[2498]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:42.470241 systemd[1]: sshd@2-10.200.4.33:22-10.200.16.10:58652.service: Deactivated successfully.
Jan 14 13:22:42.474141 systemd-logind[1807]: Session 5 logged out. Waiting for processes to exit.
Jan 14 13:22:42.474943 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 13:22:42.476934 systemd-logind[1807]: Removed session 5.
Jan 14 13:22:42.582110 systemd[1]: Started sshd@3-10.200.4.33:22-10.200.16.10:58666.service - OpenSSH per-connection server daemon (10.200.16.10:58666).
Jan 14 13:22:43.181083 sshd[2527]: Accepted publickey for core from 10.200.16.10 port 58666 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:43.182519 sshd-session[2527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:43.186832 systemd-logind[1807]: New session 6 of user core.
Jan 14 13:22:43.197088 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 14 13:22:43.610421 sshd[2530]: Connection closed by 10.200.16.10 port 58666
Jan 14 13:22:43.611699 sshd-session[2527]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:43.616375 systemd[1]: sshd@3-10.200.4.33:22-10.200.16.10:58666.service: Deactivated successfully.
Jan 14 13:22:43.620673 systemd-logind[1807]: Session 6 logged out. Waiting for processes to exit.
Jan 14 13:22:43.621324 systemd[1]: session-6.scope: Deactivated successfully.
Jan 14 13:22:43.622347 systemd-logind[1807]: Removed session 6.
Jan 14 13:22:43.720498 systemd[1]: Started sshd@4-10.200.4.33:22-10.200.16.10:58668.service - OpenSSH per-connection server daemon (10.200.16.10:58668).
Jan 14 13:22:44.319400 sshd[2535]: Accepted publickey for core from 10.200.16.10 port 58668 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:44.325771 sshd-session[2535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:44.330033 systemd-logind[1807]: New session 7 of user core.
Jan 14 13:22:44.338141 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 14 13:22:44.822941 sudo[2539]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 14 13:22:44.823314 sudo[2539]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:22:44.858593 sudo[2539]: pam_unix(sudo:session): session closed for user root
Jan 14 13:22:44.958325 sshd[2538]: Connection closed by 10.200.16.10 port 58668
Jan 14 13:22:44.959550 sshd-session[2535]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:44.964558 systemd[1]: sshd@4-10.200.4.33:22-10.200.16.10:58668.service: Deactivated successfully.
Jan 14 13:22:44.968823 systemd-logind[1807]: Session 7 logged out. Waiting for processes to exit.
Jan 14 13:22:44.969142 systemd[1]: session-7.scope: Deactivated successfully.
Jan 14 13:22:44.970999 systemd-logind[1807]: Removed session 7.
Jan 14 13:22:45.065344 systemd[1]: Started sshd@5-10.200.4.33:22-10.200.16.10:58680.service - OpenSSH per-connection server daemon (10.200.16.10:58680).
Jan 14 13:22:45.665833 sshd[2544]: Accepted publickey for core from 10.200.16.10 port 58680 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:45.667612 sshd-session[2544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:45.672565 systemd-logind[1807]: New session 8 of user core.
Jan 14 13:22:45.681141 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 14 13:22:45.997807 sudo[2549]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 14 13:22:45.998166 sudo[2549]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:22:46.001700 sudo[2549]: pam_unix(sudo:session): session closed for user root
Jan 14 13:22:46.010496 sudo[2548]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 14 13:22:46.010863 sudo[2548]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:22:46.030358 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:22:46.057820 augenrules[2571]: No rules
Jan 14 13:22:46.059578 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:22:46.060909 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:22:46.063211 sudo[2548]: pam_unix(sudo:session): session closed for user root
Jan 14 13:22:46.162750 sshd[2547]: Connection closed by 10.200.16.10 port 58680
Jan 14 13:22:46.163613 sshd-session[2544]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:46.167328 systemd[1]: sshd@5-10.200.4.33:22-10.200.16.10:58680.service: Deactivated successfully.
Jan 14 13:22:46.172702 systemd-logind[1807]: Session 8 logged out. Waiting for processes to exit.
Jan 14 13:22:46.173479 systemd[1]: session-8.scope: Deactivated successfully.
Jan 14 13:22:46.174640 systemd-logind[1807]: Removed session 8.
Jan 14 13:22:46.276166 systemd[1]: Started sshd@6-10.200.4.33:22-10.200.16.10:46888.service - OpenSSH per-connection server daemon (10.200.16.10:46888).
Jan 14 13:22:46.880691 sshd[2580]: Accepted publickey for core from 10.200.16.10 port 46888 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:46.882435 sshd-session[2580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:46.887567 systemd-logind[1807]: New session 9 of user core.
Jan 14 13:22:46.893322 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 14 13:22:47.213531 sudo[2584]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 13:22:47.213902 sudo[2584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:22:48.525032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:22:48.531347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:22:48.565086 systemd[1]: Reloading requested from client PID 2623 ('systemctl') (unit session-9.scope)...
Jan 14 13:22:48.565272 systemd[1]: Reloading...
Jan 14 13:22:48.666805 zram_generator::config[2658]: No configuration found.
Jan 14 13:22:48.825149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:22:48.902104 systemd[1]: Reloading finished in 336 ms.
Jan 14 13:22:48.951219 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 14 13:22:48.951325 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 14 13:22:48.951673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:22:48.954065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:22:49.171987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:22:49.175735 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 14 13:22:49.220125 kubelet[2744]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 13:22:49.220125 kubelet[2744]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 14 13:22:49.220125 kubelet[2744]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 13:22:49.220695 kubelet[2744]: I0114 13:22:49.220182 2744 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 14 13:22:49.840326 kubelet[2744]: I0114 13:22:49.840286 2744 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 14 13:22:49.840326 kubelet[2744]: I0114 13:22:49.840319 2744 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 14 13:22:49.840619 kubelet[2744]: I0114 13:22:49.840597 2744 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 14 13:22:49.860325 kubelet[2744]: I0114 13:22:49.860290 2744 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 13:22:49.872451 kubelet[2744]: I0114 13:22:49.872424 2744 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 14 13:22:49.873052 kubelet[2744]: I0114 13:22:49.872933 2744 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 14 13:22:49.873172 kubelet[2744]: I0114 13:22:49.873147 2744 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 14 13:22:49.873898 kubelet[2744]: I0114 13:22:49.873873 2744 topology_manager.go:138] "Creating topology manager with none policy"
Jan 14 13:22:49.873898 kubelet[2744]: I0114 13:22:49.873902 2744 container_manager_linux.go:301] "Creating device plugin manager"
Jan 14 13:22:49.874054 kubelet[2744]: I0114 13:22:49.874035 2744 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:22:49.874164 kubelet[2744]: I0114 13:22:49.874151 2744 kubelet.go:396] "Attempting to sync node with API server"
Jan 14 13:22:49.874214 kubelet[2744]: I0114 13:22:49.874173 2744 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 13:22:49.874214 kubelet[2744]: I0114 13:22:49.874204 2744 kubelet.go:312] "Adding apiserver pod source"
Jan 14 13:22:49.874290 kubelet[2744]: I0114 13:22:49.874219 2744 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 13:22:49.874658 kubelet[2744]: E0114 13:22:49.874629 2744 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:49.875030 kubelet[2744]: E0114 13:22:49.874978 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:49.875597 kubelet[2744]: I0114 13:22:49.875577 2744 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 14 13:22:49.879370 kubelet[2744]: I0114 13:22:49.878860 2744 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 14 13:22:49.879370 kubelet[2744]: W0114 13:22:49.878938 2744 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 14 13:22:49.879645 kubelet[2744]: I0114 13:22:49.879609 2744 server.go:1256] "Started kubelet"
Jan 14 13:22:49.879950 kubelet[2744]: I0114 13:22:49.879935 2744 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 13:22:49.881003 kubelet[2744]: I0114 13:22:49.880986 2744 server.go:461] "Adding debug handlers to kubelet server"
Jan 14 13:22:49.883566 kubelet[2744]: I0114 13:22:49.883547 2744 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 13:22:49.884910 kubelet[2744]: I0114 13:22:49.884894 2744 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 13:22:49.885192 kubelet[2744]: I0114 13:22:49.885177 2744 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 13:22:49.887179 kubelet[2744]: W0114 13:22:49.887162 2744 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.200.4.33" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 14 13:22:49.887303 kubelet[2744]: E0114 13:22:49.887292 2744 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.4.33" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 14 13:22:49.887539 kubelet[2744]: W0114 13:22:49.887522 2744 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 14 13:22:49.887631 kubelet[2744]: E0114 13:22:49.887620 2744 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 14 13:22:49.890880 kubelet[2744]: E0114 13:22:49.890861 2744 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.4.33.181a91d98de9ab32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.4.33,UID:10.200.4.33,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.4.33,},FirstTimestamp:2025-01-14 13:22:49.879579442 +0000 UTC m=+0.699770604,LastTimestamp:2025-01-14 13:22:49.879579442 +0000 UTC m=+0.699770604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.4.33,}"
Jan 14 13:22:49.893795 kubelet[2744]: I0114 13:22:49.893018 2744 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 14 13:22:49.897897 kubelet[2744]: I0114 13:22:49.897874 2744 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 14 13:22:49.897988 kubelet[2744]: I0114 13:22:49.897947 2744 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 14 13:22:49.898499 kubelet[2744]: E0114 13:22:49.898482 2744 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 13:22:49.899047 kubelet[2744]: I0114 13:22:49.899033 2744 factory.go:221] Registration of the containerd container factory successfully
Jan 14 13:22:49.899160 kubelet[2744]: I0114 13:22:49.899150 2744 factory.go:221] Registration of the systemd container factory successfully
Jan 14 13:22:49.899328 kubelet[2744]: I0114 13:22:49.899306 2744 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 13:22:49.904530 kubelet[2744]: E0114 13:22:49.904509 2744 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.4.33\" not found" node="10.200.4.33"
Jan 14 13:22:49.926353 kubelet[2744]: I0114 13:22:49.926333 2744 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 14 13:22:49.926458 kubelet[2744]: I0114 13:22:49.926427 2744 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 14 13:22:49.926458 kubelet[2744]: I0114 13:22:49.926447 2744 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:22:49.931312 kubelet[2744]: I0114 13:22:49.931284 2744 policy_none.go:49] "None policy: Start"
Jan 14 13:22:49.931861 kubelet[2744]: I0114 13:22:49.931834 2744 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 14 13:22:49.931861 kubelet[2744]: I0114 13:22:49.931862 2744 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 13:22:49.944008 kubelet[2744]: I0114 13:22:49.943387 2744 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 14 13:22:49.944008 kubelet[2744]: I0114 13:22:49.943658 2744 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 13:22:49.944721 kubelet[2744]: I0114 13:22:49.944700 2744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 14 13:22:49.946050 kubelet[2744]: I0114 13:22:49.946036 2744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 14 13:22:49.946132 kubelet[2744]: I0114 13:22:49.946125 2744 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 14 13:22:49.946182 kubelet[2744]: I0114 13:22:49.946177 2744 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 14 13:22:49.946302 kubelet[2744]: E0114 13:22:49.946294 2744 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 14 13:22:49.955261 kubelet[2744]: E0114 13:22:49.955248 2744 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.4.33\" not found"
Jan 14 13:22:49.994048 kubelet[2744]: I0114 13:22:49.994012 2744 kubelet_node_status.go:73] "Attempting to register node" node="10.200.4.33"
Jan 14 13:22:50.032597 kubelet[2744]: I0114 13:22:50.032547 2744 kubelet_node_status.go:76] "Successfully registered node" node="10.200.4.33"
Jan 14 13:22:50.181889 kubelet[2744]: E0114 13:22:50.181830 2744 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.33\" not found"
Jan 14 13:22:50.282312 kubelet[2744]: E0114 13:22:50.282242 2744 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.33\" not found"
Jan 14 13:22:50.383087 kubelet[2744]: E0114 13:22:50.383018 2744 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.33\" not found"
Jan 14 13:22:50.483890 kubelet[2744]: E0114 13:22:50.483729 2744 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.33\" not found"
Jan 14 13:22:50.584430 kubelet[2744]: E0114 13:22:50.584368 2744 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.33\" not found"
Jan 14 13:22:50.687525 kubelet[2744]: E0114 13:22:50.686860 2744 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.33\" not found"
Jan 14 13:22:50.698000 sudo[2584]: pam_unix(sudo:session): session closed for user root
Jan 14 13:22:50.787195 kubelet[2744]: E0114 13:22:50.787059 2744 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.33\" not found"
Jan 14 13:22:50.793750 sshd[2583]: Connection closed by 10.200.16.10 port 46888
Jan 14 13:22:50.794633 sshd-session[2580]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:50.798457 systemd[1]: sshd@6-10.200.4.33:22-10.200.16.10:46888.service: Deactivated successfully.
Jan 14 13:22:50.803175 systemd-logind[1807]: Session 9 logged out. Waiting for processes to exit.
Jan 14 13:22:50.803882 systemd[1]: session-9.scope: Deactivated successfully.
Jan 14 13:22:50.805197 systemd-logind[1807]: Removed session 9.
Jan 14 13:22:50.851843 kubelet[2744]: I0114 13:22:50.851770 2744 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 14 13:22:50.852198 kubelet[2744]: W0114 13:22:50.852175 2744 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Jan 14 13:22:50.852286 kubelet[2744]: W0114 13:22:50.852179 2744 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Jan 14 13:22:50.875244 kubelet[2744]: E0114 13:22:50.875208 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:50.887374 kubelet[2744]: E0114 13:22:50.887335 2744 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.33\" not found"
Jan 14 13:22:50.987979 kubelet[2744]: E0114 13:22:50.987932 2744 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.33\" not found"
Jan 14 13:22:51.088889 kubelet[2744]: E0114 13:22:51.088608 2744 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.4.33\" not found"
Jan 14 13:22:51.190091 kubelet[2744]: I0114 13:22:51.190052 2744 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 14 13:22:51.190471 containerd[1829]: time="2025-01-14T13:22:51.190430782Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 14 13:22:51.191131 kubelet[2744]: I0114 13:22:51.190693 2744 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 14 13:22:51.875740 kubelet[2744]: E0114 13:22:51.875650 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:22:51.875740 kubelet[2744]: I0114 13:22:51.875653 2744 apiserver.go:52] "Watching apiserver"
Jan 14 13:22:51.881256 kubelet[2744]: I0114 13:22:51.881218 2744 topology_manager.go:215] "Topology Admit Handler" podUID="9859f1f6-c03e-4a08-9b5a-257d230864ac" podNamespace="calico-system" podName="calico-node-9cv7l"
Jan 14 13:22:51.881383 kubelet[2744]: I0114 13:22:51.881345 2744 topology_manager.go:215] "Topology Admit Handler" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" podNamespace="calico-system" podName="csi-node-driver-hq5z2"
Jan 14 13:22:51.881427 kubelet[2744]: I0114 13:22:51.881406 2744 topology_manager.go:215] "Topology Admit Handler" podUID="230ad2d4-f03e-40d3-a79e-6b5854b7d95b" podNamespace="kube-system" podName="kube-proxy-g9kxh"
Jan 14 13:22:51.883352 kubelet[2744]: E0114 13:22:51.881638 2744 pod_workers.go:1298] "Error syncing pod, skipping"
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:22:51.899212 kubelet[2744]: I0114 13:22:51.899183 2744 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 14 13:22:51.911619 kubelet[2744]: I0114 13:22:51.911594 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j2lg\" (UniqueName: \"kubernetes.io/projected/a70e7d33-f96f-4604-b940-93eea95840a3-kube-api-access-9j2lg\") pod \"csi-node-driver-hq5z2\" (UID: \"a70e7d33-f96f-4604-b940-93eea95840a3\") " pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:22:51.911759 kubelet[2744]: I0114 13:22:51.911632 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/230ad2d4-f03e-40d3-a79e-6b5854b7d95b-lib-modules\") pod \"kube-proxy-g9kxh\" (UID: \"230ad2d4-f03e-40d3-a79e-6b5854b7d95b\") " pod="kube-system/kube-proxy-g9kxh" Jan 14 13:22:51.911759 kubelet[2744]: I0114 13:22:51.911660 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9859f1f6-c03e-4a08-9b5a-257d230864ac-tigera-ca-bundle\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.911759 kubelet[2744]: I0114 13:22:51.911683 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/230ad2d4-f03e-40d3-a79e-6b5854b7d95b-kube-proxy\") pod \"kube-proxy-g9kxh\" (UID: \"230ad2d4-f03e-40d3-a79e-6b5854b7d95b\") " pod="kube-system/kube-proxy-g9kxh" Jan 14 13:22:51.911759 
kubelet[2744]: I0114 13:22:51.911707 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9859f1f6-c03e-4a08-9b5a-257d230864ac-xtables-lock\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.911759 kubelet[2744]: I0114 13:22:51.911731 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9859f1f6-c03e-4a08-9b5a-257d230864ac-var-lib-calico\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.911994 kubelet[2744]: I0114 13:22:51.911754 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9859f1f6-c03e-4a08-9b5a-257d230864ac-cni-bin-dir\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.911994 kubelet[2744]: I0114 13:22:51.911796 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a70e7d33-f96f-4604-b940-93eea95840a3-varrun\") pod \"csi-node-driver-hq5z2\" (UID: \"a70e7d33-f96f-4604-b940-93eea95840a3\") " pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:22:51.911994 kubelet[2744]: I0114 13:22:51.911828 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a70e7d33-f96f-4604-b940-93eea95840a3-registration-dir\") pod \"csi-node-driver-hq5z2\" (UID: \"a70e7d33-f96f-4604-b940-93eea95840a3\") " pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:22:51.911994 kubelet[2744]: I0114 13:22:51.911857 2744 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9859f1f6-c03e-4a08-9b5a-257d230864ac-lib-modules\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.911994 kubelet[2744]: I0114 13:22:51.911883 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9859f1f6-c03e-4a08-9b5a-257d230864ac-policysync\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.912189 kubelet[2744]: I0114 13:22:51.911921 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9859f1f6-c03e-4a08-9b5a-257d230864ac-var-run-calico\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.912189 kubelet[2744]: I0114 13:22:51.911950 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9859f1f6-c03e-4a08-9b5a-257d230864ac-cni-net-dir\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.912189 kubelet[2744]: I0114 13:22:51.911979 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9859f1f6-c03e-4a08-9b5a-257d230864ac-flexvol-driver-host\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.912189 kubelet[2744]: I0114 13:22:51.912008 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-d6dkj\" (UniqueName: \"kubernetes.io/projected/9859f1f6-c03e-4a08-9b5a-257d230864ac-kube-api-access-d6dkj\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.912189 kubelet[2744]: I0114 13:22:51.912037 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a70e7d33-f96f-4604-b940-93eea95840a3-kubelet-dir\") pod \"csi-node-driver-hq5z2\" (UID: \"a70e7d33-f96f-4604-b940-93eea95840a3\") " pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:22:51.912380 kubelet[2744]: I0114 13:22:51.912065 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a70e7d33-f96f-4604-b940-93eea95840a3-socket-dir\") pod \"csi-node-driver-hq5z2\" (UID: \"a70e7d33-f96f-4604-b940-93eea95840a3\") " pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:22:51.912380 kubelet[2744]: I0114 13:22:51.912093 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/230ad2d4-f03e-40d3-a79e-6b5854b7d95b-xtables-lock\") pod \"kube-proxy-g9kxh\" (UID: \"230ad2d4-f03e-40d3-a79e-6b5854b7d95b\") " pod="kube-system/kube-proxy-g9kxh" Jan 14 13:22:51.912380 kubelet[2744]: I0114 13:22:51.912124 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkk5b\" (UniqueName: \"kubernetes.io/projected/230ad2d4-f03e-40d3-a79e-6b5854b7d95b-kube-api-access-lkk5b\") pod \"kube-proxy-g9kxh\" (UID: \"230ad2d4-f03e-40d3-a79e-6b5854b7d95b\") " pod="kube-system/kube-proxy-g9kxh" Jan 14 13:22:51.912380 kubelet[2744]: I0114 13:22:51.912151 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/9859f1f6-c03e-4a08-9b5a-257d230864ac-node-certs\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:51.912380 kubelet[2744]: I0114 13:22:51.912179 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9859f1f6-c03e-4a08-9b5a-257d230864ac-cni-log-dir\") pod \"calico-node-9cv7l\" (UID: \"9859f1f6-c03e-4a08-9b5a-257d230864ac\") " pod="calico-system/calico-node-9cv7l" Jan 14 13:22:52.016160 kubelet[2744]: E0114 13:22:52.016126 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.016160 kubelet[2744]: W0114 13:22:52.016150 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.016480 kubelet[2744]: E0114 13:22:52.016183 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.016480 kubelet[2744]: E0114 13:22:52.016414 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.016480 kubelet[2744]: W0114 13:22:52.016424 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.016480 kubelet[2744]: E0114 13:22:52.016458 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:52.016697 kubelet[2744]: E0114 13:22:52.016682 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.016697 kubelet[2744]: W0114 13:22:52.016693 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.016818 kubelet[2744]: E0114 13:22:52.016725 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.017481 kubelet[2744]: E0114 13:22:52.016991 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.017481 kubelet[2744]: W0114 13:22:52.017005 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.017481 kubelet[2744]: E0114 13:22:52.017023 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:52.017481 kubelet[2744]: E0114 13:22:52.017252 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.017481 kubelet[2744]: W0114 13:22:52.017263 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.018115 kubelet[2744]: E0114 13:22:52.017910 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.018201 kubelet[2744]: E0114 13:22:52.018131 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.018201 kubelet[2744]: W0114 13:22:52.018143 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.018326 kubelet[2744]: E0114 13:22:52.018234 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:52.019188 kubelet[2744]: E0114 13:22:52.018985 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.019188 kubelet[2744]: W0114 13:22:52.019007 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.022887 kubelet[2744]: E0114 13:22:52.022843 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.022887 kubelet[2744]: W0114 13:22:52.022860 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.026924 kubelet[2744]: E0114 13:22:52.023075 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.026924 kubelet[2744]: W0114 13:22:52.023089 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.026924 kubelet[2744]: E0114 13:22:52.023393 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.026924 kubelet[2744]: W0114 13:22:52.023404 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.026924 kubelet[2744]: E0114 13:22:52.023971 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Jan 14 13:22:52.026924 kubelet[2744]: W0114 13:22:52.023983 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.026924 kubelet[2744]: E0114 13:22:52.023999 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.026924 kubelet[2744]: E0114 13:22:52.024043 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.026924 kubelet[2744]: E0114 13:22:52.024911 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.026924 kubelet[2744]: E0114 13:22:52.024942 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.027345 kubelet[2744]: E0114 13:22:52.024989 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:52.027345 kubelet[2744]: E0114 13:22:52.025040 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.027345 kubelet[2744]: W0114 13:22:52.025048 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.027345 kubelet[2744]: E0114 13:22:52.025071 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.027345 kubelet[2744]: E0114 13:22:52.025382 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.027345 kubelet[2744]: W0114 13:22:52.025470 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.027345 kubelet[2744]: E0114 13:22:52.025487 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:52.027345 kubelet[2744]: E0114 13:22:52.025844 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.027345 kubelet[2744]: W0114 13:22:52.025856 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.027345 kubelet[2744]: E0114 13:22:52.026103 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.027721 kubelet[2744]: W0114 13:22:52.026115 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.027721 kubelet[2744]: E0114 13:22:52.026130 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.027721 kubelet[2744]: E0114 13:22:52.026264 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:52.027721 kubelet[2744]: E0114 13:22:52.026422 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.027721 kubelet[2744]: W0114 13:22:52.026431 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.027721 kubelet[2744]: E0114 13:22:52.026647 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.027721 kubelet[2744]: W0114 13:22:52.026656 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.027721 kubelet[2744]: E0114 13:22:52.026672 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.027721 kubelet[2744]: E0114 13:22:52.026884 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:52.031246 kubelet[2744]: E0114 13:22:52.030932 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.031246 kubelet[2744]: W0114 13:22:52.030946 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.031246 kubelet[2744]: E0114 13:22:52.030968 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.031246 kubelet[2744]: E0114 13:22:52.031173 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.031246 kubelet[2744]: W0114 13:22:52.031185 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.031246 kubelet[2744]: E0114 13:22:52.031201 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:52.043834 kubelet[2744]: E0114 13:22:52.042526 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:52.043834 kubelet[2744]: W0114 13:22:52.042544 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:52.043834 kubelet[2744]: E0114 13:22:52.042561 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:52.187494 containerd[1829]: time="2025-01-14T13:22:52.187426023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9cv7l,Uid:9859f1f6-c03e-4a08-9b5a-257d230864ac,Namespace:calico-system,Attempt:0,}" Jan 14 13:22:52.187898 containerd[1829]: time="2025-01-14T13:22:52.187442223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g9kxh,Uid:230ad2d4-f03e-40d3-a79e-6b5854b7d95b,Namespace:kube-system,Attempt:0,}" Jan 14 13:22:52.876366 kubelet[2744]: E0114 13:22:52.876308 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:52.949102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001207858.mount: Deactivated successfully. 
Jan 14 13:22:52.972625 containerd[1829]: time="2025-01-14T13:22:52.972579399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:22:52.983212 containerd[1829]: time="2025-01-14T13:22:52.983131666Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 14 13:22:52.985834 containerd[1829]: time="2025-01-14T13:22:52.985759908Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:22:52.988562 containerd[1829]: time="2025-01-14T13:22:52.988524752Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:22:52.991497 containerd[1829]: time="2025-01-14T13:22:52.991447498Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:22:52.994428 containerd[1829]: time="2025-01-14T13:22:52.994377445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:22:52.995581 containerd[1829]: time="2025-01-14T13:22:52.995290760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 807.48473ms" Jan 14 13:22:53.003141 containerd[1829]: 
time="2025-01-14T13:22:53.003109984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 815.556459ms" Jan 14 13:22:53.729310 containerd[1829]: time="2025-01-14T13:22:53.726007670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:22:53.729310 containerd[1829]: time="2025-01-14T13:22:53.729176921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:22:53.729310 containerd[1829]: time="2025-01-14T13:22:53.729204121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:53.729933 containerd[1829]: time="2025-01-14T13:22:53.729558927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:22:53.729933 containerd[1829]: time="2025-01-14T13:22:53.729894632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:22:53.730061 containerd[1829]: time="2025-01-14T13:22:53.729930333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:53.730061 containerd[1829]: time="2025-01-14T13:22:53.729649728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:53.730219 containerd[1829]: time="2025-01-14T13:22:53.730082535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:53.876662 kubelet[2744]: E0114 13:22:53.876607 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:53.948058 kubelet[2744]: E0114 13:22:53.947600 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:22:54.234169 systemd[1]: run-containerd-runc-k8s.io-374b727ed26da4a1f3731837fb171cb188a99b42e37d9573a8c0419ee8c386f8-runc.Ktj3Lw.mount: Deactivated successfully. Jan 14 13:22:54.274426 containerd[1829]: time="2025-01-14T13:22:54.274064978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g9kxh,Uid:230ad2d4-f03e-40d3-a79e-6b5854b7d95b,Namespace:kube-system,Attempt:0,} returns sandbox id \"374b727ed26da4a1f3731837fb171cb188a99b42e37d9573a8c0419ee8c386f8\"" Jan 14 13:22:54.277542 containerd[1829]: time="2025-01-14T13:22:54.277460332Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 14 13:22:54.277970 containerd[1829]: time="2025-01-14T13:22:54.277935640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9cv7l,Uid:9859f1f6-c03e-4a08-9b5a-257d230864ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"f886007c916c9f4b85b15f2c0cbd13d292faa29292c9068f3cc1ebe444863d74\"" Jan 14 13:22:54.877727 kubelet[2744]: E0114 13:22:54.877601 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:55.530316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295138823.mount: Deactivated successfully. 
Jan 14 13:22:55.878683 kubelet[2744]: E0114 13:22:55.878549 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:55.948713 kubelet[2744]: E0114 13:22:55.948512 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:22:55.989464 containerd[1829]: time="2025-01-14T13:22:55.989413187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:55.991549 containerd[1829]: time="2025-01-14T13:22:55.991487934Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Jan 14 13:22:55.994061 containerd[1829]: time="2025-01-14T13:22:55.994012290Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:55.998696 containerd[1829]: time="2025-01-14T13:22:55.997962079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:55.998696 containerd[1829]: time="2025-01-14T13:22:55.998533392Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.721035659s" Jan 14 13:22:55.998696 containerd[1829]: 
time="2025-01-14T13:22:55.998564693Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 14 13:22:55.999880 containerd[1829]: time="2025-01-14T13:22:55.999855122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 14 13:22:56.000957 containerd[1829]: time="2025-01-14T13:22:56.000927246Z" level=info msg="CreateContainer within sandbox \"374b727ed26da4a1f3731837fb171cb188a99b42e37d9573a8c0419ee8c386f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 13:22:56.028494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941616315.mount: Deactivated successfully. Jan 14 13:22:56.035999 containerd[1829]: time="2025-01-14T13:22:56.035961133Z" level=info msg="CreateContainer within sandbox \"374b727ed26da4a1f3731837fb171cb188a99b42e37d9573a8c0419ee8c386f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e5c63efe7ed940c2638d100200489c7980a974b4c0de3434124e5a647d946367\"" Jan 14 13:22:56.036698 containerd[1829]: time="2025-01-14T13:22:56.036621248Z" level=info msg="StartContainer for \"e5c63efe7ed940c2638d100200489c7980a974b4c0de3434124e5a647d946367\"" Jan 14 13:22:56.101088 containerd[1829]: time="2025-01-14T13:22:56.101044396Z" level=info msg="StartContainer for \"e5c63efe7ed940c2638d100200489c7980a974b4c0de3434124e5a647d946367\" returns successfully" Jan 14 13:22:56.879448 kubelet[2744]: E0114 13:22:56.879404 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:56.982331 kubelet[2744]: I0114 13:22:56.982110 2744 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-g9kxh" podStartSLOduration=5.25967551 podStartE2EDuration="6.982039694s" podCreationTimestamp="2025-01-14 13:22:50 +0000 UTC" firstStartedPulling="2025-01-14 13:22:54.276839423 +0000 
UTC m=+5.097030585" lastFinishedPulling="2025-01-14 13:22:55.999203607 +0000 UTC m=+6.819394769" observedRunningTime="2025-01-14 13:22:56.981693386 +0000 UTC m=+7.801884648" watchObservedRunningTime="2025-01-14 13:22:56.982039694 +0000 UTC m=+7.802230956" Jan 14 13:22:57.038294 kubelet[2744]: E0114 13:22:57.038249 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.038294 kubelet[2744]: W0114 13:22:57.038281 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.038570 kubelet[2744]: E0114 13:22:57.038317 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.038665 kubelet[2744]: E0114 13:22:57.038641 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.038665 kubelet[2744]: W0114 13:22:57.038660 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.038838 kubelet[2744]: E0114 13:22:57.038682 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.038967 kubelet[2744]: E0114 13:22:57.038944 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.038967 kubelet[2744]: W0114 13:22:57.038963 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.039111 kubelet[2744]: E0114 13:22:57.038984 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.039231 kubelet[2744]: E0114 13:22:57.039214 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.039231 kubelet[2744]: W0114 13:22:57.039228 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.039351 kubelet[2744]: E0114 13:22:57.039248 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.039494 kubelet[2744]: E0114 13:22:57.039477 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.039494 kubelet[2744]: W0114 13:22:57.039492 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.039631 kubelet[2744]: E0114 13:22:57.039510 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.039743 kubelet[2744]: E0114 13:22:57.039727 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.039743 kubelet[2744]: W0114 13:22:57.039741 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.039903 kubelet[2744]: E0114 13:22:57.039760 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.040007 kubelet[2744]: E0114 13:22:57.039986 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.040007 kubelet[2744]: W0114 13:22:57.040003 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.040147 kubelet[2744]: E0114 13:22:57.040022 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.040259 kubelet[2744]: E0114 13:22:57.040242 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.040259 kubelet[2744]: W0114 13:22:57.040256 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.040389 kubelet[2744]: E0114 13:22:57.040273 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.040513 kubelet[2744]: E0114 13:22:57.040497 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.040513 kubelet[2744]: W0114 13:22:57.040510 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.040652 kubelet[2744]: E0114 13:22:57.040528 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.040755 kubelet[2744]: E0114 13:22:57.040738 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.040755 kubelet[2744]: W0114 13:22:57.040751 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.040913 kubelet[2744]: E0114 13:22:57.040769 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.041024 kubelet[2744]: E0114 13:22:57.041008 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.041024 kubelet[2744]: W0114 13:22:57.041021 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.041166 kubelet[2744]: E0114 13:22:57.041040 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.041270 kubelet[2744]: E0114 13:22:57.041251 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.041270 kubelet[2744]: W0114 13:22:57.041266 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.041384 kubelet[2744]: E0114 13:22:57.041288 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.041525 kubelet[2744]: E0114 13:22:57.041508 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.041525 kubelet[2744]: W0114 13:22:57.041522 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.041664 kubelet[2744]: E0114 13:22:57.041541 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.041809 kubelet[2744]: E0114 13:22:57.041757 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.041891 kubelet[2744]: W0114 13:22:57.041830 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.041891 kubelet[2744]: E0114 13:22:57.041855 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.042170 kubelet[2744]: E0114 13:22:57.042149 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.042170 kubelet[2744]: W0114 13:22:57.042165 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.042331 kubelet[2744]: E0114 13:22:57.042187 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.042451 kubelet[2744]: E0114 13:22:57.042434 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.042451 kubelet[2744]: W0114 13:22:57.042447 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.042560 kubelet[2744]: E0114 13:22:57.042467 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.042700 kubelet[2744]: E0114 13:22:57.042684 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.042700 kubelet[2744]: W0114 13:22:57.042697 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.042864 kubelet[2744]: E0114 13:22:57.042716 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.042995 kubelet[2744]: E0114 13:22:57.042966 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.042995 kubelet[2744]: W0114 13:22:57.042989 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.043126 kubelet[2744]: E0114 13:22:57.043009 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.043238 kubelet[2744]: E0114 13:22:57.043221 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.043238 kubelet[2744]: W0114 13:22:57.043236 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.043384 kubelet[2744]: E0114 13:22:57.043254 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.043487 kubelet[2744]: E0114 13:22:57.043466 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.043487 kubelet[2744]: W0114 13:22:57.043482 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.043581 kubelet[2744]: E0114 13:22:57.043501 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.046849 kubelet[2744]: E0114 13:22:57.046832 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.046849 kubelet[2744]: W0114 13:22:57.046845 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.047119 kubelet[2744]: E0114 13:22:57.046862 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.047119 kubelet[2744]: E0114 13:22:57.047098 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.047119 kubelet[2744]: W0114 13:22:57.047109 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.047119 kubelet[2744]: E0114 13:22:57.047131 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.047354 kubelet[2744]: E0114 13:22:57.047345 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.047406 kubelet[2744]: W0114 13:22:57.047357 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.047406 kubelet[2744]: E0114 13:22:57.047378 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.047580 kubelet[2744]: E0114 13:22:57.047563 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.047580 kubelet[2744]: W0114 13:22:57.047575 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.047721 kubelet[2744]: E0114 13:22:57.047595 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.047792 kubelet[2744]: E0114 13:22:57.047765 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.047792 kubelet[2744]: W0114 13:22:57.047789 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.047907 kubelet[2744]: E0114 13:22:57.047812 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.048035 kubelet[2744]: E0114 13:22:57.048021 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.048035 kubelet[2744]: W0114 13:22:57.048033 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.048122 kubelet[2744]: E0114 13:22:57.048060 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.048376 kubelet[2744]: E0114 13:22:57.048295 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.048376 kubelet[2744]: W0114 13:22:57.048310 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.048376 kubelet[2744]: E0114 13:22:57.048331 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.048534 kubelet[2744]: E0114 13:22:57.048516 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.048534 kubelet[2744]: W0114 13:22:57.048526 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.048610 kubelet[2744]: E0114 13:22:57.048548 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.048764 kubelet[2744]: E0114 13:22:57.048749 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.048764 kubelet[2744]: W0114 13:22:57.048761 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.048891 kubelet[2744]: E0114 13:22:57.048794 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.049052 kubelet[2744]: E0114 13:22:57.049037 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.049052 kubelet[2744]: W0114 13:22:57.049050 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.049146 kubelet[2744]: E0114 13:22:57.049080 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:22:57.049447 kubelet[2744]: E0114 13:22:57.049431 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.049447 kubelet[2744]: W0114 13:22:57.049443 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.049565 kubelet[2744]: E0114 13:22:57.049463 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.049697 kubelet[2744]: E0114 13:22:57.049682 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:22:57.049697 kubelet[2744]: W0114 13:22:57.049695 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:22:57.049803 kubelet[2744]: E0114 13:22:57.049710 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:22:57.240895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount963819570.mount: Deactivated successfully. 
Jan 14 13:22:57.386276 containerd[1829]: time="2025-01-14T13:22:57.385635464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:57.388964 containerd[1829]: time="2025-01-14T13:22:57.388916038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 14 13:22:57.391714 containerd[1829]: time="2025-01-14T13:22:57.391663299Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:57.395163 containerd[1829]: time="2025-01-14T13:22:57.395110077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:22:57.395888 containerd[1829]: time="2025-01-14T13:22:57.395726291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.395833268s" Jan 14 13:22:57.395888 containerd[1829]: time="2025-01-14T13:22:57.395763692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 14 13:22:57.398008 containerd[1829]: time="2025-01-14T13:22:57.397978141Z" level=info msg="CreateContainer within sandbox \"f886007c916c9f4b85b15f2c0cbd13d292faa29292c9068f3cc1ebe444863d74\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 
13:22:57.432598 containerd[1829]: time="2025-01-14T13:22:57.432545218Z" level=info msg="CreateContainer within sandbox \"f886007c916c9f4b85b15f2c0cbd13d292faa29292c9068f3cc1ebe444863d74\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4f97b1a4ca97d6e9286bb2152fa4b75229abcc9d9db605d4f695461f6a437262\"" Jan 14 13:22:57.433272 containerd[1829]: time="2025-01-14T13:22:57.433244334Z" level=info msg="StartContainer for \"4f97b1a4ca97d6e9286bb2152fa4b75229abcc9d9db605d4f695461f6a437262\"" Jan 14 13:22:57.506700 containerd[1829]: time="2025-01-14T13:22:57.505889266Z" level=info msg="StartContainer for \"4f97b1a4ca97d6e9286bb2152fa4b75229abcc9d9db605d4f695461f6a437262\" returns successfully" Jan 14 13:22:58.415971 kubelet[2744]: E0114 13:22:57.880071 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:58.415971 kubelet[2744]: E0114 13:22:57.947312 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:22:58.203955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f97b1a4ca97d6e9286bb2152fa4b75229abcc9d9db605d4f695461f6a437262-rootfs.mount: Deactivated successfully. 
Jan 14 13:22:58.434983 containerd[1829]: time="2025-01-14T13:22:58.434908640Z" level=info msg="shim disconnected" id=4f97b1a4ca97d6e9286bb2152fa4b75229abcc9d9db605d4f695461f6a437262 namespace=k8s.io Jan 14 13:22:58.434983 containerd[1829]: time="2025-01-14T13:22:58.434976142Z" level=warning msg="cleaning up after shim disconnected" id=4f97b1a4ca97d6e9286bb2152fa4b75229abcc9d9db605d4f695461f6a437262 namespace=k8s.io Jan 14 13:22:58.434983 containerd[1829]: time="2025-01-14T13:22:58.434986542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:22:58.881103 kubelet[2744]: E0114 13:22:58.880950 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:58.977110 containerd[1829]: time="2025-01-14T13:22:58.977060112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 14 13:22:59.881208 kubelet[2744]: E0114 13:22:59.881164 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:22:59.947048 kubelet[2744]: E0114 13:22:59.947006 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:00.881376 kubelet[2744]: E0114 13:23:00.881328 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:01.882654 kubelet[2744]: E0114 13:23:01.882588 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:01.948654 kubelet[2744]: E0114 13:23:01.948103 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:02.878416 containerd[1829]: time="2025-01-14T13:23:02.878358856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:02.880267 containerd[1829]: time="2025-01-14T13:23:02.880210595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 14 13:23:02.882337 containerd[1829]: time="2025-01-14T13:23:02.882275740Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:02.882878 kubelet[2744]: E0114 13:23:02.882787 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:02.885744 containerd[1829]: time="2025-01-14T13:23:02.885690614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:02.886903 containerd[1829]: time="2025-01-14T13:23:02.886351228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.909240515s" Jan 14 13:23:02.886903 containerd[1829]: time="2025-01-14T13:23:02.886386629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 14 13:23:02.888246 containerd[1829]: time="2025-01-14T13:23:02.888219668Z" level=info msg="CreateContainer within sandbox \"f886007c916c9f4b85b15f2c0cbd13d292faa29292c9068f3cc1ebe444863d74\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 13:23:02.922977 containerd[1829]: time="2025-01-14T13:23:02.922926516Z" level=info msg="CreateContainer within sandbox \"f886007c916c9f4b85b15f2c0cbd13d292faa29292c9068f3cc1ebe444863d74\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cf007e1871ba0f8884db3bbf2829ea5344e2259372f4f79cd38695ec9a6fc60d\"" Jan 14 13:23:02.923701 containerd[1829]: time="2025-01-14T13:23:02.923674532Z" level=info msg="StartContainer for \"cf007e1871ba0f8884db3bbf2829ea5344e2259372f4f79cd38695ec9a6fc60d\"" Jan 14 13:23:02.992165 containerd[1829]: time="2025-01-14T13:23:02.991870602Z" level=info msg="StartContainer for \"cf007e1871ba0f8884db3bbf2829ea5344e2259372f4f79cd38695ec9a6fc60d\" returns successfully" Jan 14 13:23:03.884012 kubelet[2744]: E0114 13:23:03.883955 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:03.947076 kubelet[2744]: E0114 13:23:03.946640 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:04.393883 containerd[1829]: time="2025-01-14T13:23:04.393642820Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:23:04.421680 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-cf007e1871ba0f8884db3bbf2829ea5344e2259372f4f79cd38695ec9a6fc60d-rootfs.mount: Deactivated successfully. Jan 14 13:23:04.458803 kubelet[2744]: I0114 13:23:04.458592 2744 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 14 13:23:04.884760 kubelet[2744]: E0114 13:23:04.884696 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:06.062707 kubelet[2744]: E0114 13:23:05.885904 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:06.066570 containerd[1829]: time="2025-01-14T13:23:06.066525474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:0,}" Jan 14 13:23:06.085462 containerd[1829]: time="2025-01-14T13:23:06.085340169Z" level=info msg="shim disconnected" id=cf007e1871ba0f8884db3bbf2829ea5344e2259372f4f79cd38695ec9a6fc60d namespace=k8s.io Jan 14 13:23:06.085462 containerd[1829]: time="2025-01-14T13:23:06.085401570Z" level=warning msg="cleaning up after shim disconnected" id=cf007e1871ba0f8884db3bbf2829ea5344e2259372f4f79cd38695ec9a6fc60d namespace=k8s.io Jan 14 13:23:06.085462 containerd[1829]: time="2025-01-14T13:23:06.085414471Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:23:06.154813 containerd[1829]: time="2025-01-14T13:23:06.152330775Z" level=error msg="Failed to destroy network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:06.155381 containerd[1829]: time="2025-01-14T13:23:06.155208535Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:06.155381 containerd[1829]: time="2025-01-14T13:23:06.155278336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:06.155965 kubelet[2744]: E0114 13:23:06.155596 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:06.155965 kubelet[2744]: E0114 13:23:06.155661 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:06.155965 kubelet[2744]: E0114 13:23:06.155697 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:06.156195 kubelet[2744]: E0114 13:23:06.155749 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:06.156402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16-shm.mount: Deactivated successfully. 
Jan 14 13:23:06.886254 kubelet[2744]: E0114 13:23:06.886195 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:07.002475 kubelet[2744]: I0114 13:23:07.002439 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16" Jan 14 13:23:07.003804 containerd[1829]: time="2025-01-14T13:23:07.003320328Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:07.003804 containerd[1829]: time="2025-01-14T13:23:07.003606734Z" level=info msg="Ensure that sandbox b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16 in task-service has been cleanup successfully" Jan 14 13:23:07.004068 containerd[1829]: time="2025-01-14T13:23:07.004038143Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:07.004157 containerd[1829]: time="2025-01-14T13:23:07.004142646Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:07.006792 containerd[1829]: time="2025-01-14T13:23:07.005977184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:1,}" Jan 14 13:23:07.008070 systemd[1]: run-netns-cni\x2d3bd2514c\x2d16f9\x2d3954\x2dbd03\x2df54bed525a13.mount: Deactivated successfully. 
Jan 14 13:23:07.010577 containerd[1829]: time="2025-01-14T13:23:07.010499379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 14 13:23:07.092285 containerd[1829]: time="2025-01-14T13:23:07.092224794Z" level=error msg="Failed to destroy network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:07.095157 containerd[1829]: time="2025-01-14T13:23:07.095090354Z" level=error msg="encountered an error cleaning up failed sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:07.095292 containerd[1829]: time="2025-01-14T13:23:07.095192456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:07.095810 kubelet[2744]: E0114 13:23:07.095443 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:07.095810 
kubelet[2744]: E0114 13:23:07.095506 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:07.095810 kubelet[2744]: E0114 13:23:07.095535 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:07.096309 kubelet[2744]: E0114 13:23:07.095598 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:07.099984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0-shm.mount: Deactivated successfully. 
Jan 14 13:23:07.886949 kubelet[2744]: E0114 13:23:07.886889 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:08.015286 kubelet[2744]: I0114 13:23:08.015245 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0" Jan 14 13:23:08.016807 containerd[1829]: time="2025-01-14T13:23:08.015847171Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:08.016807 containerd[1829]: time="2025-01-14T13:23:08.016191879Z" level=info msg="Ensure that sandbox 75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0 in task-service has been cleanup successfully" Jan 14 13:23:08.016807 containerd[1829]: time="2025-01-14T13:23:08.016624288Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:08.016807 containerd[1829]: time="2025-01-14T13:23:08.016650088Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:08.019206 containerd[1829]: time="2025-01-14T13:23:08.019051039Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:08.019206 containerd[1829]: time="2025-01-14T13:23:08.019145241Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:08.019206 containerd[1829]: time="2025-01-14T13:23:08.019159641Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:08.019647 containerd[1829]: time="2025-01-14T13:23:08.019616250Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:2,}" Jan 14 13:23:08.021399 systemd[1]: run-netns-cni\x2df46a666c\x2dad00\x2dd67c\x2d5218\x2d16100c7d4c29.mount: Deactivated successfully. Jan 14 13:23:08.151815 containerd[1829]: time="2025-01-14T13:23:08.151658221Z" level=error msg="Failed to destroy network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.155875 containerd[1829]: time="2025-01-14T13:23:08.155066292Z" level=error msg="encountered an error cleaning up failed sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.155875 containerd[1829]: time="2025-01-14T13:23:08.155152694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.156056 kubelet[2744]: E0114 13:23:08.155428 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:08.156056 kubelet[2744]: E0114 13:23:08.155500 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:08.156056 kubelet[2744]: E0114 13:23:08.155529 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:08.156554 kubelet[2744]: E0114 13:23:08.155605 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:08.157091 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50-shm.mount: Deactivated successfully. Jan 14 13:23:08.887198 kubelet[2744]: E0114 13:23:08.887124 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:09.020090 kubelet[2744]: I0114 13:23:09.019314 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50" Jan 14 13:23:09.020270 containerd[1829]: time="2025-01-14T13:23:09.020197943Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:09.020515 containerd[1829]: time="2025-01-14T13:23:09.020486349Z" level=info msg="Ensure that sandbox 625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50 in task-service has been cleanup successfully" Jan 14 13:23:09.022792 containerd[1829]: time="2025-01-14T13:23:09.020770055Z" level=info msg="TearDown network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" successfully" Jan 14 13:23:09.022792 containerd[1829]: time="2025-01-14T13:23:09.020828456Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" returns successfully" Jan 14 13:23:09.023948 containerd[1829]: time="2025-01-14T13:23:09.023505912Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:09.023948 containerd[1829]: time="2025-01-14T13:23:09.023607814Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:09.023948 containerd[1829]: time="2025-01-14T13:23:09.023624315Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:09.024103 
systemd[1]: run-netns-cni\x2d92233b01\x2d7a90\x2d0ee8\x2dbb8f\x2dc620e2dac11d.mount: Deactivated successfully. Jan 14 13:23:09.026378 containerd[1829]: time="2025-01-14T13:23:09.024844840Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:09.026378 containerd[1829]: time="2025-01-14T13:23:09.024939842Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:09.026378 containerd[1829]: time="2025-01-14T13:23:09.024954443Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:09.026378 containerd[1829]: time="2025-01-14T13:23:09.025945463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:3,}" Jan 14 13:23:09.125250 containerd[1829]: time="2025-01-14T13:23:09.125199346Z" level=error msg="Failed to destroy network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.125839 containerd[1829]: time="2025-01-14T13:23:09.125754257Z" level=error msg="encountered an error cleaning up failed sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.128458 containerd[1829]: time="2025-01-14T13:23:09.125856460Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.128559 kubelet[2744]: E0114 13:23:09.127960 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:09.128559 kubelet[2744]: E0114 13:23:09.128033 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:09.128559 kubelet[2744]: E0114 13:23:09.128064 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:09.128708 kubelet[2744]: E0114 13:23:09.128134 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:09.129216 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e-shm.mount: Deactivated successfully. Jan 14 13:23:09.875187 kubelet[2744]: E0114 13:23:09.875138 2744 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:09.887386 kubelet[2744]: E0114 13:23:09.887316 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:10.023849 kubelet[2744]: I0114 13:23:10.023795 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e" Jan 14 13:23:10.024680 containerd[1829]: time="2025-01-14T13:23:10.024632016Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" Jan 14 13:23:10.025202 containerd[1829]: time="2025-01-14T13:23:10.024908322Z" level=info msg="Ensure that sandbox 941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e in task-service has been cleanup successfully" Jan 14 13:23:10.025202 containerd[1829]: time="2025-01-14T13:23:10.025163627Z" level=info msg="TearDown network for sandbox 
\"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" successfully" Jan 14 13:23:10.025202 containerd[1829]: time="2025-01-14T13:23:10.025201928Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" returns successfully" Jan 14 13:23:10.027977 systemd[1]: run-netns-cni\x2d2664dda0\x2d7c6e\x2d0b2e\x2d5eeb\x2df0b523e4bd26.mount: Deactivated successfully. Jan 14 13:23:10.029228 containerd[1829]: time="2025-01-14T13:23:10.028429696Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:10.029228 containerd[1829]: time="2025-01-14T13:23:10.028546398Z" level=info msg="TearDown network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" successfully" Jan 14 13:23:10.029228 containerd[1829]: time="2025-01-14T13:23:10.028606699Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" returns successfully" Jan 14 13:23:10.029228 containerd[1829]: time="2025-01-14T13:23:10.028980907Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:10.029228 containerd[1829]: time="2025-01-14T13:23:10.029084609Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:10.029228 containerd[1829]: time="2025-01-14T13:23:10.029098710Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:10.030531 containerd[1829]: time="2025-01-14T13:23:10.030350636Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:10.030531 containerd[1829]: time="2025-01-14T13:23:10.030444338Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" 
successfully" Jan 14 13:23:10.030531 containerd[1829]: time="2025-01-14T13:23:10.030461438Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:10.031245 containerd[1829]: time="2025-01-14T13:23:10.031214754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:4,}" Jan 14 13:23:10.133532 containerd[1829]: time="2025-01-14T13:23:10.133352897Z" level=error msg="Failed to destroy network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.136271 containerd[1829]: time="2025-01-14T13:23:10.136085054Z" level=error msg="encountered an error cleaning up failed sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.136271 containerd[1829]: time="2025-01-14T13:23:10.136170456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.137735 kubelet[2744]: E0114 13:23:10.136511 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:10.137735 kubelet[2744]: E0114 13:23:10.136585 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:10.137735 kubelet[2744]: E0114 13:23:10.136615 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:10.137141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d-shm.mount: Deactivated successfully. 
Jan 14 13:23:10.138045 kubelet[2744]: E0114 13:23:10.136700 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:10.887862 kubelet[2744]: E0114 13:23:10.887675 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:11.028995 kubelet[2744]: I0114 13:23:11.028908 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d" Jan 14 13:23:11.031102 containerd[1829]: time="2025-01-14T13:23:11.030609322Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\"" Jan 14 13:23:11.031102 containerd[1829]: time="2025-01-14T13:23:11.030966829Z" level=info msg="Ensure that sandbox 31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d in task-service has been cleanup successfully" Jan 14 13:23:11.036683 containerd[1829]: time="2025-01-14T13:23:11.035358921Z" level=info msg="TearDown network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" successfully" Jan 14 13:23:11.036683 containerd[1829]: time="2025-01-14T13:23:11.035411122Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" returns 
successfully" Jan 14 13:23:11.035965 systemd[1]: run-netns-cni\x2d0de57229\x2dc7be\x2d6195\x2da935\x2de366f74c5ae8.mount: Deactivated successfully. Jan 14 13:23:11.038015 containerd[1829]: time="2025-01-14T13:23:11.037656969Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" Jan 14 13:23:11.038384 containerd[1829]: time="2025-01-14T13:23:11.038230281Z" level=info msg="TearDown network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" successfully" Jan 14 13:23:11.038384 containerd[1829]: time="2025-01-14T13:23:11.038290683Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" returns successfully" Jan 14 13:23:11.038867 containerd[1829]: time="2025-01-14T13:23:11.038628490Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:11.038867 containerd[1829]: time="2025-01-14T13:23:11.038726792Z" level=info msg="TearDown network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" successfully" Jan 14 13:23:11.038867 containerd[1829]: time="2025-01-14T13:23:11.038739392Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" returns successfully" Jan 14 13:23:11.039208 containerd[1829]: time="2025-01-14T13:23:11.039074799Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:11.039208 containerd[1829]: time="2025-01-14T13:23:11.039159501Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:11.039477 containerd[1829]: time="2025-01-14T13:23:11.039255503Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:11.039569 containerd[1829]: 
time="2025-01-14T13:23:11.039537109Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:11.039636 containerd[1829]: time="2025-01-14T13:23:11.039619511Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:11.039682 containerd[1829]: time="2025-01-14T13:23:11.039636411Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:11.040399 containerd[1829]: time="2025-01-14T13:23:11.040362926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:5,}" Jan 14 13:23:11.559747 containerd[1829]: time="2025-01-14T13:23:11.559691622Z" level=error msg="Failed to destroy network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.563798 containerd[1829]: time="2025-01-14T13:23:11.563522202Z" level=error msg="encountered an error cleaning up failed sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.563798 containerd[1829]: time="2025-01-14T13:23:11.563606304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.564574 kubelet[2744]: E0114 13:23:11.564133 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:11.564574 kubelet[2744]: E0114 13:23:11.564196 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:11.564574 kubelet[2744]: E0114 13:23:11.564231 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:11.564870 kubelet[2744]: E0114 13:23:11.564308 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:11.564589 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527-shm.mount: Deactivated successfully. Jan 14 13:23:11.888935 kubelet[2744]: E0114 13:23:11.888789 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:11.901339 kubelet[2744]: I0114 13:23:11.899694 2744 topology_manager.go:215] "Topology Admit Handler" podUID="4ef18dc5-ee1e-49b7-82af-588d4979448e" podNamespace="default" podName="nginx-deployment-6d5f899847-cq5pv" Jan 14 13:23:11.924258 kubelet[2744]: I0114 13:23:11.924228 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf5nt\" (UniqueName: \"kubernetes.io/projected/4ef18dc5-ee1e-49b7-82af-588d4979448e-kube-api-access-sf5nt\") pod \"nginx-deployment-6d5f899847-cq5pv\" (UID: \"4ef18dc5-ee1e-49b7-82af-588d4979448e\") " pod="default/nginx-deployment-6d5f899847-cq5pv" Jan 14 13:23:12.037096 kubelet[2744]: I0114 13:23:12.037065 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527" Jan 14 13:23:12.037987 containerd[1829]: time="2025-01-14T13:23:12.037940156Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\"" Jan 14 13:23:12.038436 containerd[1829]: time="2025-01-14T13:23:12.038238862Z" level=info msg="Ensure that sandbox 
4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527 in task-service has been cleanup successfully" Jan 14 13:23:12.043556 containerd[1829]: time="2025-01-14T13:23:12.042407249Z" level=info msg="TearDown network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" successfully" Jan 14 13:23:12.043556 containerd[1829]: time="2025-01-14T13:23:12.042456350Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" returns successfully" Jan 14 13:23:12.043142 systemd[1]: run-netns-cni\x2def6c811c\x2d773c\x2d59c5\x2d6bb5\x2dbab0e07e5501.mount: Deactivated successfully. Jan 14 13:23:12.049401 containerd[1829]: time="2025-01-14T13:23:12.049077289Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\"" Jan 14 13:23:12.049401 containerd[1829]: time="2025-01-14T13:23:12.049180091Z" level=info msg="TearDown network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" successfully" Jan 14 13:23:12.049401 containerd[1829]: time="2025-01-14T13:23:12.049195492Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" returns successfully" Jan 14 13:23:12.049844 containerd[1829]: time="2025-01-14T13:23:12.049614200Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" Jan 14 13:23:12.049844 containerd[1829]: time="2025-01-14T13:23:12.049696302Z" level=info msg="TearDown network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" successfully" Jan 14 13:23:12.049844 containerd[1829]: time="2025-01-14T13:23:12.049710102Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" returns successfully" Jan 14 13:23:12.050234 containerd[1829]: time="2025-01-14T13:23:12.050214813Z" level=info msg="StopPodSandbox for 
\"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:12.050611 containerd[1829]: time="2025-01-14T13:23:12.050365816Z" level=info msg="TearDown network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" successfully" Jan 14 13:23:12.050611 containerd[1829]: time="2025-01-14T13:23:12.050381817Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" returns successfully" Jan 14 13:23:12.051045 containerd[1829]: time="2025-01-14T13:23:12.050889727Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:12.051045 containerd[1829]: time="2025-01-14T13:23:12.050976329Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:12.051045 containerd[1829]: time="2025-01-14T13:23:12.050989529Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:12.051637 containerd[1829]: time="2025-01-14T13:23:12.051447939Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:12.051637 containerd[1829]: time="2025-01-14T13:23:12.051535841Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:12.051637 containerd[1829]: time="2025-01-14T13:23:12.051550541Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:12.053901 containerd[1829]: time="2025-01-14T13:23:12.053395980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:6,}" Jan 14 13:23:12.178991 containerd[1829]: time="2025-01-14T13:23:12.178939114Z" 
level=error msg="Failed to destroy network for sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:12.180112 containerd[1829]: time="2025-01-14T13:23:12.180064737Z" level=error msg="encountered an error cleaning up failed sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:12.180222 containerd[1829]: time="2025-01-14T13:23:12.180156239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:12.182818 kubelet[2744]: E0114 13:23:12.180436 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:12.182818 kubelet[2744]: E0114 13:23:12.180500 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:12.182818 kubelet[2744]: E0114 13:23:12.180538 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:12.182499 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613-shm.mount: Deactivated successfully. Jan 14 13:23:12.183092 kubelet[2744]: E0114 13:23:12.180614 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:12.207637 containerd[1829]: time="2025-01-14T13:23:12.207272008Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cq5pv,Uid:4ef18dc5-ee1e-49b7-82af-588d4979448e,Namespace:default,Attempt:0,}" Jan 14 13:23:12.334347 containerd[1829]: time="2025-01-14T13:23:12.334135670Z" level=error msg="Failed to destroy network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:12.335240 containerd[1829]: time="2025-01-14T13:23:12.334900286Z" level=error msg="encountered an error cleaning up failed sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:12.335240 containerd[1829]: time="2025-01-14T13:23:12.334981187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cq5pv,Uid:4ef18dc5-ee1e-49b7-82af-588d4979448e,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:12.335447 kubelet[2744]: E0114 13:23:12.335253 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:12.335447 
kubelet[2744]: E0114 13:23:12.335319 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-cq5pv" Jan 14 13:23:12.335447 kubelet[2744]: E0114 13:23:12.335357 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-cq5pv" Jan 14 13:23:12.335585 kubelet[2744]: E0114 13:23:12.335428 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-cq5pv_default(4ef18dc5-ee1e-49b7-82af-588d4979448e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-cq5pv_default(4ef18dc5-ee1e-49b7-82af-588d4979448e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-cq5pv" podUID="4ef18dc5-ee1e-49b7-82af-588d4979448e" Jan 14 13:23:12.889345 kubelet[2744]: E0114 13:23:12.889303 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:13.050677 
kubelet[2744]: I0114 13:23:13.049959 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613" Jan 14 13:23:13.051568 containerd[1829]: time="2025-01-14T13:23:13.051530921Z" level=info msg="StopPodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\"" Jan 14 13:23:13.052633 kubelet[2744]: I0114 13:23:13.052329 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79" Jan 14 13:23:13.053162 containerd[1829]: time="2025-01-14T13:23:13.052986151Z" level=info msg="Ensure that sandbox b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613 in task-service has been cleanup successfully" Jan 14 13:23:13.054459 containerd[1829]: time="2025-01-14T13:23:13.054132075Z" level=info msg="StopPodSandbox for \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\"" Jan 14 13:23:13.054459 containerd[1829]: time="2025-01-14T13:23:13.054344380Z" level=info msg="Ensure that sandbox 67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79 in task-service has been cleanup successfully" Jan 14 13:23:13.054811 containerd[1829]: time="2025-01-14T13:23:13.054787689Z" level=info msg="TearDown network for sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" successfully" Jan 14 13:23:13.054909 containerd[1829]: time="2025-01-14T13:23:13.054892891Z" level=info msg="StopPodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" returns successfully" Jan 14 13:23:13.055181 containerd[1829]: time="2025-01-14T13:23:13.055072895Z" level=info msg="TearDown network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" successfully" Jan 14 13:23:13.055291 containerd[1829]: time="2025-01-14T13:23:13.055274199Z" level=info msg="StopPodSandbox for 
\"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" returns successfully" Jan 14 13:23:13.057267 systemd[1]: run-netns-cni\x2dfb06d577\x2d6dfc\x2d4db7\x2d6c79\x2dede47ebb40ef.mount: Deactivated successfully. Jan 14 13:23:13.059451 containerd[1829]: time="2025-01-14T13:23:13.058171560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cq5pv,Uid:4ef18dc5-ee1e-49b7-82af-588d4979448e,Namespace:default,Attempt:1,}" Jan 14 13:23:13.060666 containerd[1829]: time="2025-01-14T13:23:13.058238762Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\"" Jan 14 13:23:13.060760 containerd[1829]: time="2025-01-14T13:23:13.060739514Z" level=info msg="TearDown network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" successfully" Jan 14 13:23:13.060855 containerd[1829]: time="2025-01-14T13:23:13.060758114Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" returns successfully" Jan 14 13:23:13.062190 containerd[1829]: time="2025-01-14T13:23:13.062166544Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\"" Jan 14 13:23:13.062550 containerd[1829]: time="2025-01-14T13:23:13.062390249Z" level=info msg="TearDown network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" successfully" Jan 14 13:23:13.062550 containerd[1829]: time="2025-01-14T13:23:13.062413549Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" returns successfully" Jan 14 13:23:13.062678 systemd[1]: run-netns-cni\x2d4c004963\x2d2a9c\x2d1a80\x2d9d8d\x2d0bdd303444c3.mount: Deactivated successfully. 
Jan 14 13:23:13.063322 containerd[1829]: time="2025-01-14T13:23:13.063165565Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" Jan 14 13:23:13.063322 containerd[1829]: time="2025-01-14T13:23:13.063254567Z" level=info msg="TearDown network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" successfully" Jan 14 13:23:13.063322 containerd[1829]: time="2025-01-14T13:23:13.063268267Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" returns successfully" Jan 14 13:23:13.064901 containerd[1829]: time="2025-01-14T13:23:13.064634296Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:13.064976 containerd[1829]: time="2025-01-14T13:23:13.064899101Z" level=info msg="TearDown network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" successfully" Jan 14 13:23:13.064976 containerd[1829]: time="2025-01-14T13:23:13.064914702Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" returns successfully" Jan 14 13:23:13.065656 containerd[1829]: time="2025-01-14T13:23:13.065621616Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:13.065794 containerd[1829]: time="2025-01-14T13:23:13.065756119Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:13.066054 containerd[1829]: time="2025-01-14T13:23:13.065783720Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:13.066715 containerd[1829]: time="2025-01-14T13:23:13.066456634Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:13.066858 
containerd[1829]: time="2025-01-14T13:23:13.066753140Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:13.066858 containerd[1829]: time="2025-01-14T13:23:13.066818442Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:13.067441 containerd[1829]: time="2025-01-14T13:23:13.067415154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:7,}" Jan 14 13:23:13.230002 containerd[1829]: time="2025-01-14T13:23:13.229946564Z" level=error msg="Failed to destroy network for sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.231306 containerd[1829]: time="2025-01-14T13:23:13.231158589Z" level=error msg="encountered an error cleaning up failed sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.231306 containerd[1829]: time="2025-01-14T13:23:13.231239991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cq5pv,Uid:4ef18dc5-ee1e-49b7-82af-588d4979448e,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 14 13:23:13.232126 kubelet[2744]: E0114 13:23:13.231597 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.232126 kubelet[2744]: E0114 13:23:13.231676 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-cq5pv" Jan 14 13:23:13.232126 kubelet[2744]: E0114 13:23:13.231711 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-cq5pv" Jan 14 13:23:13.232358 kubelet[2744]: E0114 13:23:13.231802 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-cq5pv_default(4ef18dc5-ee1e-49b7-82af-588d4979448e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-cq5pv_default(4ef18dc5-ee1e-49b7-82af-588d4979448e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-cq5pv" podUID="4ef18dc5-ee1e-49b7-82af-588d4979448e" Jan 14 13:23:13.247187 containerd[1829]: time="2025-01-14T13:23:13.246931220Z" level=error msg="Failed to destroy network for sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.247290 containerd[1829]: time="2025-01-14T13:23:13.247243027Z" level=error msg="encountered an error cleaning up failed sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.247345 containerd[1829]: time="2025-01-14T13:23:13.247312328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.247600 kubelet[2744]: E0114 13:23:13.247558 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:13.247741 kubelet[2744]: E0114 13:23:13.247615 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:13.247741 kubelet[2744]: E0114 13:23:13.247647 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:13.247741 kubelet[2744]: E0114 13:23:13.247713 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:13.890335 kubelet[2744]: E0114 13:23:13.890276 2744 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:14.041884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8-shm.mount: Deactivated successfully. Jan 14 13:23:14.060809 kubelet[2744]: I0114 13:23:14.060182 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b" Jan 14 13:23:14.062005 containerd[1829]: time="2025-01-14T13:23:14.061966920Z" level=info msg="StopPodSandbox for \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\"" Jan 14 13:23:14.062716 containerd[1829]: time="2025-01-14T13:23:14.062688835Z" level=info msg="Ensure that sandbox d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b in task-service has been cleanup successfully" Jan 14 13:23:14.064966 containerd[1829]: time="2025-01-14T13:23:14.064826279Z" level=info msg="TearDown network for sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\" successfully" Jan 14 13:23:14.064966 containerd[1829]: time="2025-01-14T13:23:14.064857080Z" level=info msg="StopPodSandbox for \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\" returns successfully" Jan 14 13:23:14.066494 systemd[1]: run-netns-cni\x2d8f6c9996\x2de5c3\x2d4fdf\x2da0cb\x2d9b8aa9ee5255.mount: Deactivated successfully. 
Jan 14 13:23:14.069263 kubelet[2744]: I0114 13:23:14.068463 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8" Jan 14 13:23:14.069371 containerd[1829]: time="2025-01-14T13:23:14.069102764Z" level=info msg="StopPodSandbox for \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\"" Jan 14 13:23:14.069371 containerd[1829]: time="2025-01-14T13:23:14.069359770Z" level=info msg="Ensure that sandbox 929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8 in task-service has been cleanup successfully" Jan 14 13:23:14.069906 containerd[1829]: time="2025-01-14T13:23:14.069541173Z" level=info msg="TearDown network for sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\" successfully" Jan 14 13:23:14.069906 containerd[1829]: time="2025-01-14T13:23:14.069562774Z" level=info msg="StopPodSandbox for \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\" returns successfully" Jan 14 13:23:14.069906 containerd[1829]: time="2025-01-14T13:23:14.069632475Z" level=info msg="StopPodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\"" Jan 14 13:23:14.069906 containerd[1829]: time="2025-01-14T13:23:14.069707776Z" level=info msg="TearDown network for sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" successfully" Jan 14 13:23:14.069906 containerd[1829]: time="2025-01-14T13:23:14.069722377Z" level=info msg="StopPodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" returns successfully" Jan 14 13:23:14.073684 systemd[1]: run-netns-cni\x2de32280fb\x2dc29d\x2d2144\x2dd31c\x2d7a1cd56a6f2e.mount: Deactivated successfully. 
Jan 14 13:23:14.074364 containerd[1829]: time="2025-01-14T13:23:14.074066363Z" level=info msg="StopPodSandbox for \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\"" Jan 14 13:23:14.074364 containerd[1829]: time="2025-01-14T13:23:14.074157665Z" level=info msg="TearDown network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" successfully" Jan 14 13:23:14.074364 containerd[1829]: time="2025-01-14T13:23:14.074171665Z" level=info msg="StopPodSandbox for \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" returns successfully" Jan 14 13:23:14.075359 containerd[1829]: time="2025-01-14T13:23:14.075164485Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\"" Jan 14 13:23:14.075604 containerd[1829]: time="2025-01-14T13:23:14.075530293Z" level=info msg="TearDown network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" successfully" Jan 14 13:23:14.075604 containerd[1829]: time="2025-01-14T13:23:14.075551893Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" returns successfully" Jan 14 13:23:14.075877 containerd[1829]: time="2025-01-14T13:23:14.075743397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cq5pv,Uid:4ef18dc5-ee1e-49b7-82af-588d4979448e,Namespace:default,Attempt:2,}" Jan 14 13:23:14.076582 containerd[1829]: time="2025-01-14T13:23:14.076548813Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\"" Jan 14 13:23:14.076656 containerd[1829]: time="2025-01-14T13:23:14.076637215Z" level=info msg="TearDown network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" successfully" Jan 14 13:23:14.076656 containerd[1829]: time="2025-01-14T13:23:14.076652015Z" level=info msg="StopPodSandbox for 
\"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" returns successfully" Jan 14 13:23:14.077412 containerd[1829]: time="2025-01-14T13:23:14.077252827Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" Jan 14 13:23:14.077412 containerd[1829]: time="2025-01-14T13:23:14.077343729Z" level=info msg="TearDown network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" successfully" Jan 14 13:23:14.077412 containerd[1829]: time="2025-01-14T13:23:14.077357129Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" returns successfully" Jan 14 13:23:14.078177 containerd[1829]: time="2025-01-14T13:23:14.078065843Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:14.078177 containerd[1829]: time="2025-01-14T13:23:14.078152945Z" level=info msg="TearDown network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" successfully" Jan 14 13:23:14.078177 containerd[1829]: time="2025-01-14T13:23:14.078167345Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" returns successfully" Jan 14 13:23:14.078555 containerd[1829]: time="2025-01-14T13:23:14.078528152Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:14.078644 containerd[1829]: time="2025-01-14T13:23:14.078615154Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:14.078644 containerd[1829]: time="2025-01-14T13:23:14.078629754Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:14.080059 containerd[1829]: time="2025-01-14T13:23:14.080021082Z" level=info msg="StopPodSandbox for 
\"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:14.080160 containerd[1829]: time="2025-01-14T13:23:14.080110384Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:14.080160 containerd[1829]: time="2025-01-14T13:23:14.080126984Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:14.081077 containerd[1829]: time="2025-01-14T13:23:14.081047402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:8,}" Jan 14 13:23:14.237908 containerd[1829]: time="2025-01-14T13:23:14.236700205Z" level=error msg="Failed to destroy network for sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.237908 containerd[1829]: time="2025-01-14T13:23:14.237072813Z" level=error msg="encountered an error cleaning up failed sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.237908 containerd[1829]: time="2025-01-14T13:23:14.237148314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cq5pv,Uid:4ef18dc5-ee1e-49b7-82af-588d4979448e,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.238297 kubelet[2744]: E0114 13:23:14.237419 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.238297 kubelet[2744]: E0114 13:23:14.237490 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-cq5pv" Jan 14 13:23:14.238297 kubelet[2744]: E0114 13:23:14.237518 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-cq5pv" Jan 14 13:23:14.238531 kubelet[2744]: E0114 13:23:14.237586 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-cq5pv_default(4ef18dc5-ee1e-49b7-82af-588d4979448e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-cq5pv_default(4ef18dc5-ee1e-49b7-82af-588d4979448e)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-cq5pv" podUID="4ef18dc5-ee1e-49b7-82af-588d4979448e" Jan 14 13:23:14.253613 containerd[1829]: time="2025-01-14T13:23:14.253557541Z" level=error msg="Failed to destroy network for sandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.254019 containerd[1829]: time="2025-01-14T13:23:14.253919148Z" level=error msg="encountered an error cleaning up failed sandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.254019 containerd[1829]: time="2025-01-14T13:23:14.253999950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.254323 kubelet[2744]: E0114 13:23:14.254275 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:14.254394 kubelet[2744]: E0114 13:23:14.254343 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:14.254394 kubelet[2744]: E0114 13:23:14.254373 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:14.254482 kubelet[2744]: E0114 13:23:14.254441 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hq5z2" 
podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:14.892798 kubelet[2744]: E0114 13:23:14.891425 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:15.041808 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6-shm.mount: Deactivated successfully. Jan 14 13:23:15.042025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8-shm.mount: Deactivated successfully. Jan 14 13:23:15.042173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495199159.mount: Deactivated successfully. Jan 14 13:23:15.067722 containerd[1829]: time="2025-01-14T13:23:15.067635568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:15.071491 containerd[1829]: time="2025-01-14T13:23:15.071255340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 14 13:23:15.076930 containerd[1829]: time="2025-01-14T13:23:15.075287721Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:15.079917 kubelet[2744]: I0114 13:23:15.079895 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6" Jan 14 13:23:15.080939 containerd[1829]: time="2025-01-14T13:23:15.080867232Z" level=info msg="StopPodSandbox for \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\"" Jan 14 13:23:15.081138 containerd[1829]: time="2025-01-14T13:23:15.081112937Z" level=info msg="Ensure that sandbox 0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6 in task-service has been 
cleanup successfully" Jan 14 13:23:15.082819 containerd[1829]: time="2025-01-14T13:23:15.081307241Z" level=info msg="TearDown network for sandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\" successfully" Jan 14 13:23:15.082819 containerd[1829]: time="2025-01-14T13:23:15.081328441Z" level=info msg="StopPodSandbox for \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\" returns successfully" Jan 14 13:23:15.083700 containerd[1829]: time="2025-01-14T13:23:15.083357182Z" level=info msg="StopPodSandbox for \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\"" Jan 14 13:23:15.083700 containerd[1829]: time="2025-01-14T13:23:15.083446383Z" level=info msg="TearDown network for sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\" successfully" Jan 14 13:23:15.083700 containerd[1829]: time="2025-01-14T13:23:15.083459384Z" level=info msg="StopPodSandbox for \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\" returns successfully" Jan 14 13:23:15.085647 containerd[1829]: time="2025-01-14T13:23:15.083859692Z" level=info msg="StopPodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\"" Jan 14 13:23:15.085647 containerd[1829]: time="2025-01-14T13:23:15.083942193Z" level=info msg="TearDown network for sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" successfully" Jan 14 13:23:15.085647 containerd[1829]: time="2025-01-14T13:23:15.083955594Z" level=info msg="StopPodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" returns successfully" Jan 14 13:23:15.085647 containerd[1829]: time="2025-01-14T13:23:15.085333921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:15.086706 kubelet[2744]: I0114 13:23:15.085054 2744 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8" Jan 14 13:23:15.083882 systemd[1]: run-netns-cni\x2d9c8bd389\x2d306a\x2df7bd\x2d17cc\x2d20c19269c18f.mount: Deactivated successfully. Jan 14 13:23:15.087094 containerd[1829]: time="2025-01-14T13:23:15.086291040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.07575746s" Jan 14 13:23:15.087094 containerd[1829]: time="2025-01-14T13:23:15.086320741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 14 13:23:15.090621 containerd[1829]: time="2025-01-14T13:23:15.087286860Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\"" Jan 14 13:23:15.090621 containerd[1829]: time="2025-01-14T13:23:15.087382262Z" level=info msg="TearDown network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" successfully" Jan 14 13:23:15.090621 containerd[1829]: time="2025-01-14T13:23:15.087399062Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" returns successfully" Jan 14 13:23:15.090621 containerd[1829]: time="2025-01-14T13:23:15.087403162Z" level=info msg="StopPodSandbox for \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\"" Jan 14 13:23:15.090621 containerd[1829]: time="2025-01-14T13:23:15.087639367Z" level=info msg="Ensure that sandbox 8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8 in task-service has been cleanup successfully" Jan 14 
13:23:15.091028 containerd[1829]: time="2025-01-14T13:23:15.091005834Z" level=info msg="TearDown network for sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\" successfully" Jan 14 13:23:15.091108 containerd[1829]: time="2025-01-14T13:23:15.091092836Z" level=info msg="StopPodSandbox for \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\" returns successfully" Jan 14 13:23:15.091875 systemd[1]: run-netns-cni\x2dfa061442\x2d20f5\x2d460f\x2d2208\x2db89eddae821f.mount: Deactivated successfully. Jan 14 13:23:15.092275 containerd[1829]: time="2025-01-14T13:23:15.092247459Z" level=info msg="StopPodSandbox for \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\"" Jan 14 13:23:15.092354 containerd[1829]: time="2025-01-14T13:23:15.092337561Z" level=info msg="TearDown network for sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\" successfully" Jan 14 13:23:15.092400 containerd[1829]: time="2025-01-14T13:23:15.092352561Z" level=info msg="StopPodSandbox for \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\" returns successfully" Jan 14 13:23:15.092472 containerd[1829]: time="2025-01-14T13:23:15.092449863Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\"" Jan 14 13:23:15.092552 containerd[1829]: time="2025-01-14T13:23:15.092535165Z" level=info msg="TearDown network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" successfully" Jan 14 13:23:15.092592 containerd[1829]: time="2025-01-14T13:23:15.092554065Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" returns successfully" Jan 14 13:23:15.099794 containerd[1829]: time="2025-01-14T13:23:15.096201538Z" level=info msg="StopPodSandbox for \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\"" Jan 14 13:23:15.099794 containerd[1829]: time="2025-01-14T13:23:15.096348841Z" 
level=info msg="TearDown network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" successfully" Jan 14 13:23:15.099794 containerd[1829]: time="2025-01-14T13:23:15.096365141Z" level=info msg="StopPodSandbox for \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" returns successfully" Jan 14 13:23:15.099794 containerd[1829]: time="2025-01-14T13:23:15.096238438Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" Jan 14 13:23:15.099794 containerd[1829]: time="2025-01-14T13:23:15.096478643Z" level=info msg="TearDown network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" successfully" Jan 14 13:23:15.099794 containerd[1829]: time="2025-01-14T13:23:15.096491943Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" returns successfully" Jan 14 13:23:15.104114 containerd[1829]: time="2025-01-14T13:23:15.103936492Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:15.104114 containerd[1829]: time="2025-01-14T13:23:15.103946192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cq5pv,Uid:4ef18dc5-ee1e-49b7-82af-588d4979448e,Namespace:default,Attempt:3,}" Jan 14 13:23:15.104114 containerd[1829]: time="2025-01-14T13:23:15.104050794Z" level=info msg="TearDown network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" successfully" Jan 14 13:23:15.104114 containerd[1829]: time="2025-01-14T13:23:15.104064694Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" returns successfully" Jan 14 13:23:15.104595 containerd[1829]: time="2025-01-14T13:23:15.104569304Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:15.104676 containerd[1829]: 
time="2025-01-14T13:23:15.104658106Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:15.104676 containerd[1829]: time="2025-01-14T13:23:15.104672407Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:15.105793 containerd[1829]: time="2025-01-14T13:23:15.105754028Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:15.106256 containerd[1829]: time="2025-01-14T13:23:15.106225138Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:15.106323 containerd[1829]: time="2025-01-14T13:23:15.106255038Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:15.107325 containerd[1829]: time="2025-01-14T13:23:15.107034854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:9,}" Jan 14 13:23:15.107579 containerd[1829]: time="2025-01-14T13:23:15.107549864Z" level=info msg="CreateContainer within sandbox \"f886007c916c9f4b85b15f2c0cbd13d292faa29292c9068f3cc1ebe444863d74\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 13:23:15.193928 containerd[1829]: time="2025-01-14T13:23:15.193763482Z" level=info msg="CreateContainer within sandbox \"f886007c916c9f4b85b15f2c0cbd13d292faa29292c9068f3cc1ebe444863d74\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0f72256ae3defd6c5697835b8e53f5d4cee4c0cb2697b73312ccad6aadebf352\"" Jan 14 13:23:15.196105 containerd[1829]: time="2025-01-14T13:23:15.196072428Z" level=info msg="StartContainer for \"0f72256ae3defd6c5697835b8e53f5d4cee4c0cb2697b73312ccad6aadebf352\"" Jan 14 
13:23:15.286866 containerd[1829]: time="2025-01-14T13:23:15.286707835Z" level=error msg="Failed to destroy network for sandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.288568 containerd[1829]: time="2025-01-14T13:23:15.288299767Z" level=error msg="encountered an error cleaning up failed sandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.289081 containerd[1829]: time="2025-01-14T13:23:15.289021881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.290017 kubelet[2744]: E0114 13:23:15.289764 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.290017 kubelet[2744]: E0114 13:23:15.289953 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:15.290017 kubelet[2744]: E0114 13:23:15.289987 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hq5z2" Jan 14 13:23:15.290785 kubelet[2744]: E0114 13:23:15.290612 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hq5z2_calico-system(a70e7d33-f96f-4604-b940-93eea95840a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hq5z2" podUID="a70e7d33-f96f-4604-b940-93eea95840a3" Jan 14 13:23:15.301038 containerd[1829]: time="2025-01-14T13:23:15.300993220Z" level=info msg="StartContainer for \"0f72256ae3defd6c5697835b8e53f5d4cee4c0cb2697b73312ccad6aadebf352\" returns successfully" Jan 14 13:23:15.304340 containerd[1829]: time="2025-01-14T13:23:15.303821876Z" level=error msg="Failed to destroy network for sandbox 
\"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.304340 containerd[1829]: time="2025-01-14T13:23:15.304141583Z" level=error msg="encountered an error cleaning up failed sandbox \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.304340 containerd[1829]: time="2025-01-14T13:23:15.304221584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cq5pv,Uid:4ef18dc5-ee1e-49b7-82af-588d4979448e,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.304733 kubelet[2744]: E0114 13:23:15.304691 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:23:15.304830 kubelet[2744]: E0114 13:23:15.304752 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-cq5pv" Jan 14 13:23:15.304830 kubelet[2744]: E0114 13:23:15.304816 2744 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-cq5pv" Jan 14 13:23:15.304927 kubelet[2744]: E0114 13:23:15.304902 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-cq5pv_default(4ef18dc5-ee1e-49b7-82af-588d4979448e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-cq5pv_default(4ef18dc5-ee1e-49b7-82af-588d4979448e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-cq5pv" podUID="4ef18dc5-ee1e-49b7-82af-588d4979448e" Jan 14 13:23:15.554163 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 13:23:15.554336 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 14 13:23:15.892297 kubelet[2744]: E0114 13:23:15.892242 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:16.102676 kubelet[2744]: I0114 13:23:16.102057 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7" Jan 14 13:23:16.103852 containerd[1829]: time="2025-01-14T13:23:16.103396014Z" level=info msg="StopPodSandbox for \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\"" Jan 14 13:23:16.103852 containerd[1829]: time="2025-01-14T13:23:16.103660220Z" level=info msg="Ensure that sandbox 567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7 in task-service has been cleanup successfully" Jan 14 13:23:16.108660 systemd[1]: run-netns-cni\x2df4ba65b1\x2d0d2e\x2d1bec\x2dcf2a\x2d42df195ed23a.mount: Deactivated successfully. Jan 14 13:23:16.112414 kubelet[2744]: I0114 13:23:16.112034 2744 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9cv7l" podStartSLOduration=5.303032268 podStartE2EDuration="26.111981585s" podCreationTimestamp="2025-01-14 13:22:50 +0000 UTC" firstStartedPulling="2025-01-14 13:22:54.278973656 +0000 UTC m=+5.099164918" lastFinishedPulling="2025-01-14 13:23:15.087923073 +0000 UTC m=+25.908114235" observedRunningTime="2025-01-14 13:23:16.111641079 +0000 UTC m=+26.931832241" watchObservedRunningTime="2025-01-14 13:23:16.111981585 +0000 UTC m=+26.932172847" Jan 14 13:23:16.112599 containerd[1829]: time="2025-01-14T13:23:16.110542057Z" level=info msg="TearDown network for sandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\" successfully" Jan 14 13:23:16.113026 containerd[1829]: time="2025-01-14T13:23:16.112395894Z" level=info msg="StopPodSandbox for \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\" returns successfully" Jan 14 13:23:16.113900 
containerd[1829]: time="2025-01-14T13:23:16.113633518Z" level=info msg="StopPodSandbox for \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\"" Jan 14 13:23:16.113900 containerd[1829]: time="2025-01-14T13:23:16.113739220Z" level=info msg="TearDown network for sandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\" successfully" Jan 14 13:23:16.113900 containerd[1829]: time="2025-01-14T13:23:16.113755321Z" level=info msg="StopPodSandbox for \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\" returns successfully" Jan 14 13:23:16.114538 kubelet[2744]: I0114 13:23:16.114518 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a" Jan 14 13:23:16.116210 containerd[1829]: time="2025-01-14T13:23:16.115913264Z" level=info msg="StopPodSandbox for \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\"" Jan 14 13:23:16.116210 containerd[1829]: time="2025-01-14T13:23:16.116002666Z" level=info msg="TearDown network for sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\" successfully" Jan 14 13:23:16.116210 containerd[1829]: time="2025-01-14T13:23:16.116016766Z" level=info msg="StopPodSandbox for \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\" returns successfully" Jan 14 13:23:16.118837 containerd[1829]: time="2025-01-14T13:23:16.118805421Z" level=info msg="StopPodSandbox for \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\"" Jan 14 13:23:16.119243 containerd[1829]: time="2025-01-14T13:23:16.119219330Z" level=info msg="StopPodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\"" Jan 14 13:23:16.119663 containerd[1829]: time="2025-01-14T13:23:16.119643638Z" level=info msg="TearDown network for sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" successfully" Jan 14 13:23:16.120132 containerd[1829]: 
time="2025-01-14T13:23:16.120111247Z" level=info msg="StopPodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" returns successfully" Jan 14 13:23:16.120848 containerd[1829]: time="2025-01-14T13:23:16.120709059Z" level=info msg="Ensure that sandbox cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a in task-service has been cleanup successfully" Jan 14 13:23:16.123794 containerd[1829]: time="2025-01-14T13:23:16.121612477Z" level=info msg="TearDown network for sandbox \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\" successfully" Jan 14 13:23:16.123794 containerd[1829]: time="2025-01-14T13:23:16.121634478Z" level=info msg="StopPodSandbox for \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\" returns successfully" Jan 14 13:23:16.125343 containerd[1829]: time="2025-01-14T13:23:16.125189649Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\"" Jan 14 13:23:16.125343 containerd[1829]: time="2025-01-14T13:23:16.125274450Z" level=info msg="TearDown network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" successfully" Jan 14 13:23:16.125343 containerd[1829]: time="2025-01-14T13:23:16.125289051Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" returns successfully" Jan 14 13:23:16.126167 systemd[1]: run-netns-cni\x2dce0edfc1\x2d2ef1\x2dae2f\x2d227f\x2d8024b3d84674.mount: Deactivated successfully. 
Jan 14 13:23:16.126945 containerd[1829]: time="2025-01-14T13:23:16.125089347Z" level=info msg="StopPodSandbox for \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\"" Jan 14 13:23:16.126945 containerd[1829]: time="2025-01-14T13:23:16.126544876Z" level=info msg="TearDown network for sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\" successfully" Jan 14 13:23:16.126945 containerd[1829]: time="2025-01-14T13:23:16.126558176Z" level=info msg="StopPodSandbox for \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\" returns successfully" Jan 14 13:23:16.128504 containerd[1829]: time="2025-01-14T13:23:16.128482214Z" level=info msg="StopPodSandbox for \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\"" Jan 14 13:23:16.128705 containerd[1829]: time="2025-01-14T13:23:16.128685518Z" level=info msg="TearDown network for sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\" successfully" Jan 14 13:23:16.128832 containerd[1829]: time="2025-01-14T13:23:16.128755520Z" level=info msg="StopPodSandbox for \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\" returns successfully" Jan 14 13:23:16.129213 containerd[1829]: time="2025-01-14T13:23:16.128491714Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\"" Jan 14 13:23:16.129213 containerd[1829]: time="2025-01-14T13:23:16.129007625Z" level=info msg="TearDown network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" successfully" Jan 14 13:23:16.129213 containerd[1829]: time="2025-01-14T13:23:16.129021225Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" returns successfully" Jan 14 13:23:16.129856 containerd[1829]: time="2025-01-14T13:23:16.129747140Z" level=info msg="StopPodSandbox for \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\"" Jan 14 13:23:16.130025 
containerd[1829]: time="2025-01-14T13:23:16.129887542Z" level=info msg="TearDown network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" successfully" Jan 14 13:23:16.130025 containerd[1829]: time="2025-01-14T13:23:16.129946143Z" level=info msg="StopPodSandbox for \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" returns successfully" Jan 14 13:23:16.130025 containerd[1829]: time="2025-01-14T13:23:16.130003845Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" Jan 14 13:23:16.130145 containerd[1829]: time="2025-01-14T13:23:16.130075046Z" level=info msg="TearDown network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" successfully" Jan 14 13:23:16.130145 containerd[1829]: time="2025-01-14T13:23:16.130087346Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" returns successfully" Jan 14 13:23:16.131375 containerd[1829]: time="2025-01-14T13:23:16.131353272Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:16.131556 containerd[1829]: time="2025-01-14T13:23:16.131539375Z" level=info msg="TearDown network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" successfully" Jan 14 13:23:16.131633 containerd[1829]: time="2025-01-14T13:23:16.131618977Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" returns successfully" Jan 14 13:23:16.131933 containerd[1829]: time="2025-01-14T13:23:16.131909683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cq5pv,Uid:4ef18dc5-ee1e-49b7-82af-588d4979448e,Namespace:default,Attempt:4,}" Jan 14 13:23:16.132629 containerd[1829]: time="2025-01-14T13:23:16.132232389Z" level=info msg="StopPodSandbox for 
\"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:16.132629 containerd[1829]: time="2025-01-14T13:23:16.132462694Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:16.132629 containerd[1829]: time="2025-01-14T13:23:16.132474894Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:16.133665 containerd[1829]: time="2025-01-14T13:23:16.133512615Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:16.133665 containerd[1829]: time="2025-01-14T13:23:16.133602816Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:16.133665 containerd[1829]: time="2025-01-14T13:23:16.133614617Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:16.135487 containerd[1829]: time="2025-01-14T13:23:16.135443853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:10,}" Jan 14 13:23:16.317441 systemd-networkd[1371]: cali363343a5f96: Link UP Jan 14 13:23:16.317657 systemd-networkd[1371]: cali363343a5f96: Gained carrier Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.215 [INFO][3751] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.229 [INFO][3751] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.33-k8s-csi--node--driver--hq5z2-eth0 csi-node-driver- calico-system a70e7d33-f96f-4604-b940-93eea95840a3 1201 0 2025-01-14 13:22:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.4.33 csi-node-driver-hq5z2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali363343a5f96 [] []}} ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Namespace="calico-system" Pod="csi-node-driver-hq5z2" WorkloadEndpoint="10.200.4.33-k8s-csi--node--driver--hq5z2-" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.230 [INFO][3751] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Namespace="calico-system" Pod="csi-node-driver-hq5z2" WorkloadEndpoint="10.200.4.33-k8s-csi--node--driver--hq5z2-eth0" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.267 [INFO][3773] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" HandleID="k8s-pod-network.f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Workload="10.200.4.33-k8s-csi--node--driver--hq5z2-eth0" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.278 [INFO][3773] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" HandleID="k8s-pod-network.f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Workload="10.200.4.33-k8s-csi--node--driver--hq5z2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318d80), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.4.33", "pod":"csi-node-driver-hq5z2", "timestamp":"2025-01-14 13:23:16.267312982 +0000 UTC"}, Hostname:"10.200.4.33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.278 [INFO][3773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.278 [INFO][3773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.278 [INFO][3773] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.33' Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.280 [INFO][3773] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" host="10.200.4.33" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.284 [INFO][3773] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.33" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.288 [INFO][3773] ipam/ipam.go 489: Trying affinity for 192.168.102.192/26 host="10.200.4.33" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.289 [INFO][3773] ipam/ipam.go 155: Attempting to load block cidr=192.168.102.192/26 host="10.200.4.33" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.292 [INFO][3773] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="10.200.4.33" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.292 [INFO][3773] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" host="10.200.4.33" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.293 [INFO][3773] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.297 
[INFO][3773] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" host="10.200.4.33" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.306 [INFO][3773] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.102.193/26] block=192.168.102.192/26 handle="k8s-pod-network.f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" host="10.200.4.33" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.306 [INFO][3773] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.102.193/26] handle="k8s-pod-network.f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" host="10.200.4.33" Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.306 [INFO][3773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:23:16.334514 containerd[1829]: 2025-01-14 13:23:16.306 [INFO][3773] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.193/26] IPv6=[] ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" HandleID="k8s-pod-network.f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Workload="10.200.4.33-k8s-csi--node--driver--hq5z2-eth0" Jan 14 13:23:16.335484 containerd[1829]: 2025-01-14 13:23:16.309 [INFO][3751] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Namespace="calico-system" Pod="csi-node-driver-hq5z2" WorkloadEndpoint="10.200.4.33-k8s-csi--node--driver--hq5z2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.33-k8s-csi--node--driver--hq5z2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a70e7d33-f96f-4604-b940-93eea95840a3", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 22, 50, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.33", ContainerID:"", Pod:"csi-node-driver-hq5z2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali363343a5f96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:16.335484 containerd[1829]: 2025-01-14 13:23:16.309 [INFO][3751] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.102.193/32] ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Namespace="calico-system" Pod="csi-node-driver-hq5z2" WorkloadEndpoint="10.200.4.33-k8s-csi--node--driver--hq5z2-eth0" Jan 14 13:23:16.335484 containerd[1829]: 2025-01-14 13:23:16.309 [INFO][3751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali363343a5f96 ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Namespace="calico-system" Pod="csi-node-driver-hq5z2" WorkloadEndpoint="10.200.4.33-k8s-csi--node--driver--hq5z2-eth0" Jan 14 13:23:16.335484 containerd[1829]: 2025-01-14 13:23:16.317 [INFO][3751] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Namespace="calico-system" 
Pod="csi-node-driver-hq5z2" WorkloadEndpoint="10.200.4.33-k8s-csi--node--driver--hq5z2-eth0" Jan 14 13:23:16.335484 containerd[1829]: 2025-01-14 13:23:16.318 [INFO][3751] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Namespace="calico-system" Pod="csi-node-driver-hq5z2" WorkloadEndpoint="10.200.4.33-k8s-csi--node--driver--hq5z2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.33-k8s-csi--node--driver--hq5z2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a70e7d33-f96f-4604-b940-93eea95840a3", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 22, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.33", ContainerID:"f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd", Pod:"csi-node-driver-hq5z2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali363343a5f96", MAC:"26:47:46:3e:d9:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 
13:23:16.335484 containerd[1829]: 2025-01-14 13:23:16.331 [INFO][3751] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd" Namespace="calico-system" Pod="csi-node-driver-hq5z2" WorkloadEndpoint="10.200.4.33-k8s-csi--node--driver--hq5z2-eth0" Jan 14 13:23:16.351832 systemd-networkd[1371]: cali342fd34ae1e: Link UP Jan 14 13:23:16.353567 systemd-networkd[1371]: cali342fd34ae1e: Gained carrier Jan 14 13:23:16.366730 containerd[1829]: time="2025-01-14T13:23:16.364853326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:23:16.366730 containerd[1829]: time="2025-01-14T13:23:16.364923927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:23:16.366730 containerd[1829]: time="2025-01-14T13:23:16.364944328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:16.366730 containerd[1829]: time="2025-01-14T13:23:16.365061130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.215 [INFO][3756] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.230 [INFO][3756] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0 nginx-deployment-6d5f899847- default 4ef18dc5-ee1e-49b7-82af-588d4979448e 1295 0 2025-01-14 13:23:11 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.33 nginx-deployment-6d5f899847-cq5pv eth0 default [] [] [kns.default ksa.default.default] cali342fd34ae1e [] []}} ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Namespace="default" Pod="nginx-deployment-6d5f899847-cq5pv" WorkloadEndpoint="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.230 [INFO][3756] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Namespace="default" Pod="nginx-deployment-6d5f899847-cq5pv" WorkloadEndpoint="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.267 [INFO][3772] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" HandleID="k8s-pod-network.416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Workload="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.278 [INFO][3772] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" HandleID="k8s-pod-network.416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Workload="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002907f0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.33", "pod":"nginx-deployment-6d5f899847-cq5pv", "timestamp":"2025-01-14 13:23:16.267312782 +0000 UTC"}, Hostname:"10.200.4.33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.278 [INFO][3772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.306 [INFO][3772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.306 [INFO][3772] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.33' Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.308 [INFO][3772] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" host="10.200.4.33" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.312 [INFO][3772] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.33" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.319 [INFO][3772] ipam/ipam.go 489: Trying affinity for 192.168.102.192/26 host="10.200.4.33" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.321 [INFO][3772] ipam/ipam.go 155: Attempting to load block cidr=192.168.102.192/26 host="10.200.4.33" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.323 [INFO][3772] ipam/ipam.go 232: Affinity is confirmed and block has been 
loaded cidr=192.168.102.192/26 host="10.200.4.33" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.323 [INFO][3772] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" host="10.200.4.33" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.330 [INFO][3772] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1 Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.336 [INFO][3772] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" host="10.200.4.33" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.345 [INFO][3772] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.102.194/26] block=192.168.102.192/26 handle="k8s-pod-network.416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" host="10.200.4.33" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.345 [INFO][3772] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.102.194/26] handle="k8s-pod-network.416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" host="10.200.4.33" Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.345 [INFO][3772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 14 13:23:16.366730 containerd[1829]: 2025-01-14 13:23:16.345 [INFO][3772] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.194/26] IPv6=[] ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" HandleID="k8s-pod-network.416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Workload="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0" Jan 14 13:23:16.367896 containerd[1829]: 2025-01-14 13:23:16.349 [INFO][3756] cni-plugin/k8s.go 386: Populated endpoint ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Namespace="default" Pod="nginx-deployment-6d5f899847-cq5pv" WorkloadEndpoint="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"4ef18dc5-ee1e-49b7-82af-588d4979448e", ResourceVersion:"1295", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.33", ContainerID:"", Pod:"nginx-deployment-6d5f899847-cq5pv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.102.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali342fd34ae1e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:16.367896 containerd[1829]: 2025-01-14 13:23:16.349 [INFO][3756] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.102.194/32] ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Namespace="default" Pod="nginx-deployment-6d5f899847-cq5pv" WorkloadEndpoint="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0" Jan 14 13:23:16.367896 containerd[1829]: 2025-01-14 13:23:16.349 [INFO][3756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali342fd34ae1e ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Namespace="default" Pod="nginx-deployment-6d5f899847-cq5pv" WorkloadEndpoint="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0" Jan 14 13:23:16.367896 containerd[1829]: 2025-01-14 13:23:16.351 [INFO][3756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Namespace="default" Pod="nginx-deployment-6d5f899847-cq5pv" WorkloadEndpoint="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0" Jan 14 13:23:16.367896 containerd[1829]: 2025-01-14 13:23:16.355 [INFO][3756] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Namespace="default" Pod="nginx-deployment-6d5f899847-cq5pv" WorkloadEndpoint="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"4ef18dc5-ee1e-49b7-82af-588d4979448e", ResourceVersion:"1295", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 11, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.33", ContainerID:"416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1", Pod:"nginx-deployment-6d5f899847-cq5pv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.102.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali342fd34ae1e", MAC:"12:62:24:17:f5:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:16.367896 containerd[1829]: 2025-01-14 13:23:16.363 [INFO][3756] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1" Namespace="default" Pod="nginx-deployment-6d5f899847-cq5pv" WorkloadEndpoint="10.200.4.33-k8s-nginx--deployment--6d5f899847--cq5pv-eth0" Jan 14 13:23:16.401633 containerd[1829]: time="2025-01-14T13:23:16.399989126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:23:16.401633 containerd[1829]: time="2025-01-14T13:23:16.400082328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:23:16.401633 containerd[1829]: time="2025-01-14T13:23:16.400146229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:16.401633 containerd[1829]: time="2025-01-14T13:23:16.400252032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:16.422755 containerd[1829]: time="2025-01-14T13:23:16.422709279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hq5z2,Uid:a70e7d33-f96f-4604-b940-93eea95840a3,Namespace:calico-system,Attempt:10,} returns sandbox id \"f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd\"" Jan 14 13:23:16.425171 containerd[1829]: time="2025-01-14T13:23:16.424923123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 14 13:23:16.466647 containerd[1829]: time="2025-01-14T13:23:16.466607954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cq5pv,Uid:4ef18dc5-ee1e-49b7-82af-588d4979448e,Namespace:default,Attempt:4,} returns sandbox id \"416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1\"" Jan 14 13:23:16.892809 kubelet[2744]: E0114 13:23:16.892744 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:17.080887 kernel: bpftool[4004]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 14 13:23:17.378541 systemd-networkd[1371]: vxlan.calico: Link UP Jan 14 13:23:17.379988 systemd-networkd[1371]: vxlan.calico: Gained carrier Jan 14 13:23:17.647051 systemd-networkd[1371]: cali363343a5f96: Gained IPv6LL Jan 14 13:23:17.807094 containerd[1829]: time="2025-01-14T13:23:17.804397421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:17.813175 containerd[1829]: time="2025-01-14T13:23:17.813114394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 14 
13:23:17.815015 containerd[1829]: time="2025-01-14T13:23:17.814978631Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:17.820119 containerd[1829]: time="2025-01-14T13:23:17.820085433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:17.820910 containerd[1829]: time="2025-01-14T13:23:17.820880249Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.395919025s" Jan 14 13:23:17.821039 containerd[1829]: time="2025-01-14T13:23:17.821018752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 14 13:23:17.821693 containerd[1829]: time="2025-01-14T13:23:17.821668065Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 14 13:23:17.822905 containerd[1829]: time="2025-01-14T13:23:17.822879089Z" level=info msg="CreateContainer within sandbox \"f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 14 13:23:17.866256 containerd[1829]: time="2025-01-14T13:23:17.866207253Z" level=info msg="CreateContainer within sandbox \"f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fa2094d48a1feeb42bb093ab11999b8bc30885715689ac99888079158f172551\"" Jan 14 13:23:17.866755 containerd[1829]: 
time="2025-01-14T13:23:17.866726063Z" level=info msg="StartContainer for \"fa2094d48a1feeb42bb093ab11999b8bc30885715689ac99888079158f172551\"" Jan 14 13:23:17.894010 kubelet[2744]: E0114 13:23:17.893970 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:17.902279 systemd-networkd[1371]: cali342fd34ae1e: Gained IPv6LL Jan 14 13:23:17.937165 containerd[1829]: time="2025-01-14T13:23:17.937122466Z" level=info msg="StartContainer for \"fa2094d48a1feeb42bb093ab11999b8bc30885715689ac99888079158f172551\" returns successfully" Jan 14 13:23:18.043024 systemd[1]: run-containerd-runc-k8s.io-fa2094d48a1feeb42bb093ab11999b8bc30885715689ac99888079158f172551-runc.49wkbJ.mount: Deactivated successfully. Jan 14 13:23:18.894732 kubelet[2744]: E0114 13:23:18.894668 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:19.182067 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Jan 14 13:23:19.895048 kubelet[2744]: E0114 13:23:19.895013 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:20.693870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654031656.mount: Deactivated successfully. 
Jan 14 13:23:20.895633 kubelet[2744]: E0114 13:23:20.895587 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:21.898791 kubelet[2744]: E0114 13:23:21.898676 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:21.957071 containerd[1829]: time="2025-01-14T13:23:21.957020387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:21.959484 containerd[1829]: time="2025-01-14T13:23:21.959426337Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 14 13:23:21.961662 containerd[1829]: time="2025-01-14T13:23:21.961589083Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:21.967326 containerd[1829]: time="2025-01-14T13:23:21.967262502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:21.968322 containerd[1829]: time="2025-01-14T13:23:21.968197022Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.146286052s" Jan 14 13:23:21.968322 containerd[1829]: time="2025-01-14T13:23:21.968230822Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 14 13:23:21.970072 containerd[1829]: 
time="2025-01-14T13:23:21.969765154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 14 13:23:21.970590 containerd[1829]: time="2025-01-14T13:23:21.970564971Z" level=info msg="CreateContainer within sandbox \"416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 14 13:23:22.009831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110580726.mount: Deactivated successfully. Jan 14 13:23:22.014891 containerd[1829]: time="2025-01-14T13:23:22.014848801Z" level=info msg="CreateContainer within sandbox \"416aa61b4c227792c02bcf2b8008c872fec6b197f7de8e65fc539adb4b2cfdd1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a59cf8d37c65614f1b05966ff3c8a6da7cb72c31776e8efea9f34998a4694b79\"" Jan 14 13:23:22.016601 containerd[1829]: time="2025-01-14T13:23:22.015505914Z" level=info msg="StartContainer for \"a59cf8d37c65614f1b05966ff3c8a6da7cb72c31776e8efea9f34998a4694b79\"" Jan 14 13:23:22.082137 containerd[1829]: time="2025-01-14T13:23:22.082006095Z" level=info msg="StartContainer for \"a59cf8d37c65614f1b05966ff3c8a6da7cb72c31776e8efea9f34998a4694b79\" returns successfully" Jan 14 13:23:22.199042 kubelet[2744]: I0114 13:23:22.199006 2744 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-cq5pv" podStartSLOduration=5.697988025 podStartE2EDuration="11.198961281s" podCreationTimestamp="2025-01-14 13:23:11 +0000 UTC" firstStartedPulling="2025-01-14 13:23:16.467821178 +0000 UTC m=+27.288012440" lastFinishedPulling="2025-01-14 13:23:21.968794434 +0000 UTC m=+32.788985696" observedRunningTime="2025-01-14 13:23:22.19889618 +0000 UTC m=+33.019087442" watchObservedRunningTime="2025-01-14 13:23:22.198961281 +0000 UTC m=+33.019152443" Jan 14 13:23:22.899752 kubelet[2744]: E0114 13:23:22.899685 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 14 13:23:23.639357 containerd[1829]: time="2025-01-14T13:23:23.639280931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:23.642221 containerd[1829]: time="2025-01-14T13:23:23.642163680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 14 13:23:23.646123 containerd[1829]: time="2025-01-14T13:23:23.646074246Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:23.655679 containerd[1829]: time="2025-01-14T13:23:23.655613708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:23.656904 containerd[1829]: time="2025-01-14T13:23:23.656315920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.686479264s" Jan 14 13:23:23.656904 containerd[1829]: time="2025-01-14T13:23:23.656356621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 14 13:23:23.667452 containerd[1829]: time="2025-01-14T13:23:23.667417208Z" level=info msg="CreateContainer within sandbox \"f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 14 13:23:23.701573 containerd[1829]: time="2025-01-14T13:23:23.701531187Z" level=info msg="CreateContainer within sandbox \"f984b908596391037a416861ebf8c31536e89515b9fd88d96f01f6463847c2fd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c66d60acea3910503c3c884bb5b19a69d5fb52e28103c778b77634f370d2a652\"" Jan 14 13:23:23.702103 containerd[1829]: time="2025-01-14T13:23:23.702074797Z" level=info msg="StartContainer for \"c66d60acea3910503c3c884bb5b19a69d5fb52e28103c778b77634f370d2a652\"" Jan 14 13:23:23.783255 containerd[1829]: time="2025-01-14T13:23:23.783202874Z" level=info msg="StartContainer for \"c66d60acea3910503c3c884bb5b19a69d5fb52e28103c778b77634f370d2a652\" returns successfully" Jan 14 13:23:23.900058 kubelet[2744]: E0114 13:23:23.899866 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:23.979100 kubelet[2744]: I0114 13:23:23.979062 2744 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 14 13:23:23.979100 kubelet[2744]: I0114 13:23:23.979107 2744 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 14 13:23:24.210511 kubelet[2744]: I0114 13:23:24.210424 2744 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-hq5z2" podStartSLOduration=26.977757705 podStartE2EDuration="34.210385425s" podCreationTimestamp="2025-01-14 13:22:50 +0000 UTC" firstStartedPulling="2025-01-14 13:23:16.424045906 +0000 UTC m=+27.244237068" lastFinishedPulling="2025-01-14 13:23:23.656673626 +0000 UTC m=+34.476864788" observedRunningTime="2025-01-14 13:23:24.210272323 +0000 UTC m=+35.030463585" watchObservedRunningTime="2025-01-14 
13:23:24.210385425 +0000 UTC m=+35.030576587" Jan 14 13:23:24.900898 kubelet[2744]: E0114 13:23:24.900835 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:25.901748 kubelet[2744]: E0114 13:23:25.901685 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:26.902687 kubelet[2744]: E0114 13:23:26.902632 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:27.903586 kubelet[2744]: E0114 13:23:27.903525 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:28.904734 kubelet[2744]: E0114 13:23:28.904686 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:29.875200 kubelet[2744]: E0114 13:23:29.875170 2744 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:29.905512 kubelet[2744]: E0114 13:23:29.905470 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:29.906338 kubelet[2744]: I0114 13:23:29.906307 2744 topology_manager.go:215] "Topology Admit Handler" podUID="a39a6d16-3e12-4ef8-9348-3f6a5d1c78b7" podNamespace="default" podName="nfs-server-provisioner-0" Jan 14 13:23:29.942398 kubelet[2744]: I0114 13:23:29.942363 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a39a6d16-3e12-4ef8-9348-3f6a5d1c78b7-data\") pod \"nfs-server-provisioner-0\" (UID: \"a39a6d16-3e12-4ef8-9348-3f6a5d1c78b7\") " pod="default/nfs-server-provisioner-0" Jan 14 13:23:29.942398 kubelet[2744]: I0114 13:23:29.942410 2744 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpgj\" (UniqueName: \"kubernetes.io/projected/a39a6d16-3e12-4ef8-9348-3f6a5d1c78b7-kube-api-access-5lpgj\") pod \"nfs-server-provisioner-0\" (UID: \"a39a6d16-3e12-4ef8-9348-3f6a5d1c78b7\") " pod="default/nfs-server-provisioner-0" Jan 14 13:23:30.210182 containerd[1829]: time="2025-01-14T13:23:30.209684170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a39a6d16-3e12-4ef8-9348-3f6a5d1c78b7,Namespace:default,Attempt:0,}" Jan 14 13:23:30.351249 systemd-networkd[1371]: cali60e51b789ff: Link UP Jan 14 13:23:30.351474 systemd-networkd[1371]: cali60e51b789ff: Gained carrier Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.286 [INFO][4306] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.33-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default a39a6d16-3e12-4ef8-9348-3f6a5d1c78b7 1397 0 2025-01-14 13:23:29 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.4.33 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.200.4.33-k8s-nfs--server--provisioner--0-" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.286 [INFO][4306] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.33-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.310 [INFO][4316] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" HandleID="k8s-pod-network.f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Workload="10.200.4.33-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.319 [INFO][4316] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" HandleID="k8s-pod-network.f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Workload="10.200.4.33-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.33", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-14 13:23:30.310115442 +0000 UTC"}, Hostname:"10.200.4.33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.319 [INFO][4316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.319 [INFO][4316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.319 [INFO][4316] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.33' Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.320 [INFO][4316] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" host="10.200.4.33" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.323 [INFO][4316] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.33" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.327 [INFO][4316] ipam/ipam.go 489: Trying affinity for 192.168.102.192/26 host="10.200.4.33" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.328 [INFO][4316] ipam/ipam.go 155: Attempting to load block cidr=192.168.102.192/26 host="10.200.4.33" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.330 [INFO][4316] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="10.200.4.33" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.330 [INFO][4316] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" host="10.200.4.33" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.331 [INFO][4316] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845 Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.337 [INFO][4316] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" host="10.200.4.33" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.346 [INFO][4316] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.102.195/26] block=192.168.102.192/26 
handle="k8s-pod-network.f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" host="10.200.4.33" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.346 [INFO][4316] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.102.195/26] handle="k8s-pod-network.f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" host="10.200.4.33" Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.346 [INFO][4316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 13:23:30.363324 containerd[1829]: 2025-01-14 13:23:30.346 [INFO][4316] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.195/26] IPv6=[] ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" HandleID="k8s-pod-network.f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Workload="10.200.4.33-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:30.368868 containerd[1829]: 2025-01-14 13:23:30.348 [INFO][4306] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.33-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.33-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a39a6d16-3e12-4ef8-9348-3f6a5d1c78b7", ResourceVersion:"1397", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.33", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.102.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:30.368868 containerd[1829]: 2025-01-14 13:23:30.348 [INFO][4306] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.102.195/32] ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.33-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:30.368868 containerd[1829]: 2025-01-14 13:23:30.348 [INFO][4306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.33-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:30.368868 containerd[1829]: 2025-01-14 13:23:30.350 [INFO][4306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.33-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:30.369127 containerd[1829]: 2025-01-14 13:23:30.351 [INFO][4306] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.33-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.33-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a39a6d16-3e12-4ef8-9348-3f6a5d1c78b7", ResourceVersion:"1397", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.33", ContainerID:"f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.102.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"3a:5a:df:8c:6f:dd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 13:23:30.369127 containerd[1829]: 2025-01-14 13:23:30.361 [INFO][4306] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.33-k8s-nfs--server--provisioner--0-eth0" Jan 14 13:23:30.390748 containerd[1829]: time="2025-01-14T13:23:30.390619623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:23:30.390748 containerd[1829]: time="2025-01-14T13:23:30.390698224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:23:30.391001 containerd[1829]: time="2025-01-14T13:23:30.390767725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:30.391001 containerd[1829]: time="2025-01-14T13:23:30.390948629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:23:30.446615 containerd[1829]: time="2025-01-14T13:23:30.446570021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a39a6d16-3e12-4ef8-9348-3f6a5d1c78b7,Namespace:default,Attempt:0,} returns sandbox id \"f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845\"" Jan 14 13:23:30.448401 containerd[1829]: time="2025-01-14T13:23:30.448371656Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 14 13:23:30.906114 kubelet[2744]: E0114 13:23:30.906059 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:31.055886 systemd[1]: run-containerd-runc-k8s.io-f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845-runc.D6k5lF.mount: Deactivated successfully. 
Jan 14 13:23:31.598123 systemd-networkd[1371]: cali60e51b789ff: Gained IPv6LL Jan 14 13:23:31.907141 kubelet[2744]: E0114 13:23:31.906973 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:32.907305 kubelet[2744]: E0114 13:23:32.907255 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:33.021174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2278277438.mount: Deactivated successfully. Jan 14 13:23:33.909005 kubelet[2744]: E0114 13:23:33.908577 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:34.909613 kubelet[2744]: E0114 13:23:34.909544 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:35.834255 containerd[1829]: time="2025-01-14T13:23:35.834193294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:35.836821 containerd[1829]: time="2025-01-14T13:23:35.836738344Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 14 13:23:35.840181 containerd[1829]: time="2025-01-14T13:23:35.840118910Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:35.845412 containerd[1829]: time="2025-01-14T13:23:35.845344713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:35.847077 containerd[1829]: time="2025-01-14T13:23:35.846300832Z" 
level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.397892574s" Jan 14 13:23:35.847077 containerd[1829]: time="2025-01-14T13:23:35.846342432Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 14 13:23:35.848542 containerd[1829]: time="2025-01-14T13:23:35.848504475Z" level=info msg="CreateContainer within sandbox \"f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 14 13:23:35.890498 containerd[1829]: time="2025-01-14T13:23:35.890445298Z" level=info msg="CreateContainer within sandbox \"f7ff55c6d7d35705c1f9e61a037c4dfdcdd4052053e921dd44be2683e33e3845\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4e91064868f84767074d586ca61baa43f12b61f3c07d86972987da0ed7edc414\"" Jan 14 13:23:35.891027 containerd[1829]: time="2025-01-14T13:23:35.890994009Z" level=info msg="StartContainer for \"4e91064868f84767074d586ca61baa43f12b61f3c07d86972987da0ed7edc414\"" Jan 14 13:23:35.910342 kubelet[2744]: E0114 13:23:35.910105 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:35.949700 containerd[1829]: time="2025-01-14T13:23:35.949650461Z" level=info msg="StartContainer for \"4e91064868f84767074d586ca61baa43f12b61f3c07d86972987da0ed7edc414\" returns successfully" Jan 14 13:23:36.911047 kubelet[2744]: E0114 13:23:36.910991 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:37.911961 kubelet[2744]: E0114 13:23:37.911900 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:38.912351 kubelet[2744]: E0114 13:23:38.912295 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:39.913513 kubelet[2744]: E0114 13:23:39.913450 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:40.914676 kubelet[2744]: E0114 13:23:40.914610 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:41.915515 kubelet[2744]: E0114 13:23:41.915460 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:42.916395 kubelet[2744]: E0114 13:23:42.916223 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:43.917179 kubelet[2744]: E0114 13:23:43.917118 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:44.917678 kubelet[2744]: E0114 13:23:44.917621 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:45.918567 kubelet[2744]: E0114 13:23:45.918505 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:46.919526 kubelet[2744]: E0114 13:23:46.919464 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:47.920625 kubelet[2744]: E0114 13:23:47.920540 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:48.921541 kubelet[2744]: E0114 13:23:48.921480 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:49.875148 kubelet[2744]: E0114 13:23:49.875085 2744 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:49.899005 containerd[1829]: time="2025-01-14T13:23:49.898757958Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:49.899005 containerd[1829]: time="2025-01-14T13:23:49.898893060Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:49.899005 containerd[1829]: time="2025-01-14T13:23:49.898945661Z" level=info msg="StopPodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:49.899617 containerd[1829]: time="2025-01-14T13:23:49.899355667Z" level=info msg="RemovePodSandbox for \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:49.899617 containerd[1829]: time="2025-01-14T13:23:49.899386068Z" level=info msg="Forcibly stopping sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\"" Jan 14 13:23:49.899716 containerd[1829]: time="2025-01-14T13:23:49.899607171Z" level=info msg="TearDown network for sandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" successfully" Jan 14 13:23:49.905390 containerd[1829]: time="2025-01-14T13:23:49.905359659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.905499 containerd[1829]: time="2025-01-14T13:23:49.905413159Z" level=info msg="RemovePodSandbox \"b58d2376941d19c7af426e4138d024cb66f5a5187b8fbfac90d5fb5fa685ab16\" returns successfully" Jan 14 13:23:49.905830 containerd[1829]: time="2025-01-14T13:23:49.905806065Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:49.905927 containerd[1829]: time="2025-01-14T13:23:49.905891967Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:49.905927 containerd[1829]: time="2025-01-14T13:23:49.905907167Z" level=info msg="StopPodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:49.906225 containerd[1829]: time="2025-01-14T13:23:49.906203471Z" level=info msg="RemovePodSandbox for \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:49.906300 containerd[1829]: time="2025-01-14T13:23:49.906229072Z" level=info msg="Forcibly stopping sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\"" Jan 14 13:23:49.906346 containerd[1829]: time="2025-01-14T13:23:49.906301473Z" level=info msg="TearDown network for sandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" successfully" Jan 14 13:23:49.911614 containerd[1829]: time="2025-01-14T13:23:49.911584453Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.911705 containerd[1829]: time="2025-01-14T13:23:49.911626854Z" level=info msg="RemovePodSandbox \"75f7b77c75dab5ca2924ea67ed37489d9d58e98d841312006abf604d3ad6cda0\" returns successfully" Jan 14 13:23:49.912016 containerd[1829]: time="2025-01-14T13:23:49.911928958Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:49.912121 containerd[1829]: time="2025-01-14T13:23:49.912033360Z" level=info msg="TearDown network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" successfully" Jan 14 13:23:49.912121 containerd[1829]: time="2025-01-14T13:23:49.912048260Z" level=info msg="StopPodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" returns successfully" Jan 14 13:23:49.912352 containerd[1829]: time="2025-01-14T13:23:49.912322064Z" level=info msg="RemovePodSandbox for \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:49.912406 containerd[1829]: time="2025-01-14T13:23:49.912348565Z" level=info msg="Forcibly stopping sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\"" Jan 14 13:23:49.912515 containerd[1829]: time="2025-01-14T13:23:49.912425166Z" level=info msg="TearDown network for sandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" successfully" Jan 14 13:23:49.919353 containerd[1829]: time="2025-01-14T13:23:49.919327271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.919459 containerd[1829]: time="2025-01-14T13:23:49.919372871Z" level=info msg="RemovePodSandbox \"625d3ca2ccf6cd623eee2a2c18fa4400b8fc3d151a6bf26773e7dcdd3a789f50\" returns successfully" Jan 14 13:23:49.919749 containerd[1829]: time="2025-01-14T13:23:49.919719677Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" Jan 14 13:23:49.919840 containerd[1829]: time="2025-01-14T13:23:49.919821478Z" level=info msg="TearDown network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" successfully" Jan 14 13:23:49.919971 containerd[1829]: time="2025-01-14T13:23:49.919836478Z" level=info msg="StopPodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" returns successfully" Jan 14 13:23:49.920154 containerd[1829]: time="2025-01-14T13:23:49.920100782Z" level=info msg="RemovePodSandbox for \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" Jan 14 13:23:49.920154 containerd[1829]: time="2025-01-14T13:23:49.920131183Z" level=info msg="Forcibly stopping sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\"" Jan 14 13:23:49.920261 containerd[1829]: time="2025-01-14T13:23:49.920207784Z" level=info msg="TearDown network for sandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" successfully" Jan 14 13:23:49.922557 kubelet[2744]: E0114 13:23:49.922520 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:49.927884 containerd[1829]: time="2025-01-14T13:23:49.926327677Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.927884 containerd[1829]: time="2025-01-14T13:23:49.926383178Z" level=info msg="RemovePodSandbox \"941ace273429e6ef6eec0f4556d11216670c6c55ef2b38b4f9ff52b1af26c06e\" returns successfully" Jan 14 13:23:49.927884 containerd[1829]: time="2025-01-14T13:23:49.926687982Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\"" Jan 14 13:23:49.927884 containerd[1829]: time="2025-01-14T13:23:49.926813784Z" level=info msg="TearDown network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" successfully" Jan 14 13:23:49.927884 containerd[1829]: time="2025-01-14T13:23:49.926835485Z" level=info msg="StopPodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" returns successfully" Jan 14 13:23:49.927884 containerd[1829]: time="2025-01-14T13:23:49.927104189Z" level=info msg="RemovePodSandbox for \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\"" Jan 14 13:23:49.927884 containerd[1829]: time="2025-01-14T13:23:49.927127689Z" level=info msg="Forcibly stopping sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\"" Jan 14 13:23:49.927884 containerd[1829]: time="2025-01-14T13:23:49.927199690Z" level=info msg="TearDown network for sandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" successfully" Jan 14 13:23:49.935933 containerd[1829]: time="2025-01-14T13:23:49.935891322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.936048 containerd[1829]: time="2025-01-14T13:23:49.935951123Z" level=info msg="RemovePodSandbox \"31d23f4e6457a42311b6705dbe50d50cb9bf8633cfd0d5faff891890aa66696d\" returns successfully" Jan 14 13:23:49.936329 containerd[1829]: time="2025-01-14T13:23:49.936300528Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\"" Jan 14 13:23:49.936423 containerd[1829]: time="2025-01-14T13:23:49.936404330Z" level=info msg="TearDown network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" successfully" Jan 14 13:23:49.936471 containerd[1829]: time="2025-01-14T13:23:49.936420130Z" level=info msg="StopPodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" returns successfully" Jan 14 13:23:49.936715 containerd[1829]: time="2025-01-14T13:23:49.936686334Z" level=info msg="RemovePodSandbox for \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\"" Jan 14 13:23:49.936715 containerd[1829]: time="2025-01-14T13:23:49.936714335Z" level=info msg="Forcibly stopping sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\"" Jan 14 13:23:49.936866 containerd[1829]: time="2025-01-14T13:23:49.936814036Z" level=info msg="TearDown network for sandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" successfully" Jan 14 13:23:49.944340 containerd[1829]: time="2025-01-14T13:23:49.944311750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.944444 containerd[1829]: time="2025-01-14T13:23:49.944354451Z" level=info msg="RemovePodSandbox \"4392442aa8f23ad1f91c66351e16a2c74fce63c6d144ce59b21b994328c42527\" returns successfully" Jan 14 13:23:49.944787 containerd[1829]: time="2025-01-14T13:23:49.944670256Z" level=info msg="StopPodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\"" Jan 14 13:23:49.944787 containerd[1829]: time="2025-01-14T13:23:49.944764457Z" level=info msg="TearDown network for sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" successfully" Jan 14 13:23:49.944954 containerd[1829]: time="2025-01-14T13:23:49.944793457Z" level=info msg="StopPodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" returns successfully" Jan 14 13:23:49.945826 containerd[1829]: time="2025-01-14T13:23:49.945268765Z" level=info msg="RemovePodSandbox for \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\"" Jan 14 13:23:49.945826 containerd[1829]: time="2025-01-14T13:23:49.945299865Z" level=info msg="Forcibly stopping sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\"" Jan 14 13:23:49.945826 containerd[1829]: time="2025-01-14T13:23:49.945372466Z" level=info msg="TearDown network for sandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" successfully" Jan 14 13:23:49.957234 containerd[1829]: time="2025-01-14T13:23:49.957200946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.957324 containerd[1829]: time="2025-01-14T13:23:49.957251847Z" level=info msg="RemovePodSandbox \"b9355f4ba9202a2f2963714d3561b0ac7c5fef658111324ec0886f0dd51e0613\" returns successfully" Jan 14 13:23:49.957574 containerd[1829]: time="2025-01-14T13:23:49.957551151Z" level=info msg="StopPodSandbox for \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\"" Jan 14 13:23:49.957820 containerd[1829]: time="2025-01-14T13:23:49.957642152Z" level=info msg="TearDown network for sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\" successfully" Jan 14 13:23:49.957820 containerd[1829]: time="2025-01-14T13:23:49.957658953Z" level=info msg="StopPodSandbox for \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\" returns successfully" Jan 14 13:23:49.958376 containerd[1829]: time="2025-01-14T13:23:49.957980158Z" level=info msg="RemovePodSandbox for \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\"" Jan 14 13:23:49.958376 containerd[1829]: time="2025-01-14T13:23:49.958012158Z" level=info msg="Forcibly stopping sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\"" Jan 14 13:23:49.958376 containerd[1829]: time="2025-01-14T13:23:49.958078459Z" level=info msg="TearDown network for sandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\" successfully" Jan 14 13:23:49.965201 containerd[1829]: time="2025-01-14T13:23:49.965146266Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.965201 containerd[1829]: time="2025-01-14T13:23:49.965188867Z" level=info msg="RemovePodSandbox \"d8e3a6632ec892a2b504eb8ac51fefb61989dca700f48e50f93233645ac4508b\" returns successfully" Jan 14 13:23:49.965722 containerd[1829]: time="2025-01-14T13:23:49.965587673Z" level=info msg="StopPodSandbox for \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\"" Jan 14 13:23:49.965722 containerd[1829]: time="2025-01-14T13:23:49.965661174Z" level=info msg="TearDown network for sandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\" successfully" Jan 14 13:23:49.965722 containerd[1829]: time="2025-01-14T13:23:49.965671174Z" level=info msg="StopPodSandbox for \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\" returns successfully" Jan 14 13:23:49.966069 containerd[1829]: time="2025-01-14T13:23:49.966046580Z" level=info msg="RemovePodSandbox for \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\"" Jan 14 13:23:49.966137 containerd[1829]: time="2025-01-14T13:23:49.966075081Z" level=info msg="Forcibly stopping sandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\"" Jan 14 13:23:49.966455 containerd[1829]: time="2025-01-14T13:23:49.966160782Z" level=info msg="TearDown network for sandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\" successfully" Jan 14 13:23:49.973310 containerd[1829]: time="2025-01-14T13:23:49.973266390Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.973537 containerd[1829]: time="2025-01-14T13:23:49.973327591Z" level=info msg="RemovePodSandbox \"0adc38d1fbaf8061dd092cf09833ea86fc02acf1d5f0e9e64fbfba21766afee6\" returns successfully" Jan 14 13:23:49.973702 containerd[1829]: time="2025-01-14T13:23:49.973674596Z" level=info msg="StopPodSandbox for \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\"" Jan 14 13:23:49.973862 containerd[1829]: time="2025-01-14T13:23:49.973807398Z" level=info msg="TearDown network for sandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\" successfully" Jan 14 13:23:49.973862 containerd[1829]: time="2025-01-14T13:23:49.973828698Z" level=info msg="StopPodSandbox for \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\" returns successfully" Jan 14 13:23:49.974306 containerd[1829]: time="2025-01-14T13:23:49.974278105Z" level=info msg="RemovePodSandbox for \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\"" Jan 14 13:23:49.974410 containerd[1829]: time="2025-01-14T13:23:49.974310006Z" level=info msg="Forcibly stopping sandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\"" Jan 14 13:23:49.974469 containerd[1829]: time="2025-01-14T13:23:49.974396107Z" level=info msg="TearDown network for sandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\" successfully" Jan 14 13:23:49.981551 containerd[1829]: time="2025-01-14T13:23:49.981519215Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.981652 containerd[1829]: time="2025-01-14T13:23:49.981563516Z" level=info msg="RemovePodSandbox \"567b49addecc8510152bcd4afe7abb045f93f0196573187fc1ce2a57169ccce7\" returns successfully" Jan 14 13:23:49.981955 containerd[1829]: time="2025-01-14T13:23:49.981921821Z" level=info msg="StopPodSandbox for \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\"" Jan 14 13:23:49.982045 containerd[1829]: time="2025-01-14T13:23:49.982025423Z" level=info msg="TearDown network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" successfully" Jan 14 13:23:49.982092 containerd[1829]: time="2025-01-14T13:23:49.982047023Z" level=info msg="StopPodSandbox for \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" returns successfully" Jan 14 13:23:49.982354 containerd[1829]: time="2025-01-14T13:23:49.982325527Z" level=info msg="RemovePodSandbox for \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\"" Jan 14 13:23:49.982438 containerd[1829]: time="2025-01-14T13:23:49.982354228Z" level=info msg="Forcibly stopping sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\"" Jan 14 13:23:49.982483 containerd[1829]: time="2025-01-14T13:23:49.982429329Z" level=info msg="TearDown network for sandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" successfully" Jan 14 13:23:49.988528 containerd[1829]: time="2025-01-14T13:23:49.988498321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.988650 containerd[1829]: time="2025-01-14T13:23:49.988543022Z" level=info msg="RemovePodSandbox \"67cacb37e7143291afe856dad8f7df0ede50208a6cc175e909d0d07250164f79\" returns successfully" Jan 14 13:23:49.988956 containerd[1829]: time="2025-01-14T13:23:49.988918627Z" level=info msg="StopPodSandbox for \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\"" Jan 14 13:23:49.989076 containerd[1829]: time="2025-01-14T13:23:49.989014929Z" level=info msg="TearDown network for sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\" successfully" Jan 14 13:23:49.989170 containerd[1829]: time="2025-01-14T13:23:49.989076430Z" level=info msg="StopPodSandbox for \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\" returns successfully" Jan 14 13:23:49.989449 containerd[1829]: time="2025-01-14T13:23:49.989352034Z" level=info msg="RemovePodSandbox for \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\"" Jan 14 13:23:49.989449 containerd[1829]: time="2025-01-14T13:23:49.989382334Z" level=info msg="Forcibly stopping sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\"" Jan 14 13:23:49.989575 containerd[1829]: time="2025-01-14T13:23:49.989458736Z" level=info msg="TearDown network for sandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\" successfully" Jan 14 13:23:49.995565 containerd[1829]: time="2025-01-14T13:23:49.995537628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:49.995656 containerd[1829]: time="2025-01-14T13:23:49.995579429Z" level=info msg="RemovePodSandbox \"929ff4814eff9e148d52ed608e93664dde59015cf57c2bc09c18d9040c66c9f8\" returns successfully" Jan 14 13:23:49.995896 containerd[1829]: time="2025-01-14T13:23:49.995871733Z" level=info msg="StopPodSandbox for \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\"" Jan 14 13:23:49.996014 containerd[1829]: time="2025-01-14T13:23:49.995962134Z" level=info msg="TearDown network for sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\" successfully" Jan 14 13:23:49.996014 containerd[1829]: time="2025-01-14T13:23:49.995980635Z" level=info msg="StopPodSandbox for \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\" returns successfully" Jan 14 13:23:49.996269 containerd[1829]: time="2025-01-14T13:23:49.996238239Z" level=info msg="RemovePodSandbox for \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\"" Jan 14 13:23:49.996269 containerd[1829]: time="2025-01-14T13:23:49.996262539Z" level=info msg="Forcibly stopping sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\"" Jan 14 13:23:49.996433 containerd[1829]: time="2025-01-14T13:23:49.996330040Z" level=info msg="TearDown network for sandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\" successfully" Jan 14 13:23:50.004339 containerd[1829]: time="2025-01-14T13:23:50.004264160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:50.004658 containerd[1829]: time="2025-01-14T13:23:50.004372862Z" level=info msg="RemovePodSandbox \"8f6a450f8da83fecfb7326f1f3f6e9d3c9a17d1cb6703bc2e8d8c7915ef67ed8\" returns successfully" Jan 14 13:23:50.004921 containerd[1829]: time="2025-01-14T13:23:50.004893270Z" level=info msg="StopPodSandbox for \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\"" Jan 14 13:23:50.005013 containerd[1829]: time="2025-01-14T13:23:50.004984371Z" level=info msg="TearDown network for sandbox \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\" successfully" Jan 14 13:23:50.005013 containerd[1829]: time="2025-01-14T13:23:50.004999472Z" level=info msg="StopPodSandbox for \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\" returns successfully" Jan 14 13:23:50.005334 containerd[1829]: time="2025-01-14T13:23:50.005265576Z" level=info msg="RemovePodSandbox for \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\"" Jan 14 13:23:50.005334 containerd[1829]: time="2025-01-14T13:23:50.005292676Z" level=info msg="Forcibly stopping sandbox \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\"" Jan 14 13:23:50.005467 containerd[1829]: time="2025-01-14T13:23:50.005368177Z" level=info msg="TearDown network for sandbox \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\" successfully" Jan 14 13:23:50.013418 containerd[1829]: time="2025-01-14T13:23:50.013393499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 14 13:23:50.013533 containerd[1829]: time="2025-01-14T13:23:50.013433500Z" level=info msg="RemovePodSandbox \"cd3613cfa7ddab54a20fe35776510dc7ee45404a987e608ce6e299963f035d8a\" returns successfully" Jan 14 13:23:50.922876 kubelet[2744]: E0114 13:23:50.922809 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:51.923651 kubelet[2744]: E0114 13:23:51.923592 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:52.924849 kubelet[2744]: E0114 13:23:52.924793 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:53.925227 kubelet[2744]: E0114 13:23:53.925159 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:54.925610 kubelet[2744]: E0114 13:23:54.925545 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:55.926137 kubelet[2744]: E0114 13:23:55.926074 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:56.926468 kubelet[2744]: E0114 13:23:56.926404 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:57.927103 kubelet[2744]: E0114 13:23:57.927012 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 14 13:23:58.919237 systemd[1]: run-containerd-runc-k8s.io-0f72256ae3defd6c5697835b8e53f5d4cee4c0cb2697b73312ccad6aadebf352-runc.xWxlRo.mount: Deactivated successfully. 
Jan 14 13:23:58.927734 kubelet[2744]: E0114 13:23:58.927600 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:23:59.928290 kubelet[2744]: E0114 13:23:59.928232 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:00.929295 kubelet[2744]: E0114 13:24:00.929230 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:00.965556 kubelet[2744]: I0114 13:24:00.965510 2744 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=26.566771996 podStartE2EDuration="31.965462086s" podCreationTimestamp="2025-01-14 13:23:29 +0000 UTC" firstStartedPulling="2025-01-14 13:23:30.447931748 +0000 UTC m=+41.268122910" lastFinishedPulling="2025-01-14 13:23:35.846621738 +0000 UTC m=+46.666813000" observedRunningTime="2025-01-14 13:23:36.296431769 +0000 UTC m=+47.116623031" watchObservedRunningTime="2025-01-14 13:24:00.965462086 +0000 UTC m=+71.785653248"
Jan 14 13:24:00.965891 kubelet[2744]: I0114 13:24:00.965859 2744 topology_manager.go:215] "Topology Admit Handler" podUID="771191ff-dd57-47b0-b59f-1748134f3583" podNamespace="default" podName="test-pod-1"
Jan 14 13:24:01.037986 kubelet[2744]: I0114 13:24:01.037830 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-55c1a844-e14a-4664-a4a6-4af1c17f3112\" (UniqueName: \"kubernetes.io/nfs/771191ff-dd57-47b0-b59f-1748134f3583-pvc-55c1a844-e14a-4664-a4a6-4af1c17f3112\") pod \"test-pod-1\" (UID: \"771191ff-dd57-47b0-b59f-1748134f3583\") " pod="default/test-pod-1"
Jan 14 13:24:01.037986 kubelet[2744]: I0114 13:24:01.037895 2744 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8wf8\" (UniqueName: \"kubernetes.io/projected/771191ff-dd57-47b0-b59f-1748134f3583-kube-api-access-x8wf8\") pod \"test-pod-1\" (UID: \"771191ff-dd57-47b0-b59f-1748134f3583\") " pod="default/test-pod-1"
Jan 14 13:24:01.321877 kernel: FS-Cache: Loaded
Jan 14 13:24:01.480878 kernel: RPC: Registered named UNIX socket transport module.
Jan 14 13:24:01.481025 kernel: RPC: Registered udp transport module.
Jan 14 13:24:01.481049 kernel: RPC: Registered tcp transport module.
Jan 14 13:24:01.484749 kernel: RPC: Registered tcp-with-tls transport module.
Jan 14 13:24:01.484857 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 14 13:24:01.930322 kubelet[2744]: E0114 13:24:01.930253 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:01.951172 kernel: NFS: Registering the id_resolver key type
Jan 14 13:24:01.951286 kernel: Key type id_resolver registered
Jan 14 13:24:01.951308 kernel: Key type id_legacy registered
Jan 14 13:24:02.100724 nfsidmap[4538]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.0-a-4236615464'
Jan 14 13:24:02.116186 nfsidmap[4539]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.0-a-4236615464'
Jan 14 13:24:02.170081 containerd[1829]: time="2025-01-14T13:24:02.170024297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:771191ff-dd57-47b0-b59f-1748134f3583,Namespace:default,Attempt:0,}"
Jan 14 13:24:02.334564 systemd-networkd[1371]: cali5ec59c6bf6e: Link UP
Jan 14 13:24:02.336569 systemd-networkd[1371]: cali5ec59c6bf6e: Gained carrier
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.237 [INFO][4541] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.33-k8s-test--pod--1-eth0 default 771191ff-dd57-47b0-b59f-1748134f3583 1495 0 2025-01-14 13:23:31 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.33 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.33-k8s-test--pod--1-"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.237 [INFO][4541] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.33-k8s-test--pod--1-eth0"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.263 [INFO][4551] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" HandleID="k8s-pod-network.3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Workload="10.200.4.33-k8s-test--pod--1-eth0"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.273 [INFO][4551] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" HandleID="k8s-pod-network.3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Workload="10.200.4.33-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bd1e0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.33", "pod":"test-pod-1", "timestamp":"2025-01-14 13:24:02.263342943 +0000 UTC"}, Hostname:"10.200.4.33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.273 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.273 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.273 [INFO][4551] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.33'
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.274 [INFO][4551] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" host="10.200.4.33"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.282 [INFO][4551] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.4.33"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.288 [INFO][4551] ipam/ipam.go 489: Trying affinity for 192.168.102.192/26 host="10.200.4.33"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.291 [INFO][4551] ipam/ipam.go 155: Attempting to load block cidr=192.168.102.192/26 host="10.200.4.33"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.295 [INFO][4551] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="10.200.4.33"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.295 [INFO][4551] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" host="10.200.4.33"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.298 [INFO][4551] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.306 [INFO][4551] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" host="10.200.4.33"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.329 [INFO][4551] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.102.196/26] block=192.168.102.192/26 handle="k8s-pod-network.3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" host="10.200.4.33"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.329 [INFO][4551] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.102.196/26] handle="k8s-pod-network.3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" host="10.200.4.33"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.329 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.329 [INFO][4551] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.196/26] IPv6=[] ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" HandleID="k8s-pod-network.3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Workload="10.200.4.33-k8s-test--pod--1-eth0"
Jan 14 13:24:02.357000 containerd[1829]: 2025-01-14 13:24:02.331 [INFO][4541] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.33-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.33-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"771191ff-dd57-47b0-b59f-1748134f3583", ResourceVersion:"1495", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.33", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.102.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:24:02.358224 containerd[1829]: 2025-01-14 13:24:02.331 [INFO][4541] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.102.196/32] ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.33-k8s-test--pod--1-eth0"
Jan 14 13:24:02.358224 containerd[1829]: 2025-01-14 13:24:02.331 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.33-k8s-test--pod--1-eth0"
Jan 14 13:24:02.358224 containerd[1829]: 2025-01-14 13:24:02.333 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.33-k8s-test--pod--1-eth0"
Jan 14 13:24:02.358224 containerd[1829]: 2025-01-14 13:24:02.334 [INFO][4541] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.33-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.33-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"771191ff-dd57-47b0-b59f-1748134f3583", ResourceVersion:"1495", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 23, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.33", ContainerID:"3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.102.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"02:61:4b:6f:f7:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:24:02.358224 containerd[1829]: 2025-01-14 13:24:02.355 [INFO][4541] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.33-k8s-test--pod--1-eth0"
Jan 14 13:24:02.398807 containerd[1829]: time="2025-01-14T13:24:02.388873887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:24:02.398807 containerd[1829]: time="2025-01-14T13:24:02.388964589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:24:02.398807 containerd[1829]: time="2025-01-14T13:24:02.388983589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:24:02.398807 containerd[1829]: time="2025-01-14T13:24:02.389098091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:24:02.455239 containerd[1829]: time="2025-01-14T13:24:02.455193614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:771191ff-dd57-47b0-b59f-1748134f3583,Namespace:default,Attempt:0,} returns sandbox id \"3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f\""
Jan 14 13:24:02.457075 containerd[1829]: time="2025-01-14T13:24:02.457046543Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 14 13:24:02.820055 containerd[1829]: time="2025-01-14T13:24:02.819999265Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:24:02.822798 containerd[1829]: time="2025-01-14T13:24:02.822505604Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 14 13:24:02.825151 containerd[1829]: time="2025-01-14T13:24:02.825113444Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 368.034701ms"
Jan 14 13:24:02.825151 containerd[1829]: time="2025-01-14T13:24:02.825151645Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 14 13:24:02.827025 containerd[1829]: time="2025-01-14T13:24:02.826994573Z" level=info msg="CreateContainer within sandbox \"3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 14 13:24:02.855312 containerd[1829]: time="2025-01-14T13:24:02.855260211Z" level=info msg="CreateContainer within sandbox \"3b3edf9177ee1aab37023f13688ee5498c45de3f0f87763f53fd55137245767f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"bd41bb2f466d633d2aa22f683ed7c99dd5401817a5afe51ce749c9e3d760d7f1\""
Jan 14 13:24:02.856010 containerd[1829]: time="2025-01-14T13:24:02.855922921Z" level=info msg="StartContainer for \"bd41bb2f466d633d2aa22f683ed7c99dd5401817a5afe51ce749c9e3d760d7f1\""
Jan 14 13:24:02.912194 containerd[1829]: time="2025-01-14T13:24:02.912125892Z" level=info msg="StartContainer for \"bd41bb2f466d633d2aa22f683ed7c99dd5401817a5afe51ce749c9e3d760d7f1\" returns successfully"
Jan 14 13:24:02.931053 kubelet[2744]: E0114 13:24:02.930911 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:03.355560 kubelet[2744]: I0114 13:24:03.355508 2744 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=31.986620845 podStartE2EDuration="32.355467658s" podCreationTimestamp="2025-01-14 13:23:31 +0000 UTC" firstStartedPulling="2025-01-14 13:24:02.456571936 +0000 UTC m=+73.276763098" lastFinishedPulling="2025-01-14 13:24:02.825418749 +0000 UTC m=+73.645609911" observedRunningTime="2025-01-14 13:24:03.355303956 +0000 UTC m=+74.175495118" watchObservedRunningTime="2025-01-14 13:24:03.355467658 +0000 UTC m=+74.175658820"
Jan 14 13:24:03.931552 kubelet[2744]: E0114 13:24:03.931487 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:03.982047 systemd-networkd[1371]: cali5ec59c6bf6e: Gained IPv6LL
Jan 14 13:24:04.932088 kubelet[2744]: E0114 13:24:04.932035 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:05.932306 kubelet[2744]: E0114 13:24:05.932239 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:06.933486 kubelet[2744]: E0114 13:24:06.933422 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:07.934562 kubelet[2744]: E0114 13:24:07.934499 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:08.934910 kubelet[2744]: E0114 13:24:08.934824 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:09.875076 kubelet[2744]: E0114 13:24:09.875014 2744 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:24:09.935465 kubelet[2744]: E0114 13:24:09.935414 2744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"