Jan 29 11:59:28.214595 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 11:59:28.214635 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:59:28.214645 kernel: BIOS-provided physical RAM map:
Jan 29 11:59:28.214652 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:59:28.214657 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 29 11:59:28.214663 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 29 11:59:28.214671 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 29 11:59:28.214680 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 29 11:59:28.214686 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 29 11:59:28.214692 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 29 11:59:28.214698 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 29 11:59:28.214704 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 29 11:59:28.214711 kernel: printk: bootconsole [earlyser0] enabled
Jan 29 11:59:28.214717 kernel: NX (Execute Disable) protection: active
Jan 29 11:59:28.214728 kernel: APIC: Static calls initialized
Jan 29 11:59:28.214735 kernel: efi: EFI v2.7 by Microsoft
Jan 29 11:59:28.214743 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Jan 29 11:59:28.214750 kernel: SMBIOS 3.1.0 present.
Jan 29 11:59:28.214757 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 29 11:59:28.214764 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 29 11:59:28.214771 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 29 11:59:28.214779 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 29 11:59:28.214786 kernel: Hyper-V: Nested features: 0x1e0101
Jan 29 11:59:28.214793 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 29 11:59:28.214802 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 29 11:59:28.214809 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 29 11:59:28.214817 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 29 11:59:28.214824 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 29 11:59:28.214832 kernel: tsc: Detected 2593.904 MHz processor
Jan 29 11:59:28.214840 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:59:28.214847 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:59:28.214854 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 29 11:59:28.214862 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 29 11:59:28.214871 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:59:28.214879 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 29 11:59:28.214886 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 29 11:59:28.214893 kernel: Using GB pages for direct mapping
Jan 29 11:59:28.214900 kernel: Secure boot disabled
Jan 29 11:59:28.214907 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:59:28.214915 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 29 11:59:28.214925 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214935 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214943 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 29 11:59:28.214950 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 29 11:59:28.214958 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214966 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214974 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214984 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214992 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.215000 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.215008 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.215015 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 29 11:59:28.215023 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 29 11:59:28.215031 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 29 11:59:28.215038 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 29 11:59:28.215048 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 29 11:59:28.215056 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 29 11:59:28.215064 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 29 11:59:28.215071 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 29 11:59:28.215079 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 29 11:59:28.215087 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 29 11:59:28.215094 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 11:59:28.215102 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 11:59:28.215109 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 29 11:59:28.215119 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 29 11:59:28.215127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 29 11:59:28.215135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 29 11:59:28.215142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 29 11:59:28.215150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 29 11:59:28.215158 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 29 11:59:28.215165 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 29 11:59:28.215173 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 29 11:59:28.215180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 29 11:59:28.215190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 29 11:59:28.215198 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 29 11:59:28.215206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 29 11:59:28.215213 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 29 11:59:28.215221 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 29 11:59:28.215229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 29 11:59:28.215237 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 29 11:59:28.215244 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 29 11:59:28.215252 kernel: Zone ranges:
Jan 29 11:59:28.215262 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:59:28.215270 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 11:59:28.215278 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Jan 29 11:59:28.215285 kernel: Movable zone start for each node
Jan 29 11:59:28.215293 kernel: Early memory node ranges
Jan 29 11:59:28.215301 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 29 11:59:28.215309 kernel:   node   0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 29 11:59:28.215317 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 29 11:59:28.215324 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 29 11:59:28.215334 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 29 11:59:28.215342 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:59:28.215349 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 29 11:59:28.215357 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 29 11:59:28.215364 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 29 11:59:28.215372 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 29 11:59:28.215379 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:59:28.215387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:59:28.215395 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:59:28.215405 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 29 11:59:28.215412 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 11:59:28.215420 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 29 11:59:28.215428 kernel: Booting paravirtualized kernel on Hyper-V
Jan 29 11:59:28.215436 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:59:28.215444 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 11:59:28.215451 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 11:59:28.215459 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 11:59:28.215466 kernel: pcpu-alloc: [0] 0 1
Jan 29 11:59:28.215477 kernel: Hyper-V: PV spinlocks enabled
Jan 29 11:59:28.215485 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:59:28.215494 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:59:28.215503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:59:28.215511 kernel: random: crng init done
Jan 29 11:59:28.215518 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 29 11:59:28.215526 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:59:28.215534 kernel: Fallback order for Node 0: 0
Jan 29 11:59:28.215544 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2062618
Jan 29 11:59:28.215568 kernel: Policy zone: Normal
Jan 29 11:59:28.215578 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:59:28.215587 kernel: software IO TLB: area num 2.
Jan 29 11:59:28.215596 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 310124K reserved, 0K cma-reserved)
Jan 29 11:59:28.215604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:59:28.215612 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 11:59:28.215620 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:59:28.215629 kernel: Dynamic Preempt: voluntary
Jan 29 11:59:28.215637 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:59:28.215646 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:59:28.215657 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:59:28.215665 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:59:28.215673 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:59:28.215681 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:59:28.215689 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:59:28.215700 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:59:28.215708 kernel: Using NULL legacy PIC
Jan 29 11:59:28.215716 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 29 11:59:28.215725 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:59:28.215733 kernel: Console: colour dummy device 80x25
Jan 29 11:59:28.215741 kernel: printk: console [tty1] enabled
Jan 29 11:59:28.215749 kernel: printk: console [ttyS0] enabled
Jan 29 11:59:28.215758 kernel: printk: bootconsole [earlyser0] disabled
Jan 29 11:59:28.215766 kernel: ACPI: Core revision 20230628
Jan 29 11:59:28.215774 kernel: Failed to register legacy timer interrupt
Jan 29 11:59:28.215785 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:59:28.215793 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 29 11:59:28.215801 kernel: Hyper-V: Using IPI hypercalls
Jan 29 11:59:28.215809 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 29 11:59:28.215817 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 29 11:59:28.215826 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 29 11:59:28.215834 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 29 11:59:28.215842 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 29 11:59:28.215851 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 29 11:59:28.215862 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Jan 29 11:59:28.215870 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 29 11:59:28.215878 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 29 11:59:28.215887 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:59:28.215895 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:59:28.215903 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:59:28.215910 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:59:28.215919 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 29 11:59:28.215927 kernel: RETBleed: Vulnerable
Jan 29 11:59:28.215937 kernel: Speculative Store Bypass: Vulnerable
Jan 29 11:59:28.215945 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 11:59:28.215953 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 11:59:28.215960 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 29 11:59:28.215968 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:59:28.215977 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:59:28.215985 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:59:28.215993 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 29 11:59:28.216001 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 29 11:59:28.216009 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 29 11:59:28.216017 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:59:28.216027 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 29 11:59:28.216035 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 29 11:59:28.216043 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 29 11:59:28.216051 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 29 11:59:28.216059 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:59:28.216067 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:59:28.216075 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:59:28.216084 kernel: landlock: Up and running.
Jan 29 11:59:28.216092 kernel: SELinux:  Initializing.
Jan 29 11:59:28.216100 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 11:59:28.216108 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 11:59:28.216117 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 29 11:59:28.216127 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:59:28.216135 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:59:28.216144 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:59:28.216152 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 29 11:59:28.216160 kernel: signal: max sigframe size: 3632
Jan 29 11:59:28.216168 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:59:28.216177 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 29 11:59:28.216185 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 11:59:28.216193 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:59:28.216203 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:59:28.216212 kernel: .... node  #0, CPUs:      #1
Jan 29 11:59:28.216220 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 29 11:59:28.216229 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 29 11:59:28.216237 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:59:28.216245 kernel: smpboot: Max logical packages: 1
Jan 29 11:59:28.216253 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Jan 29 11:59:28.216261 kernel: devtmpfs: initialized
Jan 29 11:59:28.216272 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:59:28.216280 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 29 11:59:28.216288 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:59:28.216297 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:59:28.216305 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:59:28.216314 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:59:28.216322 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:59:28.216330 kernel: audit: type=2000 audit(1738151966.028:1): state=initialized audit_enabled=0 res=1
Jan 29 11:59:28.216338 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:59:28.216348 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:59:28.216356 kernel: cpuidle: using governor menu
Jan 29 11:59:28.216364 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:59:28.216373 kernel: dca service started, version 1.12.1
Jan 29 11:59:28.216381 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 29 11:59:28.216390 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:59:28.216398 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:59:28.216406 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:59:28.216414 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:59:28.216424 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:59:28.216432 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:59:28.216440 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:59:28.216448 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:59:28.216456 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:59:28.216465 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:59:28.216473 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:59:28.216481 kernel: ACPI: Interpreter enabled
Jan 29 11:59:28.216489 kernel: ACPI: PM: (supports S0 S5)
Jan 29 11:59:28.216500 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:59:28.216509 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:59:28.216517 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 29 11:59:28.216525 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 29 11:59:28.216534 kernel: iommu: Default domain type: Translated
Jan 29 11:59:28.216542 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:59:28.216550 kernel: efivars: Registered efivars operations
Jan 29 11:59:28.222316 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:59:28.222327 kernel: PCI: System does not support PCI
Jan 29 11:59:28.222347 kernel: vgaarb: loaded
Jan 29 11:59:28.222356 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 29 11:59:28.222364 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:59:28.222373 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:59:28.222381 kernel: pnp: PnP ACPI init
Jan 29 11:59:28.222390 kernel: pnp: PnP ACPI: found 3 devices
Jan 29 11:59:28.222398 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:59:28.222407 kernel: NET: Registered PF_INET protocol family
Jan 29 11:59:28.222415 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 11:59:28.222426 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 29 11:59:28.222435 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:59:28.222443 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:59:28.222451 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 29 11:59:28.222459 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 29 11:59:28.222468 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 11:59:28.222476 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 11:59:28.222484 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:59:28.222492 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:59:28.222503 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:59:28.222511 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 11:59:28.222519 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 29 11:59:28.222527 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 11:59:28.222535 kernel: Initialise system trusted keyrings
Jan 29 11:59:28.222544 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 29 11:59:28.222558 kernel: Key type asymmetric registered
Jan 29 11:59:28.222567 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:59:28.222575 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:59:28.222586 kernel: io scheduler mq-deadline registered
Jan 29 11:59:28.222594 kernel: io scheduler kyber registered
Jan 29 11:59:28.222602 kernel: io scheduler bfq registered
Jan 29 11:59:28.222610 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:59:28.222618 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:59:28.222627 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:59:28.222635 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 29 11:59:28.222643 kernel: i8042: PNP: No PS/2 controller found.
Jan 29 11:59:28.222819 kernel: rtc_cmos 00:02: registered as rtc0
Jan 29 11:59:28.222910 kernel: rtc_cmos 00:02: setting system clock to 2025-01-29T11:59:27 UTC (1738151967)
Jan 29 11:59:28.222986 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 29 11:59:28.222997 kernel: intel_pstate: CPU model not supported
Jan 29 11:59:28.223005 kernel: efifb: probing for efifb
Jan 29 11:59:28.223014 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 29 11:59:28.223022 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 29 11:59:28.223030 kernel: efifb: scrolling: redraw
Jan 29 11:59:28.223041 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 11:59:28.223049 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 11:59:28.223057 kernel: fb0: EFI VGA frame buffer device
Jan 29 11:59:28.223065 kernel: pstore: Using crash dump compression: deflate
Jan 29 11:59:28.223074 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 11:59:28.223082 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:59:28.223090 kernel: Segment Routing with IPv6
Jan 29 11:59:28.223099 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:59:28.223107 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:59:28.223116 kernel: Key type dns_resolver registered
Jan 29 11:59:28.223126 kernel: IPI shorthand broadcast: enabled
Jan 29 11:59:28.223134 kernel: sched_clock: Marking stable (1081111300, 60917200)->(1435723800, -293695300)
Jan 29 11:59:28.223142 kernel: registered taskstats version 1
Jan 29 11:59:28.223151 kernel: Loading compiled-in X.509 certificates
Jan 29 11:59:28.223159 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 29 11:59:28.223167 kernel: Key type .fscrypt registered
Jan 29 11:59:28.223175 kernel: Key type fscrypt-provisioning registered
Jan 29 11:59:28.223184 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:59:28.223194 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:59:28.223202 kernel: ima: No architecture policies found
Jan 29 11:59:28.223210 kernel: clk: Disabling unused clocks
Jan 29 11:59:28.223219 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 11:59:28.223227 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:59:28.223235 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 11:59:28.223243 kernel: Run /init as init process
Jan 29 11:59:28.223251 kernel:   with arguments:
Jan 29 11:59:28.223259 kernel:     /init
Jan 29 11:59:28.223270 kernel:   with environment:
Jan 29 11:59:28.223278 kernel:     HOME=/
Jan 29 11:59:28.223286 kernel:     TERM=linux
Jan 29 11:59:28.223294 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:59:28.223304 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:59:28.223315 systemd[1]: Detected virtualization microsoft.
Jan 29 11:59:28.223324 systemd[1]: Detected architecture x86-64.
Jan 29 11:59:28.223333 systemd[1]: Running in initrd.
Jan 29 11:59:28.223344 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:59:28.223352 systemd[1]: Hostname set to .
Jan 29 11:59:28.223361 systemd[1]: Initializing machine ID from random generator.
Jan 29 11:59:28.223369 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:59:28.223378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:59:28.223387 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:59:28.223398 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:59:28.223406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:59:28.223417 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:59:28.223426 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:59:28.223436 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:59:28.223444 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:59:28.223453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:59:28.223462 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:59:28.223470 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:59:28.223481 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:59:28.223490 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:59:28.223499 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:59:28.223507 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:59:28.223516 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:59:28.223524 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:59:28.223533 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:59:28.223542 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:59:28.223561 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:59:28.223570 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:59:28.223578 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:59:28.223587 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:59:28.223596 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:59:28.223604 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:59:28.223613 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:59:28.223621 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:59:28.223630 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:59:28.223641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:59:28.223672 systemd-journald[176]: Collecting audit messages is disabled.
Jan 29 11:59:28.223694 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:59:28.223705 systemd-journald[176]: Journal started
Jan 29 11:59:28.223729 systemd-journald[176]: Runtime Journal (/run/log/journal/52bac1527afb452c8acb0aa10b858077) is 8.0M, max 158.8M, 150.8M free.
Jan 29 11:59:28.235589 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:59:28.236252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:59:28.237546 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:59:28.247999 systemd-modules-load[177]: Inserted module 'overlay'
Jan 29 11:59:28.251997 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:59:28.260868 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:59:28.285530 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:59:28.293918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:59:28.306579 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:59:28.310504 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:59:28.316938 kernel: Bridge firewalling registered
Jan 29 11:59:28.310715 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jan 29 11:59:28.314020 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:59:28.329764 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:59:28.334718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:59:28.342081 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:59:28.361371 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:59:28.362859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:59:28.373894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:59:28.379903 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:59:28.385697 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:59:28.409363 dracut-cmdline[213]: dracut-dracut-053
Jan 29 11:59:28.412871 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:59:28.438330 systemd-resolved[210]: Positive Trust Anchors:
Jan 29 11:59:28.438351 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:59:28.438408 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:59:28.442225 systemd-resolved[210]: Defaulting to hostname 'linux'.
Jan 29 11:59:28.443455 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:59:28.463977 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:59:28.500583 kernel: SCSI subsystem initialized
Jan 29 11:59:28.510583 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:59:28.522586 kernel: iscsi: registered transport (tcp)
Jan 29 11:59:28.544511 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:59:28.544627 kernel: QLogic iSCSI HBA Driver
Jan 29 11:59:28.581597 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:59:28.593749 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:59:28.623918 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:59:28.624031 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:59:28.630583 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:59:28.672946 kernel: raid6: avx512x4 gen() 18411 MB/s
Jan 29 11:59:28.692590 kernel: raid6: avx512x2 gen() 15021 MB/s
Jan 29 11:59:28.711567 kernel: raid6: avx512x1 gen() 18010 MB/s
Jan 29 11:59:28.730567 kernel: raid6: avx2x4   gen() 18177 MB/s
Jan 29 11:59:28.750916 kernel: raid6: avx2x2   gen() 18222 MB/s
Jan 29 11:59:28.773150 kernel: raid6: avx2x1   gen()  9852 MB/s
Jan 29 11:59:28.773264 kernel: raid6: using algorithm avx512x4 gen() 18411 MB/s
Jan 29 11:59:28.794297 kernel: raid6: .... xor() 5971 MB/s, rmw enabled
Jan 29 11:59:28.794396 kernel: raid6: using avx512x2 recovery algorithm
Jan 29 11:59:28.817582 kernel: xor: automatically using best checksumming function   avx
Jan 29 11:59:29.000660 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:59:29.016688 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:59:29.031993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:59:29.052382 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Jan 29 11:59:29.057481 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:59:29.071789 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:59:29.086092 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jan 29 11:59:29.114658 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:59:29.123728 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:59:29.168935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:59:29.185843 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:59:29.224678 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:59:29.231530 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:59:29.236036 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:59:29.245098 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:59:29.256153 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:59:29.269118 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:59:29.290170 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:59:29.306235 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:59:29.306338 kernel: AES CTR mode by8 optimization enabled Jan 29 11:59:29.309202 kernel: hv_vmbus: Vmbus version:5.2 Jan 29 11:59:29.313602 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:59:29.316159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:59:29.323182 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:59:29.329359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:59:29.329451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:29.334737 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:59:29.352586 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 29 11:59:29.352655 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 29 11:59:29.352772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:59:29.619361 kernel: PTP clock support registered Jan 29 11:59:29.619401 kernel: hv_utils: Registering HyperV Utility Driver Jan 29 11:59:29.619431 kernel: hv_vmbus: registering driver hv_utils Jan 29 11:59:29.619448 kernel: hv_utils: Heartbeat IC version 3.0 Jan 29 11:59:29.619465 kernel: hv_utils: Shutdown IC version 3.2 Jan 29 11:59:29.619482 kernel: hv_utils: TimeSync IC version 4.0 Jan 29 11:59:29.619500 kernel: hv_vmbus: registering driver hv_netvsc Jan 29 11:59:29.619516 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:59:29.619671 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 29 11:59:29.619695 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 29 11:59:29.619720 kernel: hv_vmbus: registering driver hv_storvsc Jan 29 11:59:29.619737 kernel: scsi host1: storvsc_host_t Jan 29 11:59:29.619954 kernel: hv_vmbus: registering driver hid_hyperv Jan 29 11:59:29.619971 kernel: scsi host0: storvsc_host_t Jan 29 11:59:29.620144 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 29 11:59:29.620335 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 29 11:59:29.620547 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 29 11:59:29.549471 systemd-resolved[210]: Clock change detected. Flushing caches. Jan 29 11:59:29.628492 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 29 11:59:29.633867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 29 11:59:29.639138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:29.658648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:59:29.687844 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 29 11:59:29.690385 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:59:29.690410 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 29 11:59:29.698694 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:29.708993 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:59:29.733404 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 29 11:59:29.756753 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 29 11:59:29.756914 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 11:59:29.757019 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 29 11:59:29.757120 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 29 11:59:29.757218 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:59:29.757231 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 11:59:29.747209 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 11:59:29.778478 kernel: hv_netvsc 000d3ab9-a49e-000d-3ab9-a49e000d3ab9 eth0: VF slot 1 added Jan 29 11:59:29.792595 kernel: hv_vmbus: registering driver hv_pci Jan 29 11:59:29.797456 kernel: hv_pci 67e09304-190a-43d8-916b-6f0d7b063831: PCI VMBus probing: Using version 0x10004 Jan 29 11:59:29.849991 kernel: hv_pci 67e09304-190a-43d8-916b-6f0d7b063831: PCI host bridge to bus 190a:00 Jan 29 11:59:29.850207 kernel: pci_bus 190a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 29 11:59:29.850579 kernel: pci_bus 190a:00: No busn resource found for root bus, will use [bus 00-ff] Jan 29 11:59:29.850801 kernel: pci 190a:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 29 11:59:29.851008 kernel: pci 190a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 29 11:59:29.851192 kernel: pci 190a:00:02.0: enabling Extended Tags Jan 29 11:59:29.851355 kernel: pci 190a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 190a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 29 11:59:29.852170 kernel: pci_bus 190a:00: busn_res: [bus 00-ff] end is updated to 00 Jan 29 11:59:29.852346 kernel: pci 190a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 29 11:59:30.034791 kernel: mlx5_core 190a:00:02.0: enabling device (0000 -> 0002) Jan 29 11:59:30.312769 kernel: mlx5_core 190a:00:02.0: firmware version: 14.30.5000 Jan 29 11:59:30.313019 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (445) Jan 29 11:59:30.313042 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (443) Jan 29 11:59:30.313063 kernel: hv_netvsc 000d3ab9-a49e-000d-3ab9-a49e000d3ab9 eth0: VF registering: eth1 Jan 29 11:59:30.313247 kernel: mlx5_core 190a:00:02.0 eth1: joined to eth0 Jan 29 11:59:30.313462 kernel: mlx5_core 190a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 29 11:59:30.214461 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 29 11:59:30.254759 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 29 11:59:30.324646 kernel: mlx5_core 190a:00:02.0 enP6410s1: renamed from eth1 Jan 29 11:59:30.289695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 29 11:59:30.300000 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 29 11:59:30.307176 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 29 11:59:30.320976 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:59:30.353459 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:59:30.363446 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:59:31.379798 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:59:31.383064 disk-uuid[605]: The operation has completed successfully. Jan 29 11:59:31.491254 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:59:31.491375 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:59:31.514634 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:59:31.520360 sh[718]: Success Jan 29 11:59:31.550854 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:59:31.753570 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:59:31.771600 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:59:31.788996 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:59:31.817923 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 11:59:31.818015 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:59:31.821787 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:59:31.824638 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:59:31.827151 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:59:32.128644 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:59:32.134822 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:59:32.147686 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:59:32.156942 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:59:32.171544 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:59:32.171601 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:59:32.176367 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:59:32.193763 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:59:32.208525 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:59:32.208944 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:59:32.218125 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:59:32.229699 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:59:32.256080 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:59:32.264791 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 11:59:32.285092 systemd-networkd[902]: lo: Link UP Jan 29 11:59:32.285103 systemd-networkd[902]: lo: Gained carrier Jan 29 11:59:32.287376 systemd-networkd[902]: Enumeration completed Jan 29 11:59:32.287500 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:59:32.289666 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:32.289670 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:59:32.292062 systemd[1]: Reached target network.target - Network. Jan 29 11:59:32.370462 kernel: mlx5_core 190a:00:02.0 enP6410s1: Link up Jan 29 11:59:32.408824 kernel: hv_netvsc 000d3ab9-a49e-000d-3ab9-a49e000d3ab9 eth0: Data path switched to VF: enP6410s1 Jan 29 11:59:32.410958 systemd-networkd[902]: enP6410s1: Link UP Jan 29 11:59:32.411077 systemd-networkd[902]: eth0: Link UP Jan 29 11:59:32.411233 systemd-networkd[902]: eth0: Gained carrier Jan 29 11:59:32.411247 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 11:59:32.430348 systemd-networkd[902]: enP6410s1: Gained carrier Jan 29 11:59:32.470099 systemd-networkd[902]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 11:59:32.920392 ignition[859]: Ignition 2.19.0 Jan 29 11:59:32.920405 ignition[859]: Stage: fetch-offline Jan 29 11:59:32.920487 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:32.920500 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:59:32.920632 ignition[859]: parsed url from cmdline: "" Jan 29 11:59:32.920638 ignition[859]: no config URL provided Jan 29 11:59:32.920644 ignition[859]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:59:32.920655 ignition[859]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:59:32.920663 ignition[859]: failed to fetch config: resource requires networking Jan 29 11:59:32.923687 ignition[859]: Ignition finished successfully Jan 29 11:59:32.946133 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:59:32.961817 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 11:59:32.984753 ignition[910]: Ignition 2.19.0 Jan 29 11:59:32.984765 ignition[910]: Stage: fetch Jan 29 11:59:32.985029 ignition[910]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:32.985042 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:59:32.985665 ignition[910]: parsed url from cmdline: "" Jan 29 11:59:32.986405 ignition[910]: no config URL provided Jan 29 11:59:32.986869 ignition[910]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:59:32.987000 ignition[910]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:59:32.987300 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 29 11:59:33.083340 ignition[910]: GET result: OK Jan 29 11:59:33.083478 ignition[910]: config has been read from IMDS userdata Jan 29 11:59:33.088259 unknown[910]: fetched base config from "system" Jan 29 11:59:33.083514 ignition[910]: parsing config with SHA512: 868c433d67fad66123c4dc4ac69ca682accc0592e20f1dc903b81276b4793e996539dd57dcf8067a28e0319ab9b0389049abf518cfc1b89aedbe16dab35bdba7 Jan 29 11:59:33.088266 unknown[910]: fetched base config from "system" Jan 29 11:59:33.088663 ignition[910]: fetch: fetch complete Jan 29 11:59:33.088271 unknown[910]: fetched user config from "azure" Jan 29 11:59:33.088668 ignition[910]: fetch: fetch passed Jan 29 11:59:33.090253 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:59:33.088714 ignition[910]: Ignition finished successfully Jan 29 11:59:33.119639 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:59:33.143097 ignition[916]: Ignition 2.19.0 Jan 29 11:59:33.143111 ignition[916]: Stage: kargs Jan 29 11:59:33.143366 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:33.147008 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 29 11:59:33.143384 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:59:33.144869 ignition[916]: kargs: kargs passed Jan 29 11:59:33.144930 ignition[916]: Ignition finished successfully Jan 29 11:59:33.166644 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:59:33.184760 ignition[922]: Ignition 2.19.0 Jan 29 11:59:33.184774 ignition[922]: Stage: disks Jan 29 11:59:33.187995 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:59:33.185076 ignition[922]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:33.192578 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:59:33.185092 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:59:33.199193 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:59:33.186128 ignition[922]: disks: disks passed Jan 29 11:59:33.203857 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:59:33.186186 ignition[922]: Ignition finished successfully Jan 29 11:59:33.203907 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:59:33.203931 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:59:33.222755 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:59:33.286921 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 29 11:59:33.300637 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:59:33.313686 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:59:33.404466 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 11:59:33.405189 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:59:33.410406 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Jan 29 11:59:33.444594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:59:33.456587 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:59:33.466644 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 11:59:33.476525 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:59:33.476584 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:59:33.493463 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) Jan 29 11:59:33.501272 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:59:33.501361 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:59:33.504436 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:59:33.505359 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:59:33.512011 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:59:33.520827 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:59:33.521265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:59:33.662640 systemd-networkd[902]: enP6410s1: Gained IPv6LL Jan 29 11:59:33.726675 systemd-networkd[902]: eth0: Gained IPv6LL Jan 29 11:59:34.106573 coreos-metadata[943]: Jan 29 11:59:34.106 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 29 11:59:34.117945 coreos-metadata[943]: Jan 29 11:59:34.117 INFO Fetch successful Jan 29 11:59:34.121549 coreos-metadata[943]: Jan 29 11:59:34.121 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 29 11:59:34.141165 coreos-metadata[943]: Jan 29 11:59:34.141 INFO Fetch successful Jan 29 11:59:34.154283 coreos-metadata[943]: Jan 29 11:59:34.154 INFO wrote hostname ci-4081.3.0-a-b5939ece28 to /sysroot/etc/hostname Jan 29 11:59:34.157038 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:59:34.217009 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:59:34.249494 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:59:34.269414 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:59:34.287413 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:59:35.083053 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:59:35.092582 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:59:35.099577 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:59:35.110636 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:59:35.118143 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:59:35.153487 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 29 11:59:35.158469 ignition[1058]: INFO : Ignition 2.19.0 Jan 29 11:59:35.158469 ignition[1058]: INFO : Stage: mount Jan 29 11:59:35.158469 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:35.158469 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:59:35.158469 ignition[1058]: INFO : mount: mount passed Jan 29 11:59:35.158469 ignition[1058]: INFO : Ignition finished successfully Jan 29 11:59:35.167130 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:59:35.175671 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:59:35.190919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:59:35.207443 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1070) Jan 29 11:59:35.211441 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:59:35.211502 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:59:35.215792 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:59:35.223439 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:59:35.225854 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:59:35.254352 ignition[1087]: INFO : Ignition 2.19.0 Jan 29 11:59:35.254352 ignition[1087]: INFO : Stage: files Jan 29 11:59:35.259449 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:35.259449 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:59:35.259449 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:59:35.279491 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:59:35.279491 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:59:35.365189 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:59:35.371886 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:59:35.371886 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:59:35.365777 unknown[1087]: wrote ssh authorized keys file for user: core Jan 29 11:59:35.409647 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:59:35.414740 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:59:35.414740 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:59:35.414740 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:59:35.462864 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:59:35.587831 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" 
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:59:36.027133 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:59:36.384648 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:59:36.384648 ignition[1087]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 29 11:59:36.398853 ignition[1087]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:59:36.413390 ignition[1087]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:59:36.413390 ignition[1087]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:59:36.413390 ignition[1087]: INFO : files: files passed Jan 29 11:59:36.413390 ignition[1087]: INFO : Ignition finished successfully Jan 29 11:59:36.404575 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:59:36.443648 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:59:36.472631 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:59:36.483572 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:59:36.483669 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:59:36.533095 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:59:36.533095 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:59:36.545390 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:59:36.540870 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:59:36.556407 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:59:36.567106 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:59:36.630601 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:59:36.630861 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:59:36.636786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:59:36.639050 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:59:36.642261 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:59:36.645678 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:59:36.663738 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:59:36.683621 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:59:36.697981 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:59:36.703871 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:59:36.709835 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:59:36.712948 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:59:36.713114 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:59:36.727266 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:59:36.734215 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:59:36.741565 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:59:36.748758 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:59:36.758161 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:59:36.766128 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:59:36.771482 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:59:36.777498 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:59:36.782970 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:59:36.789635 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:59:36.800971 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:59:36.801132 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:59:36.813842 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:59:36.821340 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:59:36.828257 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:59:36.829012 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:59:36.840716 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:59:36.842050 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:59:36.853795 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:59:36.856650 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:59:36.858006 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:59:36.858106 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:59:36.868836 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 11:59:36.875577 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:59:36.903947 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:59:36.907410 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:59:36.912742 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:59:36.928639 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:59:36.931783 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:59:36.932033 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:59:36.939157 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:59:36.939271 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:59:36.954506 ignition[1139]: INFO : Ignition 2.19.0
Jan 29 11:59:36.954506 ignition[1139]: INFO : Stage: umount
Jan 29 11:59:36.958841 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:59:36.958841 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:59:36.958841 ignition[1139]: INFO : umount: umount passed
Jan 29 11:59:36.958841 ignition[1139]: INFO : Ignition finished successfully
Jan 29 11:59:36.972836 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:59:36.972970 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:59:36.988059 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:59:36.990852 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:59:36.997765 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:59:36.998307 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:59:37.007527 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:59:37.007629 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:59:37.015172 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 11:59:37.015258 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 11:59:37.022255 systemd[1]: Stopped target network.target - Network.
Jan 29 11:59:37.026378 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:59:37.026498 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:59:37.031733 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:59:37.033798 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:59:37.039351 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:59:37.042272 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:59:37.053340 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:59:37.056525 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:59:37.056580 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:59:37.064708 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:59:37.064782 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:59:37.070503 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:59:37.072303 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:59:37.079117 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:59:37.079193 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:59:37.084083 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:59:37.088978 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:59:37.098164 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:59:37.111479 systemd-networkd[902]: eth0: DHCPv6 lease lost
Jan 29 11:59:37.114351 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:59:37.114505 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:59:37.122601 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:59:37.124895 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:59:37.136563 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:59:37.138943 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:59:37.139024 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:59:37.147770 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:59:37.151031 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:59:37.151174 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:59:37.174315 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:59:37.174505 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:59:37.178226 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:59:37.178281 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:59:37.185007 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:59:37.185173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:59:37.199195 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:59:37.199346 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:59:37.236878 kernel: hv_netvsc 000d3ab9-a49e-000d-3ab9-a49e000d3ab9 eth0: Data path switched from VF: enP6410s1
Jan 29 11:59:37.216579 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:59:37.217134 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:59:37.222842 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:59:37.223635 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:59:37.232940 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:59:37.233019 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:59:37.241173 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:59:37.241251 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:59:37.263717 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:59:37.263841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:59:37.282757 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:59:37.287557 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:59:37.287729 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:59:37.291464 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:59:37.291546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:59:37.305562 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:59:37.305716 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:59:37.320266 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:59:37.320396 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:59:37.592968 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:59:37.593110 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:59:37.599295 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:59:37.605500 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:59:37.605599 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:59:37.633704 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:59:38.083701 systemd[1]: Switching root.
Jan 29 11:59:38.122504 systemd-journald[176]: Journal stopped
Jan 29 11:59:28.214595 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 11:59:28.214635 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:59:28.214645 kernel: BIOS-provided physical RAM map:
Jan 29 11:59:28.214652 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:59:28.214657 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 29 11:59:28.214663 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 29 11:59:28.214671 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 29 11:59:28.214680 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 29 11:59:28.214686 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 29 11:59:28.214692 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 29 11:59:28.214698 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 29 11:59:28.214704 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 29 11:59:28.214711 kernel: printk: bootconsole [earlyser0] enabled
Jan 29 11:59:28.214717 kernel: NX (Execute Disable) protection: active
Jan 29 11:59:28.214728 kernel: APIC: Static calls initialized
Jan 29 11:59:28.214735 kernel: efi: EFI v2.7 by Microsoft
Jan 29 11:59:28.214743 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Jan 29 11:59:28.214750 kernel: SMBIOS 3.1.0 present.
Jan 29 11:59:28.214757 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 29 11:59:28.214764 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 29 11:59:28.214771 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 29 11:59:28.214779 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 29 11:59:28.214786 kernel: Hyper-V: Nested features: 0x1e0101
Jan 29 11:59:28.214793 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 29 11:59:28.214802 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 29 11:59:28.214809 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 29 11:59:28.214817 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 29 11:59:28.214824 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 29 11:59:28.214832 kernel: tsc: Detected 2593.904 MHz processor
Jan 29 11:59:28.214840 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:59:28.214847 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:59:28.214854 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 29 11:59:28.214862 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 29 11:59:28.214871 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:59:28.214879 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 29 11:59:28.214886 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 29 11:59:28.214893 kernel: Using GB pages for direct mapping
Jan 29 11:59:28.214900 kernel: Secure boot disabled
Jan 29 11:59:28.214907 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:59:28.214915 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 29 11:59:28.214925 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214935 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214943 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 29 11:59:28.214950 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 29 11:59:28.214958 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214966 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214974 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214984 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.214992 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.215000 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.215008 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:59:28.215015 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 29 11:59:28.215023 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 29 11:59:28.215031 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 29 11:59:28.215038 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 29 11:59:28.215048 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 29 11:59:28.215056 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 29 11:59:28.215064 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 29 11:59:28.215071 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 29 11:59:28.215079 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 29 11:59:28.215087 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 29 11:59:28.215094 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 11:59:28.215102 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 11:59:28.215109 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 29 11:59:28.215119 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 29 11:59:28.215127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 29 11:59:28.215135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 29 11:59:28.215142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 29 11:59:28.215150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 29 11:59:28.215158 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 29 11:59:28.215165 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 29 11:59:28.215173 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 29 11:59:28.215180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 29 11:59:28.215190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 29 11:59:28.215198 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 29 11:59:28.215206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 29 11:59:28.215213 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 29 11:59:28.215221 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 29 11:59:28.215229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 29 11:59:28.215237 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 29 11:59:28.215244 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 29 11:59:28.215252 kernel: Zone ranges:
Jan 29 11:59:28.215262 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:59:28.215270 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 11:59:28.215278 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 29 11:59:28.215285 kernel: Movable zone start for each node
Jan 29 11:59:28.215293 kernel: Early memory node ranges
Jan 29 11:59:28.215301 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 29 11:59:28.215309 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 29 11:59:28.215317 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 29 11:59:28.215324 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 29 11:59:28.215334 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 29 11:59:28.215342 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:59:28.215349 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 29 11:59:28.215357 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 29 11:59:28.215364 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 29 11:59:28.215372 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 29 11:59:28.215379 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:59:28.215387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:59:28.215395 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:59:28.215405 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 29 11:59:28.215412 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 11:59:28.215420 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 29 11:59:28.215428 kernel: Booting paravirtualized kernel on Hyper-V
Jan 29 11:59:28.215436 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:59:28.215444 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 11:59:28.215451 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 11:59:28.215459 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 11:59:28.215466 kernel: pcpu-alloc: [0] 0 1
Jan 29 11:59:28.215477 kernel: Hyper-V: PV spinlocks enabled
Jan 29 11:59:28.215485 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:59:28.215494 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:59:28.215503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:59:28.215511 kernel: random: crng init done
Jan 29 11:59:28.215518 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 29 11:59:28.215526 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:59:28.215534 kernel: Fallback order for Node 0: 0
Jan 29 11:59:28.215544 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 29 11:59:28.215568 kernel: Policy zone: Normal
Jan 29 11:59:28.215578 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:59:28.215587 kernel: software IO TLB: area num 2.
Jan 29 11:59:28.215596 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 310124K reserved, 0K cma-reserved)
Jan 29 11:59:28.215604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:59:28.215612 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 11:59:28.215620 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:59:28.215629 kernel: Dynamic Preempt: voluntary
Jan 29 11:59:28.215637 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:59:28.215646 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:59:28.215657 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:59:28.215665 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:59:28.215673 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:59:28.215681 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:59:28.215689 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:59:28.215700 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:59:28.215708 kernel: Using NULL legacy PIC
Jan 29 11:59:28.215716 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 29 11:59:28.215725 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:59:28.215733 kernel: Console: colour dummy device 80x25
Jan 29 11:59:28.215741 kernel: printk: console [tty1] enabled
Jan 29 11:59:28.215749 kernel: printk: console [ttyS0] enabled
Jan 29 11:59:28.215758 kernel: printk: bootconsole [earlyser0] disabled
Jan 29 11:59:28.215766 kernel: ACPI: Core revision 20230628
Jan 29 11:59:28.215774 kernel: Failed to register legacy timer interrupt
Jan 29 11:59:28.215785 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:59:28.215793 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 29 11:59:28.215801 kernel: Hyper-V: Using IPI hypercalls
Jan 29 11:59:28.215809 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 29 11:59:28.215817 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 29 11:59:28.215826 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 29 11:59:28.215834 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 29 11:59:28.215842 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 29 11:59:28.215851 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 29 11:59:28.215862 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Jan 29 11:59:28.215870 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 29 11:59:28.215878 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 29 11:59:28.215887 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:59:28.215895 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:59:28.215903 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:59:28.215910 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:59:28.215919 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 29 11:59:28.215927 kernel: RETBleed: Vulnerable
Jan 29 11:59:28.215937 kernel: Speculative Store Bypass: Vulnerable
Jan 29 11:59:28.215945 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 11:59:28.215953 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 11:59:28.215960 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 29 11:59:28.215968 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:59:28.215977 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:59:28.215985 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:59:28.215993 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 29 11:59:28.216001 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 29 11:59:28.216009 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 29 11:59:28.216017 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:59:28.216027 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 29 11:59:28.216035 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 29 11:59:28.216043 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 29 11:59:28.216051 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 29 11:59:28.216059 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:59:28.216067 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:59:28.216075 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:59:28.216084 kernel: landlock: Up and running.
Jan 29 11:59:28.216092 kernel: SELinux: Initializing.
Jan 29 11:59:28.216100 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 11:59:28.216108 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 11:59:28.216117 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 29 11:59:28.216127 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:59:28.216135 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:59:28.216144 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:59:28.216152 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 29 11:59:28.216160 kernel: signal: max sigframe size: 3632
Jan 29 11:59:28.216168 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:59:28.216177 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:59:28.216185 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 11:59:28.216193 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:59:28.216203 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:59:28.216212 kernel: .... node #0, CPUs: #1
Jan 29 11:59:28.216220 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 29 11:59:28.216229 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 29 11:59:28.216237 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:59:28.216245 kernel: smpboot: Max logical packages: 1
Jan 29 11:59:28.216253 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Jan 29 11:59:28.216261 kernel: devtmpfs: initialized
Jan 29 11:59:28.216272 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:59:28.216280 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 29 11:59:28.216288 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:59:28.216297 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:59:28.216305 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:59:28.216314 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:59:28.216322 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:59:28.216330 kernel: audit: type=2000 audit(1738151966.028:1): state=initialized audit_enabled=0 res=1
Jan 29 11:59:28.216338 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:59:28.216348 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:59:28.216356 kernel: cpuidle: using governor menu
Jan 29 11:59:28.216364 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:59:28.216373 kernel: dca service started, version 1.12.1
Jan 29 11:59:28.216381 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 29 11:59:28.216390 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:59:28.216398 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:59:28.216406 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:59:28.216414 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:59:28.216424 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:59:28.216432 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:59:28.216440 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:59:28.216448 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:59:28.216456 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:59:28.216465 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:59:28.216473 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:59:28.216481 kernel: ACPI: Interpreter enabled
Jan 29 11:59:28.216489 kernel: ACPI: PM: (supports S0 S5)
Jan 29 11:59:28.216500 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:59:28.216509 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:59:28.216517 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 29 11:59:28.216525 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 29 11:59:28.216534 kernel: iommu: Default domain type: Translated
Jan 29 11:59:28.216542 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:59:28.216550 kernel: efivars: Registered efivars operations
Jan 29 11:59:28.222316 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:59:28.222327 kernel: PCI: System does not support PCI
Jan 29 11:59:28.222347 kernel: vgaarb: loaded
Jan 29 11:59:28.222356 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 29 11:59:28.222364 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:59:28.222373 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:59:28.222381 kernel: pnp: PnP ACPI init
Jan 29 11:59:28.222390 kernel: pnp: PnP ACPI: found 3 devices
Jan 29 11:59:28.222398 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:59:28.222407 kernel: NET: Registered PF_INET protocol family
Jan 29 11:59:28.222415 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 11:59:28.222426 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 29 11:59:28.222435 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:59:28.222443 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:59:28.222451 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 29 11:59:28.222459 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 29 11:59:28.222468 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 11:59:28.222476 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 11:59:28.222484 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:59:28.222492 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:59:28.222503 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:59:28.222511 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 11:59:28.222519 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 29 11:59:28.222527 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 11:59:28.222535 kernel: Initialise system trusted keyrings
Jan 29 11:59:28.222544 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 29 11:59:28.222558 kernel: Key type asymmetric registered
Jan 29 11:59:28.222567 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:59:28.222575 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:59:28.222586 kernel: io scheduler mq-deadline registered
Jan 29 11:59:28.222594 kernel: io scheduler kyber registered
Jan 29 11:59:28.222602 kernel: io scheduler bfq registered
Jan 29 11:59:28.222610 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:59:28.222618 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:59:28.222627 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:59:28.222635 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 29 11:59:28.222643 kernel: i8042: PNP: No PS/2 controller found.
Jan 29 11:59:28.222819 kernel: rtc_cmos 00:02: registered as rtc0
Jan 29 11:59:28.222910 kernel: rtc_cmos 00:02: setting system clock to 2025-01-29T11:59:27 UTC (1738151967)
Jan 29 11:59:28.222986 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 29 11:59:28.222997 kernel: intel_pstate: CPU model not supported
Jan 29 11:59:28.223005 kernel: efifb: probing for efifb
Jan 29 11:59:28.223014 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 29 11:59:28.223022 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 29 11:59:28.223030 kernel: efifb: scrolling: redraw
Jan 29 11:59:28.223041 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 11:59:28.223049 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 11:59:28.223057 kernel: fb0: EFI VGA frame buffer device
Jan 29 11:59:28.223065 kernel: pstore: Using crash dump compression: deflate
Jan 29 11:59:28.223074 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 11:59:28.223082 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:59:28.223090 kernel: Segment Routing with IPv6
Jan 29 11:59:28.223099 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:59:28.223107 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:59:28.223116 kernel: Key type dns_resolver registered
Jan 29 11:59:28.223126 kernel: IPI shorthand broadcast: enabled
Jan 29 11:59:28.223134 kernel: 
sched_clock: Marking stable (1081111300, 60917200)->(1435723800, -293695300) Jan 29 11:59:28.223142 kernel: registered taskstats version 1 Jan 29 11:59:28.223151 kernel: Loading compiled-in X.509 certificates Jan 29 11:59:28.223159 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 11:59:28.223167 kernel: Key type .fscrypt registered Jan 29 11:59:28.223175 kernel: Key type fscrypt-provisioning registered Jan 29 11:59:28.223184 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:59:28.223194 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:59:28.223202 kernel: ima: No architecture policies found Jan 29 11:59:28.223210 kernel: clk: Disabling unused clocks Jan 29 11:59:28.223219 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 11:59:28.223227 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:59:28.223235 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 11:59:28.223243 kernel: Run /init as init process Jan 29 11:59:28.223251 kernel: with arguments: Jan 29 11:59:28.223259 kernel: /init Jan 29 11:59:28.223270 kernel: with environment: Jan 29 11:59:28.223278 kernel: HOME=/ Jan 29 11:59:28.223286 kernel: TERM=linux Jan 29 11:59:28.223294 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:59:28.223304 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:59:28.223315 systemd[1]: Detected virtualization microsoft. Jan 29 11:59:28.223324 systemd[1]: Detected architecture x86-64. Jan 29 11:59:28.223333 systemd[1]: Running in initrd. Jan 29 11:59:28.223344 systemd[1]: No hostname configured, using default hostname. 
Jan 29 11:59:28.223352 systemd[1]: Hostname set to . Jan 29 11:59:28.223361 systemd[1]: Initializing machine ID from random generator. Jan 29 11:59:28.223369 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:59:28.223378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:59:28.223387 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:59:28.223398 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:59:28.223406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:59:28.223417 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:59:28.223426 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:59:28.223436 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:59:28.223444 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:59:28.223453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:59:28.223462 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:59:28.223470 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:59:28.223481 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:59:28.223490 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:59:28.223499 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:59:28.223507 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:59:28.223516 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 29 11:59:28.223524 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:59:28.223533 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:59:28.223542 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:59:28.223561 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:59:28.223570 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:59:28.223578 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:59:28.223587 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:59:28.223596 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:59:28.223604 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:59:28.223613 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:59:28.223621 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:59:28.223630 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:59:28.223641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:59:28.223672 systemd-journald[176]: Collecting audit messages is disabled. Jan 29 11:59:28.223694 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:59:28.223705 systemd-journald[176]: Journal started Jan 29 11:59:28.223729 systemd-journald[176]: Runtime Journal (/run/log/journal/52bac1527afb452c8acb0aa10b858077) is 8.0M, max 158.8M, 150.8M free. Jan 29 11:59:28.235589 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:59:28.236252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:59:28.237546 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 29 11:59:28.247999 systemd-modules-load[177]: Inserted module 'overlay' Jan 29 11:59:28.251997 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:59:28.260868 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:59:28.285530 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:59:28.293918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:28.306579 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:59:28.310504 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:59:28.316938 kernel: Bridge firewalling registered Jan 29 11:59:28.310715 systemd-modules-load[177]: Inserted module 'br_netfilter' Jan 29 11:59:28.314020 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:59:28.329764 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:59:28.334718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:59:28.342081 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:59:28.361371 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:59:28.362859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:59:28.373894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:59:28.379903 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:59:28.385697 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 29 11:59:28.409363 dracut-cmdline[213]: dracut-dracut-053 Jan 29 11:59:28.412871 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:59:28.438330 systemd-resolved[210]: Positive Trust Anchors: Jan 29 11:59:28.438351 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:59:28.438408 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:59:28.442225 systemd-resolved[210]: Defaulting to hostname 'linux'. Jan 29 11:59:28.443455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:59:28.463977 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:59:28.500583 kernel: SCSI subsystem initialized Jan 29 11:59:28.510583 kernel: Loading iSCSI transport class v2.0-870. 
Jan 29 11:59:28.522586 kernel: iscsi: registered transport (tcp) Jan 29 11:59:28.544511 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:59:28.544627 kernel: QLogic iSCSI HBA Driver Jan 29 11:59:28.581597 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:59:28.593749 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:59:28.623918 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:59:28.624031 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:59:28.630583 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:59:28.672946 kernel: raid6: avx512x4 gen() 18411 MB/s Jan 29 11:59:28.692590 kernel: raid6: avx512x2 gen() 15021 MB/s Jan 29 11:59:28.711567 kernel: raid6: avx512x1 gen() 18010 MB/s Jan 29 11:59:28.730567 kernel: raid6: avx2x4 gen() 18177 MB/s Jan 29 11:59:28.750916 kernel: raid6: avx2x2 gen() 18222 MB/s Jan 29 11:59:28.773150 kernel: raid6: avx2x1 gen() 9852 MB/s Jan 29 11:59:28.773264 kernel: raid6: using algorithm avx512x4 gen() 18411 MB/s Jan 29 11:59:28.794297 kernel: raid6: .... xor() 5971 MB/s, rmw enabled Jan 29 11:59:28.794396 kernel: raid6: using avx512x2 recovery algorithm Jan 29 11:59:28.817582 kernel: xor: automatically using best checksumming function avx Jan 29 11:59:29.000660 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:59:29.016688 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:59:29.031993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:59:29.052382 systemd-udevd[395]: Using default interface naming scheme 'v255'. Jan 29 11:59:29.057481 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:59:29.071789 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 29 11:59:29.086092 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jan 29 11:59:29.114658 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:59:29.123728 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:59:29.168935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:59:29.185843 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:59:29.224678 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:59:29.231530 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:59:29.236036 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:59:29.245098 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:59:29.256153 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:59:29.269118 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:59:29.290170 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:59:29.306235 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:59:29.306338 kernel: AES CTR mode by8 optimization enabled Jan 29 11:59:29.309202 kernel: hv_vmbus: Vmbus version:5.2 Jan 29 11:59:29.313602 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:59:29.316159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:59:29.323182 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:59:29.329359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:59:29.329451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:29.334737 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:59:29.352586 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 29 11:59:29.352655 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 29 11:59:29.352772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:59:29.619361 kernel: PTP clock support registered Jan 29 11:59:29.619401 kernel: hv_utils: Registering HyperV Utility Driver Jan 29 11:59:29.619431 kernel: hv_vmbus: registering driver hv_utils Jan 29 11:59:29.619448 kernel: hv_utils: Heartbeat IC version 3.0 Jan 29 11:59:29.619465 kernel: hv_utils: Shutdown IC version 3.2 Jan 29 11:59:29.619482 kernel: hv_utils: TimeSync IC version 4.0 Jan 29 11:59:29.619500 kernel: hv_vmbus: registering driver hv_netvsc Jan 29 11:59:29.619516 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:59:29.619671 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 29 11:59:29.619695 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 29 11:59:29.619720 kernel: hv_vmbus: registering driver hv_storvsc Jan 29 11:59:29.619737 kernel: scsi host1: storvsc_host_t Jan 29 11:59:29.619954 kernel: hv_vmbus: registering driver hid_hyperv Jan 29 11:59:29.619971 kernel: scsi host0: storvsc_host_t Jan 29 11:59:29.620144 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 29 11:59:29.620335 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 29 11:59:29.620547 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 29 11:59:29.549471 systemd-resolved[210]: Clock change detected. Flushing caches. Jan 29 11:59:29.628492 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 29 11:59:29.633867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 29 11:59:29.639138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:29.658648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:59:29.687844 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 29 11:59:29.690385 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:59:29.690410 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 29 11:59:29.698694 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:29.708993 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:59:29.733404 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 29 11:59:29.756753 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 29 11:59:29.756914 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 11:59:29.757019 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 29 11:59:29.757120 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 29 11:59:29.757218 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:59:29.757231 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 11:59:29.747209 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 11:59:29.778478 kernel: hv_netvsc 000d3ab9-a49e-000d-3ab9-a49e000d3ab9 eth0: VF slot 1 added Jan 29 11:59:29.792595 kernel: hv_vmbus: registering driver hv_pci Jan 29 11:59:29.797456 kernel: hv_pci 67e09304-190a-43d8-916b-6f0d7b063831: PCI VMBus probing: Using version 0x10004 Jan 29 11:59:29.849991 kernel: hv_pci 67e09304-190a-43d8-916b-6f0d7b063831: PCI host bridge to bus 190a:00 Jan 29 11:59:29.850207 kernel: pci_bus 190a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 29 11:59:29.850579 kernel: pci_bus 190a:00: No busn resource found for root bus, will use [bus 00-ff] Jan 29 11:59:29.850801 kernel: pci 190a:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 29 11:59:29.851008 kernel: pci 190a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 29 11:59:29.851192 kernel: pci 190a:00:02.0: enabling Extended Tags Jan 29 11:59:29.851355 kernel: pci 190a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 190a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 29 11:59:29.852170 kernel: pci_bus 190a:00: busn_res: [bus 00-ff] end is updated to 00 Jan 29 11:59:29.852346 kernel: pci 190a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 29 11:59:30.034791 kernel: mlx5_core 190a:00:02.0: enabling device (0000 -> 0002) Jan 29 11:59:30.312769 kernel: mlx5_core 190a:00:02.0: firmware version: 14.30.5000 Jan 29 11:59:30.313019 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (445) Jan 29 11:59:30.313042 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (443) Jan 29 11:59:30.313063 kernel: hv_netvsc 000d3ab9-a49e-000d-3ab9-a49e000d3ab9 eth0: VF registering: eth1 Jan 29 11:59:30.313247 kernel: mlx5_core 190a:00:02.0 eth1: joined to eth0 Jan 29 11:59:30.313462 kernel: mlx5_core 190a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 11:59:30.214461 systemd[1]: 
Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 29 11:59:30.254759 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 29 11:59:30.324646 kernel: mlx5_core 190a:00:02.0 enP6410s1: renamed from eth1 Jan 29 11:59:30.289695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 29 11:59:30.300000 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 29 11:59:30.307176 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 29 11:59:30.320976 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:59:30.353459 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:59:30.363446 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:59:31.379798 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:59:31.383064 disk-uuid[605]: The operation has completed successfully. Jan 29 11:59:31.491254 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:59:31.491375 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:59:31.514634 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:59:31.520360 sh[718]: Success Jan 29 11:59:31.550854 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:59:31.753570 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:59:31.771600 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:59:31.788996 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:59:31.817923 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 11:59:31.818015 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:59:31.821787 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:59:31.824638 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:59:31.827151 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:59:32.128644 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:59:32.134822 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:59:32.147686 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:59:32.156942 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:59:32.171544 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:59:32.171601 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:59:32.176367 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:59:32.193763 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:59:32.208525 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:59:32.208944 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:59:32.218125 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:59:32.229699 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:59:32.256080 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:59:32.264791 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 11:59:32.285092 systemd-networkd[902]: lo: Link UP Jan 29 11:59:32.285103 systemd-networkd[902]: lo: Gained carrier Jan 29 11:59:32.287376 systemd-networkd[902]: Enumeration completed Jan 29 11:59:32.287500 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:59:32.289666 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:32.289670 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:59:32.292062 systemd[1]: Reached target network.target - Network. Jan 29 11:59:32.370462 kernel: mlx5_core 190a:00:02.0 enP6410s1: Link up Jan 29 11:59:32.408824 kernel: hv_netvsc 000d3ab9-a49e-000d-3ab9-a49e000d3ab9 eth0: Data path switched to VF: enP6410s1 Jan 29 11:59:32.410958 systemd-networkd[902]: enP6410s1: Link UP Jan 29 11:59:32.411077 systemd-networkd[902]: eth0: Link UP Jan 29 11:59:32.411233 systemd-networkd[902]: eth0: Gained carrier Jan 29 11:59:32.411247 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 11:59:32.430348 systemd-networkd[902]: enP6410s1: Gained carrier Jan 29 11:59:32.470099 systemd-networkd[902]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 11:59:32.920392 ignition[859]: Ignition 2.19.0 Jan 29 11:59:32.920405 ignition[859]: Stage: fetch-offline Jan 29 11:59:32.920487 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:32.920500 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:59:32.920632 ignition[859]: parsed url from cmdline: "" Jan 29 11:59:32.920638 ignition[859]: no config URL provided Jan 29 11:59:32.920644 ignition[859]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:59:32.920655 ignition[859]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:59:32.920663 ignition[859]: failed to fetch config: resource requires networking Jan 29 11:59:32.923687 ignition[859]: Ignition finished successfully Jan 29 11:59:32.946133 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:59:32.961817 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 11:59:32.984753 ignition[910]: Ignition 2.19.0 Jan 29 11:59:32.984765 ignition[910]: Stage: fetch Jan 29 11:59:32.985029 ignition[910]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:32.985042 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:59:32.985665 ignition[910]: parsed url from cmdline: "" Jan 29 11:59:32.986405 ignition[910]: no config URL provided Jan 29 11:59:32.986869 ignition[910]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:59:32.987000 ignition[910]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:59:32.987300 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 29 11:59:33.083340 ignition[910]: GET result: OK Jan 29 11:59:33.083478 ignition[910]: config has been read from IMDS userdata Jan 29 11:59:33.088259 unknown[910]: fetched base config from "system" Jan 29 11:59:33.083514 ignition[910]: parsing config with SHA512: 868c433d67fad66123c4dc4ac69ca682accc0592e20f1dc903b81276b4793e996539dd57dcf8067a28e0319ab9b0389049abf518cfc1b89aedbe16dab35bdba7 Jan 29 11:59:33.088266 unknown[910]: fetched base config from "system" Jan 29 11:59:33.088663 ignition[910]: fetch: fetch complete Jan 29 11:59:33.088271 unknown[910]: fetched user config from "azure" Jan 29 11:59:33.088668 ignition[910]: fetch: fetch passed Jan 29 11:59:33.090253 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:59:33.088714 ignition[910]: Ignition finished successfully Jan 29 11:59:33.119639 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:59:33.143097 ignition[916]: Ignition 2.19.0 Jan 29 11:59:33.143111 ignition[916]: Stage: kargs Jan 29 11:59:33.143366 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:33.147008 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 29 11:59:33.143384 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:59:33.144869 ignition[916]: kargs: kargs passed Jan 29 11:59:33.144930 ignition[916]: Ignition finished successfully Jan 29 11:59:33.166644 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:59:33.184760 ignition[922]: Ignition 2.19.0 Jan 29 11:59:33.184774 ignition[922]: Stage: disks Jan 29 11:59:33.187995 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:59:33.185076 ignition[922]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:33.192578 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:59:33.185092 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:59:33.199193 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:59:33.186128 ignition[922]: disks: disks passed Jan 29 11:59:33.203857 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:59:33.186186 ignition[922]: Ignition finished successfully Jan 29 11:59:33.203907 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:59:33.203931 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:59:33.222755 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:59:33.286921 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 29 11:59:33.300637 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:59:33.313686 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:59:33.404466 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 11:59:33.405189 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:59:33.410406 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Jan 29 11:59:33.444594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:59:33.456587 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:59:33.466644 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 11:59:33.476525 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:59:33.476584 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:59:33.493463 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941)
Jan 29 11:59:33.501272 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:59:33.501361 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:59:33.504436 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:59:33.505359 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:59:33.512011 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:59:33.520827 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:59:33.521265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:59:33.662640 systemd-networkd[902]: enP6410s1: Gained IPv6LL
Jan 29 11:59:33.726675 systemd-networkd[902]: eth0: Gained IPv6LL
Jan 29 11:59:34.106573 coreos-metadata[943]: Jan 29 11:59:34.106 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 29 11:59:34.117945 coreos-metadata[943]: Jan 29 11:59:34.117 INFO Fetch successful
Jan 29 11:59:34.121549 coreos-metadata[943]: Jan 29 11:59:34.121 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 29 11:59:34.141165 coreos-metadata[943]: Jan 29 11:59:34.141 INFO Fetch successful
Jan 29 11:59:34.154283 coreos-metadata[943]: Jan 29 11:59:34.154 INFO wrote hostname ci-4081.3.0-a-b5939ece28 to /sysroot/etc/hostname
Jan 29 11:59:34.157038 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:59:34.217009 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:59:34.249494 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:59:34.269414 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:59:34.287413 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:59:35.083053 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:59:35.092582 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:59:35.099577 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:59:35.110636 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:59:35.118143 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:59:35.153487 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:59:35.158469 ignition[1058]: INFO : Ignition 2.19.0
Jan 29 11:59:35.158469 ignition[1058]: INFO : Stage: mount
Jan 29 11:59:35.158469 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:59:35.158469 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:59:35.158469 ignition[1058]: INFO : mount: mount passed
Jan 29 11:59:35.158469 ignition[1058]: INFO : Ignition finished successfully
Jan 29 11:59:35.167130 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:59:35.175671 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:59:35.190919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:59:35.207443 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1070)
Jan 29 11:59:35.211441 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:59:35.211502 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:59:35.215792 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:59:35.223439 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:59:35.225854 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:59:35.254352 ignition[1087]: INFO : Ignition 2.19.0
Jan 29 11:59:35.254352 ignition[1087]: INFO : Stage: files
Jan 29 11:59:35.259449 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:59:35.259449 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:59:35.259449 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:59:35.279491 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:59:35.279491 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:59:35.365189 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:59:35.371886 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:59:35.371886 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:59:35.365777 unknown[1087]: wrote ssh authorized keys file for user: core
Jan 29 11:59:35.409647 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 11:59:35.414740 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 11:59:35.414740 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:59:35.414740 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 11:59:35.462864 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 11:59:35.587831 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:59:35.596384 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 11:59:36.027133 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 11:59:36.384648 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:59:36.384648 ignition[1087]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 29 11:59:36.398853 ignition[1087]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:59:36.413390 ignition[1087]: INFO : files: files passed
Jan 29 11:59:36.413390 ignition[1087]: INFO : Ignition finished successfully
Jan 29 11:59:36.404575 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:59:36.443648 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:59:36.472631 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:59:36.483572 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:59:36.483669 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:59:36.533095 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:59:36.533095 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:59:36.545390 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:59:36.540870 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:59:36.556407 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:59:36.567106 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:59:36.630601 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:59:36.630861 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:59:36.636786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:59:36.639050 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:59:36.642261 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:59:36.645678 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:59:36.663738 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:59:36.683621 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:59:36.697981 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:59:36.703871 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:59:36.709835 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:59:36.712948 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:59:36.713114 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:59:36.727266 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:59:36.734215 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:59:36.741565 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:59:36.748758 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:59:36.758161 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:59:36.766128 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:59:36.771482 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:59:36.777498 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:59:36.782970 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:59:36.789635 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:59:36.800971 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:59:36.801132 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:59:36.813842 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:59:36.821340 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:59:36.828257 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:59:36.829012 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:59:36.840716 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:59:36.842050 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:59:36.853795 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:59:36.856650 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:59:36.858006 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:59:36.858106 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:59:36.868836 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 11:59:36.875577 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:59:36.903947 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:59:36.907410 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:59:36.912742 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:59:36.928639 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:59:36.931783 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:59:36.932033 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:59:36.939157 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:59:36.939271 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:59:36.954506 ignition[1139]: INFO : Ignition 2.19.0
Jan 29 11:59:36.954506 ignition[1139]: INFO : Stage: umount
Jan 29 11:59:36.958841 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:59:36.958841 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:59:36.958841 ignition[1139]: INFO : umount: umount passed
Jan 29 11:59:36.958841 ignition[1139]: INFO : Ignition finished successfully
Jan 29 11:59:36.972836 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:59:36.972970 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:59:36.988059 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:59:36.990852 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:59:36.997765 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:59:36.998307 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:59:37.007527 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:59:37.007629 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:59:37.015172 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 11:59:37.015258 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 11:59:37.022255 systemd[1]: Stopped target network.target - Network.
Jan 29 11:59:37.026378 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:59:37.026498 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:59:37.031733 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:59:37.033798 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:59:37.039351 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:59:37.042272 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:59:37.053340 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:59:37.056525 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:59:37.056580 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:59:37.064708 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:59:37.064782 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:59:37.070503 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:59:37.072303 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:59:37.079117 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:59:37.079193 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:59:37.084083 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:59:37.088978 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:59:37.098164 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:59:37.111479 systemd-networkd[902]: eth0: DHCPv6 lease lost
Jan 29 11:59:37.114351 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:59:37.114505 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:59:37.122601 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:59:37.124895 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:59:37.136563 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:59:37.138943 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:59:37.139024 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:59:37.147770 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:59:37.151031 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:59:37.151174 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:59:37.174315 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:59:37.174505 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:59:37.178226 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:59:37.178281 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:59:37.185007 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:59:37.185173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:59:37.199195 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:59:37.199346 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:59:37.236878 kernel: hv_netvsc 000d3ab9-a49e-000d-3ab9-a49e000d3ab9 eth0: Data path switched from VF: enP6410s1
Jan 29 11:59:37.216579 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:59:37.217134 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:59:37.222842 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:59:37.223635 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:59:37.232940 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:59:37.233019 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:59:37.241173 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:59:37.241251 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:59:37.263717 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:59:37.263841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:59:37.282757 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:59:37.287557 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:59:37.287729 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:59:37.291464 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:59:37.291546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:59:37.305562 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:59:37.305716 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:59:37.320266 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:59:37.320396 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:59:37.592968 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:59:37.593110 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:59:37.599295 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:59:37.605500 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:59:37.605599 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:59:37.633704 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:59:38.083701 systemd[1]: Switching root.
Jan 29 11:59:38.122504 systemd-journald[176]: Journal stopped
Jan 29 11:59:43.360179 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:59:43.360240 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:59:43.360260 kernel: SELinux: policy capability open_perms=1
Jan 29 11:59:43.360274 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:59:43.360288 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:59:43.360302 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:59:43.360320 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:59:43.360338 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:59:43.360353 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:59:43.360368 kernel: audit: type=1403 audit(1738151979.709:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:59:43.360384 systemd[1]: Successfully loaded SELinux policy in 107.845ms.
Jan 29 11:59:43.360401 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.013ms.
Jan 29 11:59:43.360432 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:59:43.360449 systemd[1]: Detected virtualization microsoft.
Jan 29 11:59:43.360471 systemd[1]: Detected architecture x86-64.
Jan 29 11:59:43.360487 systemd[1]: Detected first boot.
Jan 29 11:59:43.360505 systemd[1]: Hostname set to .
Jan 29 11:59:43.360526 systemd[1]: Initializing machine ID from random generator.
Jan 29 11:59:43.360543 zram_generator::config[1199]: No configuration found.
Jan 29 11:59:43.360563 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:59:43.360580 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:59:43.360596 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 11:59:43.360614 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:59:43.360631 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:59:43.360648 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:59:43.360666 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:59:43.360686 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:59:43.360702 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:59:43.360718 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:59:43.360734 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:59:43.360750 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:59:43.360766 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:59:43.360782 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:59:43.360800 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:59:43.360816 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:59:43.360832 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:59:43.360848 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:59:43.360863 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:59:43.360880 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:59:43.360897 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:59:43.360917 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:59:43.360933 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:59:43.360952 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:59:43.360968 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:59:43.360984 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:59:43.361001 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:59:43.361017 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:59:43.361033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:59:43.361049 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:59:43.361068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:59:43.361084 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:59:43.361102 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:59:43.361118 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:59:43.361134 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:59:43.361154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:59:43.361170 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:59:43.361186 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:59:43.361203 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:59:43.361220 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:59:43.361239 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:59:43.361255 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:59:43.361271 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:59:43.361290 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:59:43.361306 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:59:43.361323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:59:43.361339 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:59:43.361355 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:59:43.361372 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:59:43.361389 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 29 11:59:43.361406 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 29 11:59:43.361441 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:59:43.361458 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:59:43.361474 kernel: loop: module loaded
Jan 29 11:59:43.361490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:59:43.361506 kernel: fuse: init (API version 7.39)
Jan 29 11:59:43.361522 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:59:43.361538 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:59:43.361555 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:59:43.361602 systemd-journald[1312]: Collecting audit messages is disabled.
Jan 29 11:59:43.361643 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:59:43.361662 systemd-journald[1312]: Journal started
Jan 29 11:59:43.361697 systemd-journald[1312]: Runtime Journal (/run/log/journal/fc9e311bb5c24792bc63ac50b3e772e5) is 8.0M, max 158.8M, 150.8M free.
Jan 29 11:59:43.369444 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:59:43.370356 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:59:43.373587 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:59:43.377635 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:59:43.380585 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:59:43.383534 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:59:43.386394 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:59:43.395333 kernel: ACPI: bus type drm_connector registered
Jan 29 11:59:43.389864 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:59:43.395595 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:59:43.395911 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:59:43.399055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:59:43.399278 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:59:43.406020 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:59:43.406246 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:59:43.414060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:59:43.415243 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:59:43.419231 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:59:43.419618 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:59:43.423289 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:59:43.423689 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:59:43.429077 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:59:43.432958 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:59:43.439068 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:59:43.463911 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:59:43.473614 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:59:43.488563 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:59:43.491749 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:59:43.515689 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:59:43.521685 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:59:43.525931 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:59:43.527960 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:59:43.530785 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:59:43.537690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:59:43.543602 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:59:43.549204 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:59:43.559928 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:59:43.565455 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:59:43.576981 systemd-journald[1312]: Time spent on flushing to /var/log/journal/fc9e311bb5c24792bc63ac50b3e772e5 is 19.450ms for 948 entries.
Jan 29 11:59:43.576981 systemd-journald[1312]: System Journal (/var/log/journal/fc9e311bb5c24792bc63ac50b3e772e5) is 8.0M, max 2.6G, 2.6G free.
Jan 29 11:59:43.626653 systemd-journald[1312]: Received client request to flush runtime journal.
Jan 29 11:59:43.588271 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:59:43.598113 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:59:43.602382 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:59:43.609620 udevadm[1365]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 11:59:43.632183 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:59:43.654388 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:59:43.771636 systemd-tmpfiles[1359]: ACLs are not supported, ignoring.
Jan 29 11:59:43.772138 systemd-tmpfiles[1359]: ACLs are not supported, ignoring.
Jan 29 11:59:43.779629 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:59:43.789747 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:59:44.096814 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:59:44.110634 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:59:44.127456 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Jan 29 11:59:44.127481 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Jan 29 11:59:44.132906 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:59:44.924946 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:59:44.934791 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:59:44.961151 systemd-udevd[1386]: Using default interface naming scheme 'v255'.
Jan 29 11:59:45.257920 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:59:45.272444 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:59:45.336765 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:59:45.360934 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 29 11:59:45.467309 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:59:45.469437 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:59:45.496459 kernel: hv_vmbus: registering driver hv_balloon
Jan 29 11:59:45.524491 kernel: hv_vmbus: registering driver hyperv_fb
Jan 29 11:59:45.529670 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 29 11:59:45.529776 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 29 11:59:45.537938 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 29 11:59:45.542082 kernel: Console: switching to colour dummy device 80x25
Jan 29 11:59:45.553468 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 11:59:45.750763 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:59:45.766733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:59:45.767080 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:59:45.783623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:59:45.811852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:59:45.812180 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:59:45.828368 systemd-networkd[1392]: lo: Link UP
Jan 29 11:59:45.828380 systemd-networkd[1392]: lo: Gained carrier
Jan 29 11:59:45.830958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:59:45.846626 systemd-networkd[1392]: Enumeration completed
Jan 29 11:59:45.848047 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:59:45.851509 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:59:45.852963 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:59:45.856664 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 29 11:59:45.863651 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:59:45.882445 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1404)
Jan 29 11:59:45.923447 kernel: mlx5_core 190a:00:02.0 enP6410s1: Link up
Jan 29 11:59:45.948325 systemd-networkd[1392]: enP6410s1: Link UP
Jan 29 11:59:45.948559 kernel: hv_netvsc 000d3ab9-a49e-000d-3ab9-a49e000d3ab9 eth0: Data path switched to VF: enP6410s1
Jan 29 11:59:45.948530 systemd-networkd[1392]: eth0: Link UP
Jan 29 11:59:45.948535 systemd-networkd[1392]: eth0: Gained carrier
Jan 29 11:59:45.948566 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:59:45.956914 systemd-networkd[1392]: enP6410s1: Gained carrier
Jan 29 11:59:45.982604 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 11:59:45.998767 systemd-networkd[1392]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 29 11:59:46.040338 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:59:46.046926 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:59:46.119459 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:59:46.154082 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:59:46.154839 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:59:46.172296 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:59:46.177680 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:59:46.212050 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:59:46.219379 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:59:46.225519 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:59:46.225565 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:59:46.227564 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:59:46.231388 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:59:46.246850 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:59:46.253607 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:59:46.259833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:59:46.263885 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:59:46.265443 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:59:46.278229 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:59:46.284844 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:59:46.321798 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:59:46.369964 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:59:46.371135 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:59:46.401450 kernel: loop0: detected capacity change from 0 to 140768
Jan 29 11:59:46.698628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:59:46.767444 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:59:46.797447 kernel: loop1: detected capacity change from 0 to 210664
Jan 29 11:59:46.872450 kernel: loop2: detected capacity change from 0 to 142488
Jan 29 11:59:47.102676 systemd-networkd[1392]: enP6410s1: Gained IPv6LL
Jan 29 11:59:47.166660 systemd-networkd[1392]: eth0: Gained IPv6LL
Jan 29 11:59:47.172151 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 11:59:47.219453 kernel: loop3: detected capacity change from 0 to 31056
Jan 29 11:59:47.509443 kernel: loop4: detected capacity change from 0 to 140768
Jan 29 11:59:47.521449 kernel: loop5: detected capacity change from 0 to 210664
Jan 29 11:59:47.527447 kernel: loop6: detected capacity change from 0 to 142488
Jan 29 11:59:47.538443 kernel: loop7: detected capacity change from 0 to 31056
Jan 29 11:59:47.541989 (sd-merge)[1509]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 29 11:59:47.542617 (sd-merge)[1509]: Merged extensions into '/usr'.
Jan 29 11:59:47.546398 systemd[1]: Reloading requested from client PID 1488 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:59:47.546474 systemd[1]: Reloading...
Jan 29 11:59:47.605461 zram_generator::config[1535]: No configuration found.
Jan 29 11:59:47.777140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:59:47.852510 systemd[1]: Reloading finished in 305 ms.
Jan 29 11:59:47.868873 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:59:47.887642 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:59:47.896002 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:59:47.908844 systemd[1]: Reloading requested from client PID 1600 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:59:47.908873 systemd[1]: Reloading...
Jan 29 11:59:47.931032 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:59:47.931618 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:59:47.933542 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:59:47.934157 systemd-tmpfiles[1601]: ACLs are not supported, ignoring.
Jan 29 11:59:47.934339 systemd-tmpfiles[1601]: ACLs are not supported, ignoring.
Jan 29 11:59:47.951254 systemd-tmpfiles[1601]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:59:47.951490 systemd-tmpfiles[1601]: Skipping /boot
Jan 29 11:59:47.981151 systemd-tmpfiles[1601]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:59:47.981172 systemd-tmpfiles[1601]: Skipping /boot
Jan 29 11:59:47.988291 zram_generator::config[1627]: No configuration found.
Jan 29 11:59:48.172578 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:59:48.251995 systemd[1]: Reloading finished in 342 ms.
Jan 29 11:59:48.278180 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:59:48.292712 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 11:59:48.298131 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:59:48.304629 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:59:48.317625 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:59:48.325981 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:59:48.343179 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:59:48.343504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:59:48.345768 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:59:48.352490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:59:48.359301 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:59:48.362603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:59:48.362830 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:59:48.376785 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:59:48.387920 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:59:48.388125 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:59:48.397566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:59:48.397783 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:59:48.401348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:59:48.401692 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:59:48.430252 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:59:48.430719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:59:48.440691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:59:48.456326 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:59:48.465697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:59:48.483074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:59:48.485306 systemd-resolved[1702]: Positive Trust Anchors:
Jan 29 11:59:48.485782 systemd-resolved[1702]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:59:48.485832 systemd-resolved[1702]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:59:48.486215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:59:48.486311 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:59:48.495584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:59:48.500285 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:59:48.504501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:59:48.504775 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:59:48.509874 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:59:48.510181 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:59:48.513521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:59:48.513802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:59:48.518402 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:59:48.518667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:59:48.533535 systemd-resolved[1702]: Using system hostname 'ci-4081.3.0-a-b5939ece28'.
Jan 29 11:59:48.534513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:59:48.534666 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:59:48.536838 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:59:48.539973 systemd[1]: Reached target network.target - Network.
Jan 29 11:59:48.542218 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 11:59:48.545474 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:59:48.576778 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:59:48.585488 augenrules[1745]: No rules
Jan 29 11:59:48.587507 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 11:59:49.332881 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:59:49.337754 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:59:52.091506 ldconfig[1485]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:59:52.117847 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:59:52.132685 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:59:52.170320 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:59:52.176720 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:59:52.182972 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:59:52.187861 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:59:52.192050 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:59:52.195797 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:59:52.198849 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:59:52.202007 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:59:52.202072 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:59:52.204453 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:59:52.207886 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:59:52.213052 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:59:52.230306 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:59:52.234113 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:59:52.236852 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:59:52.239232 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:59:52.243825 systemd[1]: System is tainted: cgroupsv1
Jan 29 11:59:52.243928 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:59:52.243974 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:59:52.263587 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 29 11:59:52.277548 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:59:52.285750 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 11:59:52.304646 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:59:52.310544 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:59:52.317118 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:59:52.320809 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:59:52.320885 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 29 11:59:52.330636 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 29 11:59:52.339005 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 29 11:59:52.348835 (chronyd)[1763]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 29 11:59:52.350863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:59:52.353971 jq[1768]: false
Jan 29 11:59:52.359673 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:59:52.372738 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 11:59:52.378757 chronyd[1780]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 29 11:59:52.381884 KVP[1772]: KVP starting; pid is:1772
Jan 29 11:59:52.383361 chronyd[1780]: Timezone right/UTC failed leap second check, ignoring
Jan 29 11:59:52.383626 chronyd[1780]: Loaded seccomp filter (level 2)
Jan 29 11:59:52.389164 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:59:52.399662 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:59:52.408638 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:59:52.413303 kernel: hv_utils: KVP IC version 4.0
Jan 29 11:59:52.412591 KVP[1772]: KVP LIC Version: 3.1
Jan 29 11:59:52.432774 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found loop4
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found loop5
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found loop6
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found loop7
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found sda
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found sda1
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found sda2
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found sda3
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found usr
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found sda4
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found sda6
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found sda7
Jan 29 11:59:52.444460 extend-filesystems[1771]: Found sda9
Jan 29 11:59:52.444460 extend-filesystems[1771]: Checking size of /dev/sda9
Jan 29 11:59:52.446841 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:59:52.457969 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:59:52.488902 extend-filesystems[1771]: Old size kept for /dev/sda9
Jan 29 11:59:52.488902 extend-filesystems[1771]: Found sr0
Jan 29 11:59:52.496933 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:59:52.512562 systemd[1]: Started chronyd.service - NTP client/server.
Jan 29 11:59:52.519121 jq[1808]: true
Jan 29 11:59:52.523971 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:59:52.524325 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:59:52.526544 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:59:52.526880 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:59:52.543826 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:59:52.544163 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:59:52.553117 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 11:59:52.565996 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:59:52.566343 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:59:52.611414 dbus-daemon[1767]: [system] SELinux support is enabled
Jan 29 11:59:52.624796 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:59:52.628912 (ntainerd)[1818]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:59:52.637961 jq[1817]: true
Jan 29 11:59:52.664951 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:59:52.664999 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:59:52.671450 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:59:52.671493 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:59:52.672186 update_engine[1798]: I20250129 11:59:52.671742 1798 main.cc:92] Flatcar Update Engine starting
Jan 29 11:59:52.693477 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:59:52.695227 tar[1816]: linux-amd64/helm
Jan 29 11:59:52.699170 update_engine[1798]: I20250129 11:59:52.697672 1798 update_check_scheduler.cc:74] Next update check in 10m56s
Jan 29 11:59:52.698161 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:59:52.706598 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 11:59:52.710951 systemd-logind[1789]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 11:59:52.715886 systemd-logind[1789]: New seat seat0.
Jan 29 11:59:52.726038 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:59:52.834292 coreos-metadata[1766]: Jan 29 11:59:52.834 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 29 11:59:52.841501 bash[1857]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:59:52.843185 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:59:52.850778 coreos-metadata[1766]: Jan 29 11:59:52.844 INFO Fetch successful
Jan 29 11:59:52.850778 coreos-metadata[1766]: Jan 29 11:59:52.844 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 29 11:59:52.852377 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 11:59:52.857524 coreos-metadata[1766]: Jan 29 11:59:52.855 INFO Fetch successful
Jan 29 11:59:52.860534 coreos-metadata[1766]: Jan 29 11:59:52.859 INFO Fetching http://168.63.129.16/machine/c7349dd3-3ed4-40b7-8ac0-7981a1f15eac/7bdbe037%2Db370%2D4304%2Da866%2D4494bd73f467.%5Fci%2D4081.3.0%2Da%2Db5939ece28?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 29 11:59:52.862635 coreos-metadata[1766]: Jan 29 11:59:52.862 INFO Fetch successful
Jan 29 11:59:52.867280 coreos-metadata[1766]: Jan 29 11:59:52.864 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 29 11:59:52.881007 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1859)
Jan 29 11:59:52.881447 coreos-metadata[1766]: Jan 29 11:59:52.881 INFO Fetch successful
Jan 29 11:59:53.018081 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 11:59:53.021847 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:59:53.259830 locksmithd[1841]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:59:53.290694 sshd_keygen[1804]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:59:53.346826 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:59:53.368778 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:59:53.385689 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 29 11:59:53.421207 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:59:53.422487 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:59:53.448202 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 29 11:59:53.459623 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:59:53.491776 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:59:53.503021 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 11:59:53.518322 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 11:59:53.522379 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 11:59:53.697598 tar[1816]: linux-amd64/LICENSE
Jan 29 11:59:53.698948 tar[1816]: linux-amd64/README.md
Jan 29 11:59:53.722330 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 11:59:53.847078 containerd[1818]: time="2025-01-29T11:59:53.846603000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 29 11:59:53.882662 containerd[1818]: time="2025-01-29T11:59:53.882608700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:59:53.884734 containerd[1818]: time="2025-01-29T11:59:53.884672100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:59:53.884734 containerd[1818]: time="2025-01-29T11:59:53.884714800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 11:59:53.884734 containerd[1818]: time="2025-01-29T11:59:53.884739100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 11:59:53.884958 containerd[1818]: time="2025-01-29T11:59:53.884924800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 11:59:53.884958 containerd[1818]: time="2025-01-29T11:59:53.884950900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.885030900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.885051100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.885345400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.885368000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.885387400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.885402200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.885511100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.885749300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.885958800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.885978700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:59:53.886139 containerd[1818]: time="2025-01-29T11:59:53.886077900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..."
type=io.containerd.metadata.v1 Jan 29 11:59:53.886535 containerd[1818]: time="2025-01-29T11:59:53.886160300Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:59:53.916883 containerd[1818]: time="2025-01-29T11:59:53.916822600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:59:53.917070 containerd[1818]: time="2025-01-29T11:59:53.916949900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:59:53.917070 containerd[1818]: time="2025-01-29T11:59:53.917029900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:59:53.917070 containerd[1818]: time="2025-01-29T11:59:53.917052500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:59:53.917178 containerd[1818]: time="2025-01-29T11:59:53.917072100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:59:53.917326 containerd[1818]: time="2025-01-29T11:59:53.917289500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.917814700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918014200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918042100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918067200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918088100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918107500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918126300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918147200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918167400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918185300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918203300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:59:53.918240 containerd[1818]: time="2025-01-29T11:59:53.918221400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918255300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918276200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918292700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918327800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918350400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918370800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918388900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918407000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918444500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918467000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918484400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918502700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918523300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918549800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:59:53.918706 containerd[1818]: time="2025-01-29T11:59:53.918582500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.919192 containerd[1818]: time="2025-01-29T11:59:53.918601300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.919192 containerd[1818]: time="2025-01-29T11:59:53.918617100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:59:53.919192 containerd[1818]: time="2025-01-29T11:59:53.918672400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:59:53.919192 containerd[1818]: time="2025-01-29T11:59:53.918697800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:59:53.919192 containerd[1818]: time="2025-01-29T11:59:53.918712700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:59:53.919192 containerd[1818]: time="2025-01-29T11:59:53.918731700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:59:53.919192 containerd[1818]: time="2025-01-29T11:59:53.918745900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.919192 containerd[1818]: time="2025-01-29T11:59:53.918769500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 29 11:59:53.919192 containerd[1818]: time="2025-01-29T11:59:53.918798400Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:59:53.919192 containerd[1818]: time="2025-01-29T11:59:53.918825000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:59:53.920376 containerd[1818]: time="2025-01-29T11:59:53.919225500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:59:53.920376 containerd[1818]: time="2025-01-29T11:59:53.919306900Z" level=info msg="Connect containerd service" Jan 29 11:59:53.920376 containerd[1818]: time="2025-01-29T11:59:53.919362700Z" level=info msg="using legacy CRI server" Jan 29 11:59:53.920376 containerd[1818]: time="2025-01-29T11:59:53.919372200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:59:53.920376 containerd[1818]: time="2025-01-29T11:59:53.919557900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:59:53.920376 containerd[1818]: time="2025-01-29T11:59:53.920327600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:59:53.920786 containerd[1818]: time="2025-01-29T11:59:53.920736000Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 29 11:59:53.920824 containerd[1818]: time="2025-01-29T11:59:53.920794900Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:59:53.921594 containerd[1818]: time="2025-01-29T11:59:53.920901100Z" level=info msg="Start subscribing containerd event" Jan 29 11:59:53.921594 containerd[1818]: time="2025-01-29T11:59:53.920963500Z" level=info msg="Start recovering state" Jan 29 11:59:53.921594 containerd[1818]: time="2025-01-29T11:59:53.921034300Z" level=info msg="Start event monitor" Jan 29 11:59:53.921594 containerd[1818]: time="2025-01-29T11:59:53.921047500Z" level=info msg="Start snapshots syncer" Jan 29 11:59:53.921594 containerd[1818]: time="2025-01-29T11:59:53.921059300Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:59:53.921594 containerd[1818]: time="2025-01-29T11:59:53.921068800Z" level=info msg="Start streaming server" Jan 29 11:59:53.921306 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:59:53.927139 containerd[1818]: time="2025-01-29T11:59:53.927097100Z" level=info msg="containerd successfully booted in 0.082140s" Jan 29 11:59:54.312684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:54.316868 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:59:54.321992 systemd[1]: Startup finished in 880ms (firmware) + 25.013s (loader) + 12.912s (kernel) + 14.719s (userspace) = 53.526s. Jan 29 11:59:54.325391 (kubelet)[1953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:54.661308 login[1930]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 11:59:54.664681 login[1931]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 11:59:54.684880 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 29 11:59:54.697889 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:59:54.708518 systemd-logind[1789]: New session 1 of user core. Jan 29 11:59:54.717429 systemd-logind[1789]: New session 2 of user core. Jan 29 11:59:54.737728 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:59:54.757295 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:59:54.763378 (systemd)[1966]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:59:55.137589 systemd[1966]: Queued start job for default target default.target. Jan 29 11:59:55.138177 systemd[1966]: Created slice app.slice - User Application Slice. Jan 29 11:59:55.138229 systemd[1966]: Reached target paths.target - Paths. Jan 29 11:59:55.138253 systemd[1966]: Reached target timers.target - Timers. Jan 29 11:59:55.145707 systemd[1966]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:59:55.160455 systemd[1966]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:59:55.161148 systemd[1966]: Reached target sockets.target - Sockets. Jan 29 11:59:55.161179 systemd[1966]: Reached target basic.target - Basic System. Jan 29 11:59:55.161244 systemd[1966]: Reached target default.target - Main User Target. Jan 29 11:59:55.161281 systemd[1966]: Startup finished in 387ms. Jan 29 11:59:55.163081 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:59:55.174572 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:59:55.175680 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 11:59:55.297404 kubelet[1953]: E0129 11:59:55.297338 1953 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:55.300507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:55.300883 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:58.114364 waagent[1923]: 2025-01-29T11:59:58.114238Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.114744Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.115621Z INFO Daemon Daemon Python: 3.11.9 Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.116583Z INFO Daemon Daemon Run daemon Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.117316Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.118015Z INFO Daemon Daemon Using waagent for provisioning Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.119199Z INFO Daemon Daemon Activate resource disk Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.119883Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.124365Z INFO Daemon Daemon Found device: None Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.125318Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.126135Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] 
unable to detect disk topology, duration=0 Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.128872Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 11:59:58.137586 waagent[1923]: 2025-01-29T11:59:58.129089Z INFO Daemon Daemon Running default provisioning handler Jan 29 11:59:58.152790 waagent[1923]: 2025-01-29T11:59:58.152677Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 29 11:59:58.160476 waagent[1923]: 2025-01-29T11:59:58.160308Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 29 11:59:58.169488 waagent[1923]: 2025-01-29T11:59:58.161208Z INFO Daemon Daemon cloud-init is enabled: False Jan 29 11:59:58.169488 waagent[1923]: 2025-01-29T11:59:58.164253Z INFO Daemon Daemon Copying ovf-env.xml Jan 29 11:59:58.255471 waagent[1923]: 2025-01-29T11:59:58.252649Z INFO Daemon Daemon Successfully mounted dvd Jan 29 11:59:58.270223 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 29 11:59:58.272948 waagent[1923]: 2025-01-29T11:59:58.272864Z INFO Daemon Daemon Detect protocol endpoint Jan 29 11:59:58.286360 waagent[1923]: 2025-01-29T11:59:58.273318Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 11:59:58.286360 waagent[1923]: 2025-01-29T11:59:58.274239Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 29 11:59:58.286360 waagent[1923]: 2025-01-29T11:59:58.275002Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 29 11:59:58.286360 waagent[1923]: 2025-01-29T11:59:58.275948Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 29 11:59:58.286360 waagent[1923]: 2025-01-29T11:59:58.276594Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 29 11:59:58.313401 waagent[1923]: 2025-01-29T11:59:58.313324Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 29 11:59:58.320641 waagent[1923]: 2025-01-29T11:59:58.313913Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 29 11:59:58.320641 waagent[1923]: 2025-01-29T11:59:58.315049Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 29 11:59:58.378577 waagent[1923]: 2025-01-29T11:59:58.378371Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 29 11:59:58.387638 waagent[1923]: 2025-01-29T11:59:58.378870Z INFO Daemon Daemon Forcing an update of the goal state. Jan 29 11:59:58.387638 waagent[1923]: 2025-01-29T11:59:58.383290Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 11:59:58.398786 waagent[1923]: 2025-01-29T11:59:58.398718Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 29 11:59:58.415229 waagent[1923]: 2025-01-29T11:59:58.399548Z INFO Daemon Jan 29 11:59:58.415229 waagent[1923]: 2025-01-29T11:59:58.400299Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: c5b83811-06fc-4b74-91af-034e35372b0b eTag: 12082570890623262465 source: Fabric] Jan 29 11:59:58.415229 waagent[1923]: 2025-01-29T11:59:58.401356Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 29 11:59:58.415229 waagent[1923]: 2025-01-29T11:59:58.402736Z INFO Daemon Jan 29 11:59:58.415229 waagent[1923]: 2025-01-29T11:59:58.403120Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 29 11:59:58.415229 waagent[1923]: 2025-01-29T11:59:58.408717Z INFO Daemon Daemon Downloading artifacts profile blob Jan 29 11:59:58.482685 waagent[1923]: 2025-01-29T11:59:58.482578Z INFO Daemon Downloaded certificate {'thumbprint': 'C4B6716C6002EA410F7FD8FF6232F39BE739CFE8', 'hasPrivateKey': False} Jan 29 11:59:58.497568 waagent[1923]: 2025-01-29T11:59:58.487012Z INFO Daemon Downloaded certificate {'thumbprint': '22F49445EFAFB78354FB6D6178B8A59EBE2D81B1', 'hasPrivateKey': True} Jan 29 11:59:58.497568 waagent[1923]: 2025-01-29T11:59:58.492387Z INFO Daemon Fetch goal state completed Jan 29 11:59:58.500356 waagent[1923]: 2025-01-29T11:59:58.500286Z INFO Daemon Daemon Starting provisioning Jan 29 11:59:58.507449 waagent[1923]: 2025-01-29T11:59:58.500634Z INFO Daemon Daemon Handle ovf-env.xml. Jan 29 11:59:58.507449 waagent[1923]: 2025-01-29T11:59:58.502334Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-b5939ece28] Jan 29 11:59:58.526694 waagent[1923]: 2025-01-29T11:59:58.526602Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-b5939ece28] Jan 29 11:59:58.533860 waagent[1923]: 2025-01-29T11:59:58.527182Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 29 11:59:58.533860 waagent[1923]: 2025-01-29T11:59:58.527990Z INFO Daemon Daemon Primary interface is [eth0] Jan 29 11:59:58.551861 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:58.551872 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 11:59:58.551935 systemd-networkd[1392]: eth0: DHCP lease lost Jan 29 11:59:58.553502 waagent[1923]: 2025-01-29T11:59:58.553353Z INFO Daemon Daemon Create user account if not exists Jan 29 11:59:58.568344 waagent[1923]: 2025-01-29T11:59:58.553937Z INFO Daemon Daemon User core already exists, skip useradd Jan 29 11:59:58.568344 waagent[1923]: 2025-01-29T11:59:58.555162Z INFO Daemon Daemon Configure sudoer Jan 29 11:59:58.568344 waagent[1923]: 2025-01-29T11:59:58.556260Z INFO Daemon Daemon Configure sshd Jan 29 11:59:58.568344 waagent[1923]: 2025-01-29T11:59:58.557393Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 29 11:59:58.568344 waagent[1923]: 2025-01-29T11:59:58.557959Z INFO Daemon Daemon Deploy ssh public key. Jan 29 11:59:58.570539 systemd-networkd[1392]: eth0: DHCPv6 lease lost Jan 29 11:59:58.612540 systemd-networkd[1392]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 11:59:59.666316 waagent[1923]: 2025-01-29T11:59:59.666224Z INFO Daemon Daemon Provisioning complete Jan 29 11:59:59.681446 waagent[1923]: 2025-01-29T11:59:59.681358Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 29 11:59:59.688382 waagent[1923]: 2025-01-29T11:59:59.681887Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 29 11:59:59.688382 waagent[1923]: 2025-01-29T11:59:59.682925Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 29 11:59:59.827462 waagent[2028]: 2025-01-29T11:59:59.827331Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 29 11:59:59.828044 waagent[2028]: 2025-01-29T11:59:59.827575Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 29 11:59:59.828044 waagent[2028]: 2025-01-29T11:59:59.827672Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 29 11:59:59.870397 waagent[2028]: 2025-01-29T11:59:59.870265Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 29 11:59:59.870772 waagent[2028]: 2025-01-29T11:59:59.870690Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 11:59:59.870878 waagent[2028]: 2025-01-29T11:59:59.870838Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 11:59:59.890610 waagent[2028]: 2025-01-29T11:59:59.890476Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 11:59:59.897154 waagent[2028]: 2025-01-29T11:59:59.897061Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 29 11:59:59.897781 waagent[2028]: 2025-01-29T11:59:59.897718Z INFO ExtHandler Jan 29 11:59:59.897886 waagent[2028]: 2025-01-29T11:59:59.897823Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 058a7cfa-0366-48a8-9d20-01e96a35ac1b eTag: 12082570890623262465 source: Fabric] Jan 29 11:59:59.898228 waagent[2028]: 2025-01-29T11:59:59.898172Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 29 11:59:59.898847 waagent[2028]: 2025-01-29T11:59:59.898784Z INFO ExtHandler Jan 29 11:59:59.898944 waagent[2028]: 2025-01-29T11:59:59.898877Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 29 11:59:59.903022 waagent[2028]: 2025-01-29T11:59:59.902974Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 29 11:59:59.979073 waagent[2028]: 2025-01-29T11:59:59.978900Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C4B6716C6002EA410F7FD8FF6232F39BE739CFE8', 'hasPrivateKey': False} Jan 29 11:59:59.979580 waagent[2028]: 2025-01-29T11:59:59.979510Z INFO ExtHandler Downloaded certificate {'thumbprint': '22F49445EFAFB78354FB6D6178B8A59EBE2D81B1', 'hasPrivateKey': True} Jan 29 11:59:59.980084 waagent[2028]: 2025-01-29T11:59:59.980034Z INFO ExtHandler Fetch goal state completed Jan 29 11:59:59.998221 waagent[2028]: 2025-01-29T11:59:59.998130Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2028 Jan 29 11:59:59.998455 waagent[2028]: 2025-01-29T11:59:59.998377Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 29 12:00:00.000402 waagent[2028]: 2025-01-29T12:00:00.000326Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 29 12:00:00.000841 waagent[2028]: 2025-01-29T12:00:00.000783Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 29 12:00:00.046844 waagent[2028]: 2025-01-29T12:00:00.046789Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 29 12:00:00.047134 waagent[2028]: 2025-01-29T12:00:00.047076Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 29 12:00:00.056503 waagent[2028]: 2025-01-29T12:00:00.056454Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Jan 29 12:00:00.067366 systemd[1]: Reloading requested from client PID 2043 ('systemctl') (unit waagent.service)... Jan 29 12:00:00.067387 systemd[1]: Reloading... Jan 29 12:00:00.171519 zram_generator::config[2077]: No configuration found. Jan 29 12:00:00.302760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:00:00.380521 systemd[1]: Reloading finished in 312 ms. Jan 29 12:00:00.404707 waagent[2028]: 2025-01-29T12:00:00.404124Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 29 12:00:00.412577 systemd[1]: Reloading requested from client PID 2139 ('systemctl') (unit waagent.service)... Jan 29 12:00:00.412597 systemd[1]: Reloading... Jan 29 12:00:00.511472 zram_generator::config[2173]: No configuration found. Jan 29 12:00:00.647799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:00:00.728321 systemd[1]: Reloading finished in 315 ms. Jan 29 12:00:00.761046 waagent[2028]: 2025-01-29T12:00:00.760611Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 29 12:00:00.761046 waagent[2028]: 2025-01-29T12:00:00.760877Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 29 12:00:01.160096 waagent[2028]: 2025-01-29T12:00:01.159988Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 29 12:00:01.160894 waagent[2028]: 2025-01-29T12:00:01.160821Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 29 12:00:01.161794 waagent[2028]: 2025-01-29T12:00:01.161726Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 29 12:00:01.161971 waagent[2028]: 2025-01-29T12:00:01.161885Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 12:00:01.162473 waagent[2028]: 2025-01-29T12:00:01.162383Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 12:00:01.162636 waagent[2028]: 2025-01-29T12:00:01.162524Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 29 12:00:01.162757 waagent[2028]: 2025-01-29T12:00:01.162651Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 12:00:01.162802 waagent[2028]: 2025-01-29T12:00:01.162748Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 12:00:01.162975 waagent[2028]: 2025-01-29T12:00:01.162919Z INFO EnvHandler ExtHandler Configure routes Jan 29 12:00:01.163132 waagent[2028]: 2025-01-29T12:00:01.163087Z INFO EnvHandler ExtHandler Gateway:None Jan 29 12:00:01.163453 waagent[2028]: 2025-01-29T12:00:01.163377Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 29 12:00:01.163580 waagent[2028]: 2025-01-29T12:00:01.163520Z INFO EnvHandler ExtHandler Routes:None Jan 29 12:00:01.163864 waagent[2028]: 2025-01-29T12:00:01.163797Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 29 12:00:01.163948 waagent[2028]: 2025-01-29T12:00:01.163866Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 29 12:00:01.165145 waagent[2028]: 2025-01-29T12:00:01.165063Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 29 12:00:01.165470 waagent[2028]: 2025-01-29T12:00:01.165400Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Jan 29 12:00:01.165978 waagent[2028]: 2025-01-29T12:00:01.165936Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 29 12:00:01.166550 waagent[2028]: 2025-01-29T12:00:01.166432Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 29 12:00:01.166550 waagent[2028]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 29 12:00:01.166550 waagent[2028]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 29 12:00:01.166550 waagent[2028]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 29 12:00:01.166550 waagent[2028]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 29 12:00:01.166550 waagent[2028]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 29 12:00:01.166550 waagent[2028]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 29 12:00:01.177445 waagent[2028]: 2025-01-29T12:00:01.175709Z INFO ExtHandler ExtHandler Jan 29 12:00:01.177445 waagent[2028]: 2025-01-29T12:00:01.175812Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8c272fb8-8f22-4ba9-ab81-a5659c252a1c correlation 85a6b8a9-9e6f-44fb-af20-61e3bbb48425 created: 2025-01-29T11:58:49.782761Z] Jan 29 12:00:01.177445 waagent[2028]: 2025-01-29T12:00:01.176247Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 29 12:00:01.177445 waagent[2028]: 2025-01-29T12:00:01.177151Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 29 12:00:01.220435 waagent[2028]: 2025-01-29T12:00:01.220326Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9884D148-2559-4D33-A057-14879CACBDF7;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 29 12:00:01.230521 waagent[2028]: 2025-01-29T12:00:01.230396Z INFO MonitorHandler ExtHandler Network interfaces: Jan 29 12:00:01.230521 waagent[2028]: Executing ['ip', '-a', '-o', 'link']: Jan 29 12:00:01.230521 waagent[2028]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 29 12:00:01.230521 waagent[2028]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:a4:9e brd ff:ff:ff:ff:ff:ff Jan 29 12:00:01.230521 waagent[2028]: 3: enP6410s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:a4:9e brd ff:ff:ff:ff:ff:ff\ altname enP6410p0s2 Jan 29 12:00:01.230521 waagent[2028]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 29 12:00:01.230521 waagent[2028]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 29 12:00:01.230521 waagent[2028]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 29 12:00:01.230521 waagent[2028]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 29 12:00:01.230521 waagent[2028]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 29 12:00:01.230521 waagent[2028]: 2: eth0 inet6 fe80::20d:3aff:feb9:a49e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 29 12:00:01.230521 waagent[2028]: 3: enP6410s1 inet6 fe80::20d:3aff:feb9:a49e/64 scope link proto 
kernel_ll \ valid_lft forever preferred_lft forever Jan 29 12:00:01.330207 waagent[2028]: 2025-01-29T12:00:01.330115Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 29 12:00:01.330207 waagent[2028]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:00:01.330207 waagent[2028]: pkts bytes target prot opt in out source destination Jan 29 12:00:01.330207 waagent[2028]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:00:01.330207 waagent[2028]: pkts bytes target prot opt in out source destination Jan 29 12:00:01.330207 waagent[2028]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:00:01.330207 waagent[2028]: pkts bytes target prot opt in out source destination Jan 29 12:00:01.330207 waagent[2028]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 29 12:00:01.330207 waagent[2028]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 29 12:00:01.330207 waagent[2028]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 29 12:00:01.334108 waagent[2028]: 2025-01-29T12:00:01.334030Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 29 12:00:01.334108 waagent[2028]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:00:01.334108 waagent[2028]: pkts bytes target prot opt in out source destination Jan 29 12:00:01.334108 waagent[2028]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:00:01.334108 waagent[2028]: pkts bytes target prot opt in out source destination Jan 29 12:00:01.334108 waagent[2028]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:00:01.334108 waagent[2028]: pkts bytes target prot opt in out source destination Jan 29 12:00:01.334108 waagent[2028]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 29 12:00:01.334108 waagent[2028]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 29 12:00:01.334108 waagent[2028]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 
29 12:00:01.334580 waagent[2028]: 2025-01-29T12:00:01.334412Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 29 12:00:05.396986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:00:05.402726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:05.538643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:05.540607 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:06.067271 kubelet[2279]: E0129 12:00:06.067208 2279 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:06.071697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:06.072087 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:16.146985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:00:16.152742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:16.210577 chronyd[1780]: Selected source PHC0 Jan 29 12:00:16.273660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:00:16.290032 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:16.869268 kubelet[2300]: E0129 12:00:16.869206 2300 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:16.873199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:16.873503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:26.896974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 12:00:26.909690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:27.014610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:27.019610 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:27.599206 kubelet[2321]: E0129 12:00:27.599122 2321 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:27.601756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:27.602054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:28.134693 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 29 12:00:28.146737 systemd[1]: Started sshd@0-10.200.8.4:22-10.200.16.10:35382.service - OpenSSH per-connection server daemon (10.200.16.10:35382). Jan 29 12:00:28.861984 sshd[2330]: Accepted publickey for core from 10.200.16.10 port 35382 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:00:28.863668 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:28.869940 systemd-logind[1789]: New session 3 of user core. Jan 29 12:00:28.879886 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:00:29.431885 systemd[1]: Started sshd@1-10.200.8.4:22-10.200.16.10:35394.service - OpenSSH per-connection server daemon (10.200.16.10:35394). Jan 29 12:00:30.073527 sshd[2335]: Accepted publickey for core from 10.200.16.10 port 35394 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:00:30.075464 sshd[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:30.081195 systemd-logind[1789]: New session 4 of user core. Jan 29 12:00:30.086832 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:00:30.537271 sshd[2335]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:30.542094 systemd[1]: sshd@1-10.200.8.4:22-10.200.16.10:35394.service: Deactivated successfully. Jan 29 12:00:30.547322 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:00:30.548585 systemd-logind[1789]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:00:30.549540 systemd-logind[1789]: Removed session 4. Jan 29 12:00:30.651211 systemd[1]: Started sshd@2-10.200.8.4:22-10.200.16.10:35400.service - OpenSSH per-connection server daemon (10.200.16.10:35400). 
Jan 29 12:00:31.295604 sshd[2343]: Accepted publickey for core from 10.200.16.10 port 35400 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:00:31.297191 sshd[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:31.303749 systemd-logind[1789]: New session 5 of user core. Jan 29 12:00:31.313105 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:00:31.755635 sshd[2343]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:31.758918 systemd[1]: sshd@2-10.200.8.4:22-10.200.16.10:35400.service: Deactivated successfully. Jan 29 12:00:31.764445 systemd-logind[1789]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:00:31.765508 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:00:31.766505 systemd-logind[1789]: Removed session 5. Jan 29 12:00:31.875136 systemd[1]: Started sshd@3-10.200.8.4:22-10.200.16.10:35404.service - OpenSSH per-connection server daemon (10.200.16.10:35404). Jan 29 12:00:32.522351 sshd[2351]: Accepted publickey for core from 10.200.16.10 port 35404 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:00:32.524003 sshd[2351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:32.528488 systemd-logind[1789]: New session 6 of user core. Jan 29 12:00:32.540043 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:00:32.985469 sshd[2351]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:32.990793 systemd[1]: sshd@3-10.200.8.4:22-10.200.16.10:35404.service: Deactivated successfully. Jan 29 12:00:32.996856 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:00:32.998047 systemd-logind[1789]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:00:32.999121 systemd-logind[1789]: Removed session 6. 
Jan 29 12:00:33.107228 systemd[1]: Started sshd@4-10.200.8.4:22-10.200.16.10:35414.service - OpenSSH per-connection server daemon (10.200.16.10:35414). Jan 29 12:00:33.682273 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 29 12:00:33.750680 sshd[2359]: Accepted publickey for core from 10.200.16.10 port 35414 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:00:33.752334 sshd[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:33.758380 systemd-logind[1789]: New session 7 of user core. Jan 29 12:00:33.764074 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:00:34.240752 sudo[2363]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:00:34.241189 sudo[2363]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:00:34.274457 sudo[2363]: pam_unix(sudo:session): session closed for user root Jan 29 12:00:34.383322 sshd[2359]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:34.388928 systemd[1]: sshd@4-10.200.8.4:22-10.200.16.10:35414.service: Deactivated successfully. Jan 29 12:00:34.394081 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:00:34.394861 systemd-logind[1789]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:00:34.396153 systemd-logind[1789]: Removed session 7. Jan 29 12:00:34.497130 systemd[1]: Started sshd@5-10.200.8.4:22-10.200.16.10:35426.service - OpenSSH per-connection server daemon (10.200.16.10:35426). Jan 29 12:00:35.139270 sshd[2368]: Accepted publickey for core from 10.200.16.10 port 35426 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:00:35.141117 sshd[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:35.145489 systemd-logind[1789]: New session 8 of user core. Jan 29 12:00:35.152074 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 12:00:35.500727 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:00:35.501110 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:00:35.507043 sudo[2373]: pam_unix(sudo:session): session closed for user root Jan 29 12:00:35.517659 sudo[2372]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:00:35.518310 sudo[2372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:00:35.540832 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:00:35.543626 auditctl[2376]: No rules Jan 29 12:00:35.544324 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:00:35.544974 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:00:35.553384 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:00:35.585767 augenrules[2395]: No rules Jan 29 12:00:35.588156 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:00:35.592803 sudo[2372]: pam_unix(sudo:session): session closed for user root Jan 29 12:00:35.699861 sshd[2368]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:35.703273 systemd[1]: sshd@5-10.200.8.4:22-10.200.16.10:35426.service: Deactivated successfully. Jan 29 12:00:35.708655 systemd-logind[1789]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:00:35.709576 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:00:35.710632 systemd-logind[1789]: Removed session 8. Jan 29 12:00:35.812091 systemd[1]: Started sshd@6-10.200.8.4:22-10.200.16.10:35442.service - OpenSSH per-connection server daemon (10.200.16.10:35442). 
Jan 29 12:00:36.463937 sshd[2404]: Accepted publickey for core from 10.200.16.10 port 35442 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:00:36.465584 sshd[2404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:36.470875 systemd-logind[1789]: New session 9 of user core. Jan 29 12:00:36.478799 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:00:36.820163 sudo[2408]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:00:36.820576 sudo[2408]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:00:37.646748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 12:00:37.656739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:37.691557 update_engine[1798]: I20250129 12:00:37.690475 1798 update_attempter.cc:509] Updating boot flags... Jan 29 12:00:37.828802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:37.834243 (kubelet)[2438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:37.884792 kubelet[2438]: E0129 12:00:37.884735 2438 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:37.887814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:37.888131 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 12:00:38.357450 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2456) Jan 29 12:00:38.494647 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2459) Jan 29 12:00:38.648002 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:00:38.648051 (dockerd)[2511]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:00:39.756260 dockerd[2511]: time="2025-01-29T12:00:39.756178069Z" level=info msg="Starting up" Jan 29 12:00:40.312174 dockerd[2511]: time="2025-01-29T12:00:40.312114398Z" level=info msg="Loading containers: start." Jan 29 12:00:40.461445 kernel: Initializing XFRM netlink socket Jan 29 12:00:40.636917 systemd-networkd[1392]: docker0: Link UP Jan 29 12:00:40.661715 dockerd[2511]: time="2025-01-29T12:00:40.661668003Z" level=info msg="Loading containers: done." Jan 29 12:00:40.711273 dockerd[2511]: time="2025-01-29T12:00:40.711214925Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:00:40.711583 dockerd[2511]: time="2025-01-29T12:00:40.711367028Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:00:40.711583 dockerd[2511]: time="2025-01-29T12:00:40.711546632Z" level=info msg="Daemon has completed initialization" Jan 29 12:00:40.770371 dockerd[2511]: time="2025-01-29T12:00:40.770203441Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:00:40.771433 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 29 12:00:42.757432 containerd[1818]: time="2025-01-29T12:00:42.757379606Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:00:43.335397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121456718.mount: Deactivated successfully. Jan 29 12:00:45.292485 containerd[1818]: time="2025-01-29T12:00:45.292407065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:45.296599 containerd[1818]: time="2025-01-29T12:00:45.296517849Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 29 12:00:45.307407 containerd[1818]: time="2025-01-29T12:00:45.307318972Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:45.313561 containerd[1818]: time="2025-01-29T12:00:45.312271874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:45.313561 containerd[1818]: time="2025-01-29T12:00:45.313288195Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.555840888s" Jan 29 12:00:45.313561 containerd[1818]: time="2025-01-29T12:00:45.313330896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 12:00:45.335763 containerd[1818]: 
time="2025-01-29T12:00:45.335714057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:00:47.213147 containerd[1818]: time="2025-01-29T12:00:47.213072659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:47.215314 containerd[1818]: time="2025-01-29T12:00:47.215222503Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 29 12:00:47.218887 containerd[1818]: time="2025-01-29T12:00:47.218811477Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:47.224263 containerd[1818]: time="2025-01-29T12:00:47.224164587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:47.226037 containerd[1818]: time="2025-01-29T12:00:47.225361812Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.889599854s" Jan 29 12:00:47.226037 containerd[1818]: time="2025-01-29T12:00:47.225433113Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 12:00:47.248687 containerd[1818]: time="2025-01-29T12:00:47.248637992Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 
12:00:47.897261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 12:00:47.906715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:48.035755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:48.048949 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:48.665761 kubelet[2729]: E0129 12:00:48.665700 2729 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:48.668966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:48.669284 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 12:00:49.210827 containerd[1818]: time="2025-01-29T12:00:49.210747148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:49.213122 containerd[1818]: time="2025-01-29T12:00:49.213030403Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 29 12:00:49.219873 containerd[1818]: time="2025-01-29T12:00:49.219780166Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:49.228151 containerd[1818]: time="2025-01-29T12:00:49.227895062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:49.228940 containerd[1818]: time="2025-01-29T12:00:49.228741983Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.980057791s" Jan 29 12:00:49.228940 containerd[1818]: time="2025-01-29T12:00:49.228787984Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 12:00:49.250138 containerd[1818]: time="2025-01-29T12:00:49.250092199Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:00:50.618177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022805581.mount: Deactivated successfully. 
Jan 29 12:00:51.152138 containerd[1818]: time="2025-01-29T12:00:51.152070518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:51.154235 containerd[1818]: time="2025-01-29T12:00:51.154154568Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 12:00:51.157764 containerd[1818]: time="2025-01-29T12:00:51.157708754Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:51.163269 containerd[1818]: time="2025-01-29T12:00:51.163203787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:51.164022 containerd[1818]: time="2025-01-29T12:00:51.163828502Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.913682902s" Jan 29 12:00:51.164022 containerd[1818]: time="2025-01-29T12:00:51.163868903Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:00:51.190466 containerd[1818]: time="2025-01-29T12:00:51.190409145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:00:51.930304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160420435.mount: Deactivated successfully. 
Jan 29 12:00:53.282696 containerd[1818]: time="2025-01-29T12:00:53.282623366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:53.286335 containerd[1818]: time="2025-01-29T12:00:53.286245354Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 29 12:00:53.290089 containerd[1818]: time="2025-01-29T12:00:53.289993444Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:53.296320 containerd[1818]: time="2025-01-29T12:00:53.296233495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:53.298192 containerd[1818]: time="2025-01-29T12:00:53.297672630Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.107201284s" Jan 29 12:00:53.298192 containerd[1818]: time="2025-01-29T12:00:53.297723131Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:00:53.323337 containerd[1818]: time="2025-01-29T12:00:53.323293950Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:00:53.932827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922052851.mount: Deactivated successfully. 
Jan 29 12:00:53.957043 containerd[1818]: time="2025-01-29T12:00:53.956958882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:53.959291 containerd[1818]: time="2025-01-29T12:00:53.959209936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jan 29 12:00:53.964076 containerd[1818]: time="2025-01-29T12:00:53.963993052Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:53.969267 containerd[1818]: time="2025-01-29T12:00:53.969192478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:53.970140 containerd[1818]: time="2025-01-29T12:00:53.969969896Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 646.636145ms"
Jan 29 12:00:53.970140 containerd[1818]: time="2025-01-29T12:00:53.970015697Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 29 12:00:54.003178 containerd[1818]: time="2025-01-29T12:00:54.003091898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 12:00:54.670120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460374162.mount: Deactivated successfully.
Jan 29 12:00:57.350645 containerd[1818]: time="2025-01-29T12:00:57.350580217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:57.353606 containerd[1818]: time="2025-01-29T12:00:57.353311462Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Jan 29 12:00:57.358332 containerd[1818]: time="2025-01-29T12:00:57.357859138Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:57.362834 containerd[1818]: time="2025-01-29T12:00:57.362785419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:57.363995 containerd[1818]: time="2025-01-29T12:00:57.363945339Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.360804539s"
Jan 29 12:00:57.364104 containerd[1818]: time="2025-01-29T12:00:57.363998539Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 29 12:00:58.896979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 29 12:00:58.907020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:00:59.058883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:00:59.068940 (kubelet)[2929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:00:59.131445 kubelet[2929]: E0129 12:00:59.130833    2929 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:00:59.134487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:00:59.134714 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:01:00.616822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:01:00.622756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:01:00.657989 systemd[1]: Reloading requested from client PID 2946 ('systemctl') (unit session-9.scope)...
Jan 29 12:01:00.658203 systemd[1]: Reloading...
Jan 29 12:01:00.793548 zram_generator::config[2985]: No configuration found.
Jan 29 12:01:00.931406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:01:01.008747 systemd[1]: Reloading finished in 349 ms.
Jan 29 12:01:01.058472 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 12:01:01.058606 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 12:01:01.059031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:01:01.065321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:01:01.314652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:01:01.325877 (kubelet)[3068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 12:01:01.369816 kubelet[3068]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:01:01.369816 kubelet[3068]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 12:01:01.369816 kubelet[3068]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:01:01.946522 kubelet[3068]: I0129 12:01:01.946432    3068 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 12:01:02.320752 kubelet[3068]: I0129 12:01:02.320618    3068 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 12:01:02.320752 kubelet[3068]: I0129 12:01:02.320652    3068 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:01:02.321157 kubelet[3068]: I0129 12:01:02.321124    3068 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 12:01:02.345084 kubelet[3068]: I0129 12:01:02.344177    3068 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 12:01:02.345084 kubelet[3068]: E0129 12:01:02.344855    3068 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:02.358188 kubelet[3068]: I0129 12:01:02.358151    3068 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 12:01:02.360724 kubelet[3068]: I0129 12:01:02.360663    3068 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:01:02.360979 kubelet[3068]: I0129 12:01:02.360721    3068 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-b5939ece28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 12:01:02.361375 kubelet[3068]: I0129 12:01:02.361351    3068 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:01:02.361459 kubelet[3068]: I0129 12:01:02.361381    3068 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 12:01:02.361618 kubelet[3068]: I0129 12:01:02.361595    3068 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:01:02.362609 kubelet[3068]: I0129 12:01:02.362587    3068 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 12:01:02.362609 kubelet[3068]: I0129 12:01:02.362613    3068 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:01:02.362745 kubelet[3068]: I0129 12:01:02.362651    3068 kubelet.go:312] "Adding apiserver pod source"
Jan 29 12:01:02.362745 kubelet[3068]: I0129 12:01:02.362674    3068 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:01:02.368131 kubelet[3068]: W0129 12:01:02.367740    3068 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:02.368131 kubelet[3068]: E0129 12:01:02.367806    3068 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:02.368131 kubelet[3068]: W0129 12:01:02.367886    3068 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b5939ece28&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:02.368131 kubelet[3068]: E0129 12:01:02.367923    3068 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b5939ece28&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:02.368724 kubelet[3068]: I0129 12:01:02.368481    3068 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 29 12:01:02.371412 kubelet[3068]: I0129 12:01:02.370097    3068 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 12:01:02.371412 kubelet[3068]: W0129 12:01:02.370175    3068 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 12:01:02.371412 kubelet[3068]: I0129 12:01:02.371270    3068 server.go:1264] "Started kubelet"
Jan 29 12:01:02.373080 kubelet[3068]: I0129 12:01:02.373051    3068 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 12:01:02.375050 kubelet[3068]: I0129 12:01:02.374171    3068 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 12:01:02.378444 kubelet[3068]: I0129 12:01:02.377540    3068 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 12:01:02.378444 kubelet[3068]: I0129 12:01:02.377874    3068 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 12:01:02.378444 kubelet[3068]: I0129 12:01:02.378061    3068 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 12:01:02.378444 kubelet[3068]: E0129 12:01:02.378061    3068 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-b5939ece28.181f28177340211d  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-b5939ece28,UID:ci-4081.3.0-a-b5939ece28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-b5939ece28,},FirstTimestamp:2025-01-29 12:01:02.371242269 +0000 UTC m=+1.041147686,LastTimestamp:2025-01-29 12:01:02.371242269 +0000 UTC m=+1.041147686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-b5939ece28,}"
Jan 29 12:01:02.384854 kubelet[3068]: E0129 12:01:02.384822    3068 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b5939ece28\" not found"
Jan 29 12:01:02.385089 kubelet[3068]: I0129 12:01:02.385073    3068 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 12:01:02.385347 kubelet[3068]: I0129 12:01:02.385331    3068 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 12:01:02.385519 kubelet[3068]: I0129 12:01:02.385507    3068 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 12:01:02.386828 kubelet[3068]: E0129 12:01:02.386792    3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b5939ece28?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="200ms"
Jan 29 12:01:02.387097 kubelet[3068]: W0129 12:01:02.387047    3068 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:02.387199 kubelet[3068]: E0129 12:01:02.387187    3068 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:02.387538 kubelet[3068]: I0129 12:01:02.387519    3068 factory.go:221] Registration of the systemd container factory successfully
Jan 29 12:01:02.387790 kubelet[3068]: I0129 12:01:02.387703    3068 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 12:01:02.391445 kubelet[3068]: I0129 12:01:02.390863    3068 factory.go:221] Registration of the containerd container factory successfully
Jan 29 12:01:02.422569 kubelet[3068]: I0129 12:01:02.422510    3068 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 12:01:02.424356 kubelet[3068]: I0129 12:01:02.424313    3068 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 12:01:02.424356 kubelet[3068]: I0129 12:01:02.424345    3068 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 12:01:02.424565 kubelet[3068]: I0129 12:01:02.424377    3068 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 12:01:02.424565 kubelet[3068]: E0129 12:01:02.424454    3068 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 12:01:02.432112 kubelet[3068]: W0129 12:01:02.432042    3068 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:02.432676 kubelet[3068]: E0129 12:01:02.432319    3068 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:02.525262 kubelet[3068]: E0129 12:01:02.525188    3068 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 12:01:02.588162 kubelet[3068]: E0129 12:01:02.588004    3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b5939ece28?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="400ms"
Jan 29 12:01:02.591254 kubelet[3068]: I0129 12:01:02.590805    3068 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.591254 kubelet[3068]: E0129 12:01:02.591200    3068 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.591920 kubelet[3068]: I0129 12:01:02.591829    3068 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 12:01:02.591920 kubelet[3068]: I0129 12:01:02.591846    3068 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 12:01:02.591920 kubelet[3068]: I0129 12:01:02.591872    3068 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:01:02.597618 kubelet[3068]: I0129 12:01:02.597587    3068 policy_none.go:49] "None policy: Start"
Jan 29 12:01:02.598468 kubelet[3068]: I0129 12:01:02.598449    3068 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 12:01:02.598575 kubelet[3068]: I0129 12:01:02.598560    3068 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 12:01:02.605765 kubelet[3068]: I0129 12:01:02.605721    3068 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 12:01:02.606023 kubelet[3068]: I0129 12:01:02.605981    3068 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 12:01:02.606144 kubelet[3068]: I0129 12:01:02.606128    3068 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 12:01:02.610245 kubelet[3068]: E0129 12:01:02.610199    3068 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-b5939ece28\" not found"
Jan 29 12:01:02.726044 kubelet[3068]: I0129 12:01:02.725950    3068 topology_manager.go:215] "Topology Admit Handler" podUID="a3928b02edf0c3dc253d2389fb97429b" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.728368 kubelet[3068]: I0129 12:01:02.728326    3068 topology_manager.go:215] "Topology Admit Handler" podUID="8c692c6b5e7387a2eeb67d0dda6975e3" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.731109 kubelet[3068]: I0129 12:01:02.730317    3068 topology_manager.go:215] "Topology Admit Handler" podUID="e36ac648fb0a002878380669dab7499a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.789790 kubelet[3068]: I0129 12:01:02.789728    3068 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e36ac648fb0a002878380669dab7499a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b5939ece28\" (UID: \"e36ac648fb0a002878380669dab7499a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.790086 kubelet[3068]: I0129 12:01:02.790035    3068 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e36ac648fb0a002878380669dab7499a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-b5939ece28\" (UID: \"e36ac648fb0a002878380669dab7499a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.790086 kubelet[3068]: I0129 12:01:02.790079    3068 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e36ac648fb0a002878380669dab7499a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b5939ece28\" (UID: \"e36ac648fb0a002878380669dab7499a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.790445 kubelet[3068]: I0129 12:01:02.790113    3068 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e36ac648fb0a002878380669dab7499a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-b5939ece28\" (UID: \"e36ac648fb0a002878380669dab7499a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.790445 kubelet[3068]: I0129 12:01:02.790145    3068 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3928b02edf0c3dc253d2389fb97429b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-b5939ece28\" (UID: \"a3928b02edf0c3dc253d2389fb97429b\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.790445 kubelet[3068]: I0129 12:01:02.790177    3068 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c692c6b5e7387a2eeb67d0dda6975e3-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b5939ece28\" (UID: \"8c692c6b5e7387a2eeb67d0dda6975e3\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.790445 kubelet[3068]: I0129 12:01:02.790205    3068 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e36ac648fb0a002878380669dab7499a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-b5939ece28\" (UID: \"e36ac648fb0a002878380669dab7499a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.790445 kubelet[3068]: I0129 12:01:02.790242    3068 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c692c6b5e7387a2eeb67d0dda6975e3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b5939ece28\" (UID: \"8c692c6b5e7387a2eeb67d0dda6975e3\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.790630 kubelet[3068]: I0129 12:01:02.790272    3068 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c692c6b5e7387a2eeb67d0dda6975e3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-b5939ece28\" (UID: \"8c692c6b5e7387a2eeb67d0dda6975e3\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.794097 kubelet[3068]: I0129 12:01:02.794061    3068 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.794737 kubelet[3068]: E0129 12:01:02.794686    3068 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:02.988904 kubelet[3068]: E0129 12:01:02.988840    3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b5939ece28?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="800ms"
Jan 29 12:01:03.035384 containerd[1818]: time="2025-01-29T12:01:03.035322394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-b5939ece28,Uid:a3928b02edf0c3dc253d2389fb97429b,Namespace:kube-system,Attempt:0,}"
Jan 29 12:01:03.038020 containerd[1818]: time="2025-01-29T12:01:03.037913537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-b5939ece28,Uid:8c692c6b5e7387a2eeb67d0dda6975e3,Namespace:kube-system,Attempt:0,}"
Jan 29 12:01:03.039798 containerd[1818]: time="2025-01-29T12:01:03.039741468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-b5939ece28,Uid:e36ac648fb0a002878380669dab7499a,Namespace:kube-system,Attempt:0,}"
Jan 29 12:01:03.199655 kubelet[3068]: I0129 12:01:03.199608    3068 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:03.200264 kubelet[3068]: E0129 12:01:03.200229    3068 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:03.211863 kubelet[3068]: W0129 12:01:03.211790    3068 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:03.211863 kubelet[3068]: E0129 12:01:03.211872    3068 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:03.212090 kubelet[3068]: W0129 12:01:03.211790    3068 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b5939ece28&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:03.212090 kubelet[3068]: E0129 12:01:03.211899    3068 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b5939ece28&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:03.373940 kubelet[3068]: W0129 12:01:03.373772    3068 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:03.373940 kubelet[3068]: E0129 12:01:03.373846    3068 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:03.406968 kubelet[3068]: W0129 12:01:03.406809    3068 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:03.406968 kubelet[3068]: E0129 12:01:03.406981    3068 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:03.656331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3398486030.mount: Deactivated successfully.
Jan 29 12:01:03.685603 containerd[1818]: time="2025-01-29T12:01:03.685521444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:01:03.688993 containerd[1818]: time="2025-01-29T12:01:03.688936329Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:01:03.691786 containerd[1818]: time="2025-01-29T12:01:03.691720799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 29 12:01:03.694799 containerd[1818]: time="2025-01-29T12:01:03.694756274Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 12:01:03.700801 containerd[1818]: time="2025-01-29T12:01:03.700761124Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:01:03.704723 containerd[1818]: time="2025-01-29T12:01:03.704684521Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:01:03.707631 containerd[1818]: time="2025-01-29T12:01:03.707544592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 12:01:03.718017 containerd[1818]: time="2025-01-29T12:01:03.717939051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:01:03.719103 containerd[1818]: time="2025-01-29T12:01:03.718765772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 683.313075ms"
Jan 29 12:01:03.720320 containerd[1818]: time="2025-01-29T12:01:03.720275009Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 680.46154ms"
Jan 29 12:01:03.723802 containerd[1818]: time="2025-01-29T12:01:03.723761996Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 685.553054ms"
Jan 29 12:01:03.790166 kubelet[3068]: E0129 12:01:03.790109    3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b5939ece28?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="1.6s"
Jan 29 12:01:04.003002 kubelet[3068]: I0129 12:01:04.002861    3068 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:04.003392 kubelet[3068]: E0129 12:01:04.003347    3068 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4081.3.0-a-b5939ece28"
Jan 29 12:01:04.406518 kubelet[3068]: E0129 12:01:04.406479    3068 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused
Jan 29 12:01:04.542754 containerd[1818]: time="2025-01-29T12:01:04.541117376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:01:04.543504 containerd[1818]: time="2025-01-29T12:01:04.543384760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:01:04.545155 containerd[1818]: time="2025-01-29T12:01:04.544797550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:04.545155 containerd[1818]: time="2025-01-29T12:01:04.544969349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:04.545785 containerd[1818]: time="2025-01-29T12:01:04.545694644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:01:04.546125 containerd[1818]: time="2025-01-29T12:01:04.545773043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:01:04.546125 containerd[1818]: time="2025-01-29T12:01:04.545918842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:04.546125 containerd[1818]: time="2025-01-29T12:01:04.546054941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:04.547890 containerd[1818]: time="2025-01-29T12:01:04.547534231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:01:04.549041 containerd[1818]: time="2025-01-29T12:01:04.548931521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:01:04.549041 containerd[1818]: time="2025-01-29T12:01:04.548962120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:04.549334 containerd[1818]: time="2025-01-29T12:01:04.549286018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:04.679599 containerd[1818]: time="2025-01-29T12:01:04.679447188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-b5939ece28,Uid:8c692c6b5e7387a2eeb67d0dda6975e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cbbcaf9caaf84a1de74da23fda08604a1e8f1020c65cf472eae1aa01f046d87\"" Jan 29 12:01:04.680730 containerd[1818]: time="2025-01-29T12:01:04.680678479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-b5939ece28,Uid:e36ac648fb0a002878380669dab7499a,Namespace:kube-system,Attempt:0,} returns sandbox id \"52e8856f6547ccb42f18aeb2aac881c0c2b864c7ebdd254e0c9e0875d9b4aaa2\"" Jan 29 12:01:04.694770 containerd[1818]: time="2025-01-29T12:01:04.694718078Z" level=info msg="CreateContainer within sandbox \"8cbbcaf9caaf84a1de74da23fda08604a1e8f1020c65cf472eae1aa01f046d87\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:01:04.694972 containerd[1818]: time="2025-01-29T12:01:04.694945077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-b5939ece28,Uid:a3928b02edf0c3dc253d2389fb97429b,Namespace:kube-system,Attempt:0,} returns sandbox id \"79c041ccff2b3b65630329f49d8b241e127453d109ebcd165496775c820153e3\"" Jan 29 12:01:04.696613 containerd[1818]: time="2025-01-29T12:01:04.696573465Z" level=info msg="CreateContainer within sandbox \"52e8856f6547ccb42f18aeb2aac881c0c2b864c7ebdd254e0c9e0875d9b4aaa2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:01:04.715938 containerd[1818]: time="2025-01-29T12:01:04.715841627Z" level=info msg="CreateContainer within sandbox \"79c041ccff2b3b65630329f49d8b241e127453d109ebcd165496775c820153e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:01:04.765251 containerd[1818]: time="2025-01-29T12:01:04.765186875Z" level=info msg="CreateContainer within sandbox 
\"8cbbcaf9caaf84a1de74da23fda08604a1e8f1020c65cf472eae1aa01f046d87\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7fc166dfa75c1b36aae44ed11452de9e631d754863adc5c833ae09bf20b80632\"" Jan 29 12:01:04.766087 containerd[1818]: time="2025-01-29T12:01:04.766049369Z" level=info msg="StartContainer for \"7fc166dfa75c1b36aae44ed11452de9e631d754863adc5c833ae09bf20b80632\"" Jan 29 12:01:04.787882 containerd[1818]: time="2025-01-29T12:01:04.787831413Z" level=info msg="CreateContainer within sandbox \"79c041ccff2b3b65630329f49d8b241e127453d109ebcd165496775c820153e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"be2c97d551d746bbda05f5246568914718bb0376e4c8e7454944723a67cd739b\"" Jan 29 12:01:04.789867 containerd[1818]: time="2025-01-29T12:01:04.788777206Z" level=info msg="StartContainer for \"be2c97d551d746bbda05f5246568914718bb0376e4c8e7454944723a67cd739b\"" Jan 29 12:01:04.800532 containerd[1818]: time="2025-01-29T12:01:04.800472122Z" level=info msg="CreateContainer within sandbox \"52e8856f6547ccb42f18aeb2aac881c0c2b864c7ebdd254e0c9e0875d9b4aaa2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a5672eaf7e433e9fc152b2fafca045d0ad45d751df447f0e10df93d62b533b26\"" Jan 29 12:01:04.801556 containerd[1818]: time="2025-01-29T12:01:04.801525015Z" level=info msg="StartContainer for \"a5672eaf7e433e9fc152b2fafca045d0ad45d751df447f0e10df93d62b533b26\"" Jan 29 12:01:04.917040 containerd[1818]: time="2025-01-29T12:01:04.916901090Z" level=info msg="StartContainer for \"7fc166dfa75c1b36aae44ed11452de9e631d754863adc5c833ae09bf20b80632\" returns successfully" Jan 29 12:01:04.950337 containerd[1818]: time="2025-01-29T12:01:04.950164052Z" level=info msg="StartContainer for \"be2c97d551d746bbda05f5246568914718bb0376e4c8e7454944723a67cd739b\" returns successfully" Jan 29 12:01:05.036376 containerd[1818]: time="2025-01-29T12:01:05.036314237Z" level=info msg="StartContainer for 
\"a5672eaf7e433e9fc152b2fafca045d0ad45d751df447f0e10df93d62b533b26\" returns successfully" Jan 29 12:01:05.608589 kubelet[3068]: I0129 12:01:05.608550 3068 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b5939ece28" Jan 29 12:01:07.046783 kubelet[3068]: E0129 12:01:07.046711 3068 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-b5939ece28\" not found" node="ci-4081.3.0-a-b5939ece28" Jan 29 12:01:07.128271 kubelet[3068]: I0129 12:01:07.128214 3068 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-b5939ece28" Jan 29 12:01:07.368559 kubelet[3068]: I0129 12:01:07.367983 3068 apiserver.go:52] "Watching apiserver" Jan 29 12:01:07.386410 kubelet[3068]: I0129 12:01:07.386317 3068 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:01:09.400621 systemd[1]: Reloading requested from client PID 3341 ('systemctl') (unit session-9.scope)... Jan 29 12:01:09.401553 systemd[1]: Reloading... Jan 29 12:01:09.566317 zram_generator::config[3384]: No configuration found. Jan 29 12:01:09.692301 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:09.777656 systemd[1]: Reloading finished in 373 ms. Jan 29 12:01:09.809412 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:09.831066 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:01:09.831764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:09.839800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:09.992669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:01:10.002822 (kubelet)[3458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:01:10.583028 kubelet[3458]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:01:10.583028 kubelet[3458]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:01:10.583028 kubelet[3458]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:01:10.585102 kubelet[3458]: I0129 12:01:10.583040 3458 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:01:10.596081 kubelet[3458]: I0129 12:01:10.596002 3458 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:01:10.596081 kubelet[3458]: I0129 12:01:10.596035 3458 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:01:10.596643 kubelet[3458]: I0129 12:01:10.596342 3458 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:01:10.598052 kubelet[3458]: I0129 12:01:10.598025 3458 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:01:10.601455 kubelet[3458]: I0129 12:01:10.599599 3458 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:01:10.607135 kubelet[3458]: I0129 12:01:10.607105 3458 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:01:10.607713 kubelet[3458]: I0129 12:01:10.607670 3458 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:01:10.607916 kubelet[3458]: I0129 12:01:10.607713 3458 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-b5939ece28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:01:10.608086 kubelet[3458]: I0129 12:01:10.607941 3458 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 29 12:01:10.608086 kubelet[3458]: I0129 12:01:10.607956 3458 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:01:10.608086 kubelet[3458]: I0129 12:01:10.608021 3458 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:10.608249 kubelet[3458]: I0129 12:01:10.608171 3458 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:01:10.610473 kubelet[3458]: I0129 12:01:10.608190 3458 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:01:10.610473 kubelet[3458]: I0129 12:01:10.608893 3458 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:01:10.610473 kubelet[3458]: I0129 12:01:10.610436 3458 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:01:10.611580 kubelet[3458]: I0129 12:01:10.611509 3458 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:01:10.612100 kubelet[3458]: I0129 12:01:10.611727 3458 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:01:10.612650 kubelet[3458]: I0129 12:01:10.612202 3458 server.go:1264] "Started kubelet" Jan 29 12:01:10.621128 kubelet[3458]: I0129 12:01:10.621091 3458 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:01:10.622429 kubelet[3458]: I0129 12:01:10.622050 3458 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:01:10.631471 kubelet[3458]: I0129 12:01:10.630484 3458 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:01:10.632007 kubelet[3458]: I0129 12:01:10.631989 3458 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:01:10.642963 kubelet[3458]: I0129 12:01:10.642934 3458 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:01:10.648599 kubelet[3458]: I0129 12:01:10.648567 3458 
server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:01:10.657711 kubelet[3458]: I0129 12:01:10.657627 3458 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:01:10.658124 kubelet[3458]: I0129 12:01:10.658110 3458 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:01:10.661821 kubelet[3458]: I0129 12:01:10.660415 3458 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:01:10.661821 kubelet[3458]: I0129 12:01:10.661605 3458 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:01:10.665779 kubelet[3458]: I0129 12:01:10.665382 3458 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:01:10.669256 kubelet[3458]: I0129 12:01:10.669231 3458 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:01:10.670028 kubelet[3458]: I0129 12:01:10.669373 3458 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:01:10.670028 kubelet[3458]: I0129 12:01:10.669400 3458 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:01:10.670808 kubelet[3458]: E0129 12:01:10.670778 3458 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:01:10.680820 kubelet[3458]: E0129 12:01:10.679920 3458 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:01:10.680967 kubelet[3458]: I0129 12:01:10.680928 3458 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:01:10.752606 kubelet[3458]: I0129 12:01:10.750903 3458 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.764290 kubelet[3458]: I0129 12:01:10.764239 3458 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:01:10.764290 kubelet[3458]: I0129 12:01:10.764270 3458 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:01:10.764290 kubelet[3458]: I0129 12:01:10.764304 3458 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:10.767170 kubelet[3458]: I0129 12:01:10.766170 3458 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:01:10.767170 kubelet[3458]: I0129 12:01:10.766194 3458 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:01:10.767170 kubelet[3458]: I0129 12:01:10.766222 3458 policy_none.go:49] "None policy: Start" Jan 29 12:01:10.768194 kubelet[3458]: I0129 12:01:10.768165 3458 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:01:10.768316 kubelet[3458]: I0129 12:01:10.768208 3458 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:01:10.769372 kubelet[3458]: I0129 12:01:10.769344 3458 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.769507 kubelet[3458]: I0129 12:01:10.769459 3458 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.770478 kubelet[3458]: I0129 12:01:10.769707 3458 state_mem.go:75] "Updated machine memory state" Jan 29 12:01:10.773551 kubelet[3458]: I0129 12:01:10.773531 3458 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:01:10.774540 
kubelet[3458]: I0129 12:01:10.773816 3458 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:01:10.774540 kubelet[3458]: I0129 12:01:10.773939 3458 topology_manager.go:215] "Topology Admit Handler" podUID="8c692c6b5e7387a2eeb67d0dda6975e3" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.774540 kubelet[3458]: I0129 12:01:10.774023 3458 topology_manager.go:215] "Topology Admit Handler" podUID="e36ac648fb0a002878380669dab7499a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.774540 kubelet[3458]: I0129 12:01:10.774093 3458 topology_manager.go:215] "Topology Admit Handler" podUID="a3928b02edf0c3dc253d2389fb97429b" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.781300 kubelet[3458]: I0129 12:01:10.779534 3458 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:01:10.800756 kubelet[3458]: W0129 12:01:10.800620 3458 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:01:10.805798 kubelet[3458]: W0129 12:01:10.803958 3458 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:01:10.805798 kubelet[3458]: W0129 12:01:10.804031 3458 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:01:10.860477 kubelet[3458]: I0129 12:01:10.860315 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c692c6b5e7387a2eeb67d0dda6975e3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b5939ece28\" (UID: 
\"8c692c6b5e7387a2eeb67d0dda6975e3\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.860477 kubelet[3458]: I0129 12:01:10.860370 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e36ac648fb0a002878380669dab7499a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b5939ece28\" (UID: \"e36ac648fb0a002878380669dab7499a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.860477 kubelet[3458]: I0129 12:01:10.860400 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e36ac648fb0a002878380669dab7499a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-b5939ece28\" (UID: \"e36ac648fb0a002878380669dab7499a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.860477 kubelet[3458]: I0129 12:01:10.860436 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e36ac648fb0a002878380669dab7499a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-b5939ece28\" (UID: \"e36ac648fb0a002878380669dab7499a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.860477 kubelet[3458]: I0129 12:01:10.860462 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3928b02edf0c3dc253d2389fb97429b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-b5939ece28\" (UID: \"a3928b02edf0c3dc253d2389fb97429b\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.860766 kubelet[3458]: I0129 12:01:10.860483 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c692c6b5e7387a2eeb67d0dda6975e3-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b5939ece28\" (UID: \"8c692c6b5e7387a2eeb67d0dda6975e3\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.860766 kubelet[3458]: I0129 12:01:10.860505 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c692c6b5e7387a2eeb67d0dda6975e3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-b5939ece28\" (UID: \"8c692c6b5e7387a2eeb67d0dda6975e3\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.860766 kubelet[3458]: I0129 12:01:10.860525 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e36ac648fb0a002878380669dab7499a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-b5939ece28\" (UID: \"e36ac648fb0a002878380669dab7499a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:10.860766 kubelet[3458]: I0129 12:01:10.860547 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e36ac648fb0a002878380669dab7499a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b5939ece28\" (UID: \"e36ac648fb0a002878380669dab7499a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28" Jan 29 12:01:11.613320 kubelet[3458]: I0129 12:01:11.613253 3458 apiserver.go:52] "Watching apiserver" Jan 29 12:01:11.658550 kubelet[3458]: I0129 12:01:11.658478 3458 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:01:11.844320 kubelet[3458]: I0129 12:01:11.842130 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b5939ece28" podStartSLOduration=1.8421055050000001 podStartE2EDuration="1.842105505s" podCreationTimestamp="2025-01-29 12:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:11.841808498 +0000 UTC m=+1.834124435" watchObservedRunningTime="2025-01-29 12:01:11.842105505 +0000 UTC m=+1.834421442" Jan 29 12:01:11.844320 kubelet[3458]: I0129 12:01:11.842270 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-b5939ece28" podStartSLOduration=1.842263908 podStartE2EDuration="1.842263908s" podCreationTimestamp="2025-01-29 12:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:11.812665546 +0000 UTC m=+1.804981483" watchObservedRunningTime="2025-01-29 12:01:11.842263908 +0000 UTC m=+1.834579845" Jan 29 12:01:11.900435 kubelet[3458]: I0129 12:01:11.900352 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b5939ece28" podStartSLOduration=1.9003252069999998 podStartE2EDuration="1.900325207s" podCreationTimestamp="2025-01-29 12:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:11.864761912 +0000 UTC m=+1.857077849" watchObservedRunningTime="2025-01-29 12:01:11.900325207 +0000 UTC m=+1.892641144" Jan 29 12:01:16.134046 sudo[2408]: pam_unix(sudo:session): session closed for user root Jan 29 12:01:16.239618 sshd[2404]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:16.247653 systemd[1]: sshd@6-10.200.8.4:22-10.200.16.10:35442.service: Deactivated successfully. Jan 29 12:01:16.251840 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 29 12:01:16.253264 systemd-logind[1789]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:01:16.255273 systemd-logind[1789]: Removed session 9. Jan 29 12:01:22.889185 kubelet[3458]: I0129 12:01:22.889132 3458 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:01:22.889818 containerd[1818]: time="2025-01-29T12:01:22.889664492Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:01:22.890213 kubelet[3458]: I0129 12:01:22.889864 3458 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:01:24.156454 kubelet[3458]: I0129 12:01:24.154769 3458 topology_manager.go:215] "Topology Admit Handler" podUID="1bb49aa7-39a8-4521-8e84-039aea3880cb" podNamespace="kube-system" podName="kube-proxy-vmdfj" Jan 29 12:01:24.260072 kubelet[3458]: I0129 12:01:24.259971 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bb49aa7-39a8-4521-8e84-039aea3880cb-xtables-lock\") pod \"kube-proxy-vmdfj\" (UID: \"1bb49aa7-39a8-4521-8e84-039aea3880cb\") " pod="kube-system/kube-proxy-vmdfj" Jan 29 12:01:24.260072 kubelet[3458]: I0129 12:01:24.260029 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bb49aa7-39a8-4521-8e84-039aea3880cb-lib-modules\") pod \"kube-proxy-vmdfj\" (UID: \"1bb49aa7-39a8-4521-8e84-039aea3880cb\") " pod="kube-system/kube-proxy-vmdfj" Jan 29 12:01:24.260339 kubelet[3458]: I0129 12:01:24.260123 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1bb49aa7-39a8-4521-8e84-039aea3880cb-kube-proxy\") pod \"kube-proxy-vmdfj\" (UID: \"1bb49aa7-39a8-4521-8e84-039aea3880cb\") " 
pod="kube-system/kube-proxy-vmdfj"
Jan 29 12:01:24.260339 kubelet[3458]: I0129 12:01:24.260162 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2fqq\" (UniqueName: \"kubernetes.io/projected/1bb49aa7-39a8-4521-8e84-039aea3880cb-kube-api-access-h2fqq\") pod \"kube-proxy-vmdfj\" (UID: \"1bb49aa7-39a8-4521-8e84-039aea3880cb\") " pod="kube-system/kube-proxy-vmdfj"
Jan 29 12:01:24.341009 kubelet[3458]: I0129 12:01:24.340947 3458 topology_manager.go:215] "Topology Admit Handler" podUID="40bae078-ad08-44cf-b74d-f9fe6f1a03aa" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-rmkv5"
Jan 29 12:01:24.360970 kubelet[3458]: I0129 12:01:24.360926 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5wvj\" (UniqueName: \"kubernetes.io/projected/40bae078-ad08-44cf-b74d-f9fe6f1a03aa-kube-api-access-z5wvj\") pod \"tigera-operator-7bc55997bb-rmkv5\" (UID: \"40bae078-ad08-44cf-b74d-f9fe6f1a03aa\") " pod="tigera-operator/tigera-operator-7bc55997bb-rmkv5"
Jan 29 12:01:24.360970 kubelet[3458]: I0129 12:01:24.360976 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40bae078-ad08-44cf-b74d-f9fe6f1a03aa-var-lib-calico\") pod \"tigera-operator-7bc55997bb-rmkv5\" (UID: \"40bae078-ad08-44cf-b74d-f9fe6f1a03aa\") " pod="tigera-operator/tigera-operator-7bc55997bb-rmkv5"
Jan 29 12:01:24.460225 containerd[1818]: time="2025-01-29T12:01:24.460096517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmdfj,Uid:1bb49aa7-39a8-4521-8e84-039aea3880cb,Namespace:kube-system,Attempt:0,}"
Jan 29 12:01:24.559280 containerd[1818]: time="2025-01-29T12:01:24.559172146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:01:24.559508 containerd[1818]: time="2025-01-29T12:01:24.559249948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:01:24.561226 containerd[1818]: time="2025-01-29T12:01:24.561152493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:24.561613 containerd[1818]: time="2025-01-29T12:01:24.561535602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:24.625038 containerd[1818]: time="2025-01-29T12:01:24.624991094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmdfj,Uid:1bb49aa7-39a8-4521-8e84-039aea3880cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"22adb0ac9f84e1fe4d78543758588b7a16490cd233266446c0670f8a1f4d3ff3\""
Jan 29 12:01:24.629086 containerd[1818]: time="2025-01-29T12:01:24.629041989Z" level=info msg="CreateContainer within sandbox \"22adb0ac9f84e1fe4d78543758588b7a16490cd233266446c0670f8a1f4d3ff3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 12:01:24.650344 containerd[1818]: time="2025-01-29T12:01:24.650282589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-rmkv5,Uid:40bae078-ad08-44cf-b74d-f9fe6f1a03aa,Namespace:tigera-operator,Attempt:0,}"
Jan 29 12:01:24.696059 containerd[1818]: time="2025-01-29T12:01:24.696010964Z" level=info msg="CreateContainer within sandbox \"22adb0ac9f84e1fe4d78543758588b7a16490cd233266446c0670f8a1f4d3ff3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2e314dd90981a698945ecff5bc44fa4e2a056e1124d7f8118b61fd6a5737ff4f\""
Jan 29 12:01:24.696952 containerd[1818]: time="2025-01-29T12:01:24.696912285Z" level=info msg="StartContainer for \"2e314dd90981a698945ecff5bc44fa4e2a056e1124d7f8118b61fd6a5737ff4f\""
Jan 29 12:01:24.724479 containerd[1818]: time="2025-01-29T12:01:24.723608613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:01:24.724479 containerd[1818]: time="2025-01-29T12:01:24.723695415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:01:24.724479 containerd[1818]: time="2025-01-29T12:01:24.723710915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:24.724479 containerd[1818]: time="2025-01-29T12:01:24.723817318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:24.801181 containerd[1818]: time="2025-01-29T12:01:24.800674325Z" level=info msg="StartContainer for \"2e314dd90981a698945ecff5bc44fa4e2a056e1124d7f8118b61fd6a5737ff4f\" returns successfully"
Jan 29 12:01:24.824444 containerd[1818]: time="2025-01-29T12:01:24.824374282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-rmkv5,Uid:40bae078-ad08-44cf-b74d-f9fe6f1a03aa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"815f1667d487fa53b5dfd79cd703dc15a98067987e672df42cb9405e45061854\""
Jan 29 12:01:24.828394 containerd[1818]: time="2025-01-29T12:01:24.827984167Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 29 12:01:25.399587 systemd[1]: run-containerd-runc-k8s.io-22adb0ac9f84e1fe4d78543758588b7a16490cd233266446c0670f8a1f4d3ff3-runc.h9Lnlw.mount: Deactivated successfully.
Jan 29 12:01:25.768129 kubelet[3458]: I0129 12:01:25.767873 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vmdfj" podStartSLOduration=2.767847266 podStartE2EDuration="2.767847266s" podCreationTimestamp="2025-01-29 12:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:25.767577359 +0000 UTC m=+15.759893296" watchObservedRunningTime="2025-01-29 12:01:25.767847266 +0000 UTC m=+15.760163203"
Jan 29 12:01:26.792301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469733232.mount: Deactivated successfully.
Jan 29 12:01:27.545240 containerd[1818]: time="2025-01-29T12:01:27.545159655Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:27.546750 containerd[1818]: time="2025-01-29T12:01:27.546696091Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Jan 29 12:01:27.549651 containerd[1818]: time="2025-01-29T12:01:27.549588859Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:27.555255 containerd[1818]: time="2025-01-29T12:01:27.555198091Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:27.556161 containerd[1818]: time="2025-01-29T12:01:27.555994809Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.727950541s"
Jan 29 12:01:27.556161 containerd[1818]: time="2025-01-29T12:01:27.556037710Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 29 12:01:27.559258 containerd[1818]: time="2025-01-29T12:01:27.559227385Z" level=info msg="CreateContainer within sandbox \"815f1667d487fa53b5dfd79cd703dc15a98067987e672df42cb9405e45061854\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 29 12:01:27.599967 containerd[1818]: time="2025-01-29T12:01:27.599900842Z" level=info msg="CreateContainer within sandbox \"815f1667d487fa53b5dfd79cd703dc15a98067987e672df42cb9405e45061854\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3dc540acc53d88ba13da4162092db7c2af3cdf820cbdf849eb8457ca2db7d8f8\""
Jan 29 12:01:27.601207 containerd[1818]: time="2025-01-29T12:01:27.600767162Z" level=info msg="StartContainer for \"3dc540acc53d88ba13da4162092db7c2af3cdf820cbdf849eb8457ca2db7d8f8\""
Jan 29 12:01:27.667383 containerd[1818]: time="2025-01-29T12:01:27.667284126Z" level=info msg="StartContainer for \"3dc540acc53d88ba13da4162092db7c2af3cdf820cbdf849eb8457ca2db7d8f8\" returns successfully"
Jan 29 12:01:30.781878 kubelet[3458]: I0129 12:01:30.781795 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-rmkv5" podStartSLOduration=5.051605266 podStartE2EDuration="7.781760759s" podCreationTimestamp="2025-01-29 12:01:23 +0000 UTC" firstStartedPulling="2025-01-29 12:01:24.827133047 +0000 UTC m=+14.819448984" lastFinishedPulling="2025-01-29 12:01:27.55728854 +0000 UTC m=+17.549604477" observedRunningTime="2025-01-29 12:01:27.779987076 +0000 UTC m=+17.772303013" watchObservedRunningTime="2025-01-29 12:01:30.781760759 +0000 UTC m=+20.774076796"
Jan 29 12:01:32.034881 kubelet[3458]: I0129 12:01:32.034814 3458 topology_manager.go:215] "Topology Admit Handler" podUID="492a5f71-68f0-429d-a995-0ecfb42b8bd0" podNamespace="calico-system" podName="calico-typha-76cbb864cc-2rggj"
Jan 29 12:01:32.109492 kubelet[3458]: I0129 12:01:32.109445 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492a5f71-68f0-429d-a995-0ecfb42b8bd0-tigera-ca-bundle\") pod \"calico-typha-76cbb864cc-2rggj\" (UID: \"492a5f71-68f0-429d-a995-0ecfb42b8bd0\") " pod="calico-system/calico-typha-76cbb864cc-2rggj"
Jan 29 12:01:32.109748 kubelet[3458]: I0129 12:01:32.109722 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/492a5f71-68f0-429d-a995-0ecfb42b8bd0-typha-certs\") pod \"calico-typha-76cbb864cc-2rggj\" (UID: \"492a5f71-68f0-429d-a995-0ecfb42b8bd0\") " pod="calico-system/calico-typha-76cbb864cc-2rggj"
Jan 29 12:01:32.109860 kubelet[3458]: I0129 12:01:32.109761 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75z4n\" (UniqueName: \"kubernetes.io/projected/492a5f71-68f0-429d-a995-0ecfb42b8bd0-kube-api-access-75z4n\") pod \"calico-typha-76cbb864cc-2rggj\" (UID: \"492a5f71-68f0-429d-a995-0ecfb42b8bd0\") " pod="calico-system/calico-typha-76cbb864cc-2rggj"
Jan 29 12:01:32.280457 kubelet[3458]: I0129 12:01:32.280344 3458 topology_manager.go:215] "Topology Admit Handler" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" podNamespace="calico-system" podName="calico-node-5mwpl"
Jan 29 12:01:32.312160 kubelet[3458]: I0129 12:01:32.311582 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-xtables-lock\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312160 kubelet[3458]: I0129 12:01:32.311632 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-tigera-ca-bundle\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312160 kubelet[3458]: I0129 12:01:32.311665 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-policysync\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312160 kubelet[3458]: I0129 12:01:32.311694 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-var-run-calico\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312160 kubelet[3458]: I0129 12:01:32.311719 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-net-dir\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312521 kubelet[3458]: I0129 12:01:32.311743 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-flexvol-driver-host\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312521 kubelet[3458]: I0129 12:01:32.311768 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-node-certs\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312521 kubelet[3458]: I0129 12:01:32.311793 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-bin-dir\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312521 kubelet[3458]: I0129 12:01:32.311815 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-lib-modules\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312521 kubelet[3458]: I0129 12:01:32.311839 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-var-lib-calico\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312707 kubelet[3458]: I0129 12:01:32.311865 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-log-dir\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.312707 kubelet[3458]: I0129 12:01:32.311888 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vh4h\" (UniqueName: \"kubernetes.io/projected/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-kube-api-access-5vh4h\") pod \"calico-node-5mwpl\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " pod="calico-system/calico-node-5mwpl"
Jan 29 12:01:32.355458 containerd[1818]: time="2025-01-29T12:01:32.352811860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76cbb864cc-2rggj,Uid:492a5f71-68f0-429d-a995-0ecfb42b8bd0,Namespace:calico-system,Attempt:0,}"
Jan 29 12:01:32.424453 kubelet[3458]: E0129 12:01:32.422638 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.424453 kubelet[3458]: W0129 12:01:32.422674 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.424453 kubelet[3458]: E0129 12:01:32.424282 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.424453 kubelet[3458]: E0129 12:01:32.424392 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.424453 kubelet[3458]: W0129 12:01:32.424407 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.424453 kubelet[3458]: E0129 12:01:32.424453 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.426196 kubelet[3458]: E0129 12:01:32.426166 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.426196 kubelet[3458]: W0129 12:01:32.426189 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.426396 kubelet[3458]: E0129 12:01:32.426231 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.431362 kubelet[3458]: E0129 12:01:32.426488 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.431362 kubelet[3458]: W0129 12:01:32.426500 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.431362 kubelet[3458]: E0129 12:01:32.428609 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.431362 kubelet[3458]: E0129 12:01:32.428820 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.431362 kubelet[3458]: W0129 12:01:32.428830 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.431362 kubelet[3458]: E0129 12:01:32.428914 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.431362 kubelet[3458]: E0129 12:01:32.429054 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.431362 kubelet[3458]: W0129 12:01:32.429062 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.431362 kubelet[3458]: E0129 12:01:32.429144 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.431362 kubelet[3458]: E0129 12:01:32.429270 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.431816 kubelet[3458]: W0129 12:01:32.429278 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.431816 kubelet[3458]: E0129 12:01:32.429357 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.431816 kubelet[3458]: E0129 12:01:32.429497 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.431816 kubelet[3458]: W0129 12:01:32.429506 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.431816 kubelet[3458]: E0129 12:01:32.429592 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.431816 kubelet[3458]: E0129 12:01:32.429754 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.431816 kubelet[3458]: W0129 12:01:32.429764 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.431816 kubelet[3458]: E0129 12:01:32.429857 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.431816 kubelet[3458]: E0129 12:01:32.429973 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.431816 kubelet[3458]: W0129 12:01:32.429981 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.433741 kubelet[3458]: E0129 12:01:32.430081 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.433741 kubelet[3458]: E0129 12:01:32.430202 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.433741 kubelet[3458]: W0129 12:01:32.430211 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.433741 kubelet[3458]: E0129 12:01:32.430304 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.433741 kubelet[3458]: E0129 12:01:32.430450 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.433741 kubelet[3458]: W0129 12:01:32.430458 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.433741 kubelet[3458]: E0129 12:01:32.430548 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.433741 kubelet[3458]: E0129 12:01:32.430666 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.433741 kubelet[3458]: W0129 12:01:32.430673 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.433741 kubelet[3458]: E0129 12:01:32.430854 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.434201 kubelet[3458]: E0129 12:01:32.432917 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.434201 kubelet[3458]: W0129 12:01:32.432931 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.434201 kubelet[3458]: E0129 12:01:32.433244 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.434579 kubelet[3458]: E0129 12:01:32.434536 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.434579 kubelet[3458]: W0129 12:01:32.434553 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.434867 kubelet[3458]: E0129 12:01:32.434656 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.434867 kubelet[3458]: E0129 12:01:32.434816 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.434867 kubelet[3458]: W0129 12:01:32.434827 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.435232 kubelet[3458]: E0129 12:01:32.434912 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.435627 kubelet[3458]: E0129 12:01:32.435495 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.435627 kubelet[3458]: W0129 12:01:32.435509 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.436879 kubelet[3458]: E0129 12:01:32.436809 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.437455 kubelet[3458]: E0129 12:01:32.437019 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.437455 kubelet[3458]: W0129 12:01:32.437036 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.437455 kubelet[3458]: E0129 12:01:32.437215 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.437455 kubelet[3458]: E0129 12:01:32.437296 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.437455 kubelet[3458]: W0129 12:01:32.437306 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.438838 kubelet[3458]: E0129 12:01:32.438781 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.439104 kubelet[3458]: E0129 12:01:32.439059 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.439104 kubelet[3458]: W0129 12:01:32.439074 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.441443 kubelet[3458]: E0129 12:01:32.441214 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.446890 kubelet[3458]: E0129 12:01:32.446838 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.446890 kubelet[3458]: W0129 12:01:32.446863 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.447822 kubelet[3458]: E0129 12:01:32.447708 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.447822 kubelet[3458]: W0129 12:01:32.447732 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.448882 kubelet[3458]: E0129 12:01:32.448795 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.448882 kubelet[3458]: W0129 12:01:32.448810 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.449831 kubelet[3458]: E0129 12:01:32.449733 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.449831 kubelet[3458]: W0129 12:01:32.449747 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.451144 kubelet[3458]: E0129 12:01:32.450634 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.451144 kubelet[3458]: W0129 12:01:32.450652 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.452130 kubelet[3458]: E0129 12:01:32.451484 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.452130 kubelet[3458]: W0129 12:01:32.451500 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.452130 kubelet[3458]: E0129 12:01:32.451522 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.452130 kubelet[3458]: E0129 12:01:32.451571 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.453424 kubelet[3458]: E0129 12:01:32.453228 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.453424 kubelet[3458]: E0129 12:01:32.453272 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.453424 kubelet[3458]: E0129 12:01:32.453321 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.453424 kubelet[3458]: E0129 12:01:32.453357 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.456852 kubelet[3458]: E0129 12:01:32.456286 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.456852 kubelet[3458]: W0129 12:01:32.456311 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.456852 kubelet[3458]: E0129 12:01:32.456337 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.456852 kubelet[3458]: E0129 12:01:32.456729 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.456852 kubelet[3458]: W0129 12:01:32.456740 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.456852 kubelet[3458]: E0129 12:01:32.456755 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.460522 kubelet[3458]: E0129 12:01:32.459661 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.460522 kubelet[3458]: W0129 12:01:32.459678 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.460522 kubelet[3458]: E0129 12:01:32.459698 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.469332 kubelet[3458]: E0129 12:01:32.469308 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.470314 kubelet[3458]: W0129 12:01:32.470176 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.470314 kubelet[3458]: E0129 12:01:32.470240 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.474690 containerd[1818]: time="2025-01-29T12:01:32.470976611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:01:32.474690 containerd[1818]: time="2025-01-29T12:01:32.471065313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:01:32.474690 containerd[1818]: time="2025-01-29T12:01:32.471093413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:32.474690 containerd[1818]: time="2025-01-29T12:01:32.471497923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:32.513919 kubelet[3458]: E0129 12:01:32.513883 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.513919 kubelet[3458]: W0129 12:01:32.513907 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.513919 kubelet[3458]: E0129 12:01:32.513934 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.548771 kubelet[3458]: E0129 12:01:32.548730 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.548771 kubelet[3458]: W0129 12:01:32.548756 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.549001 kubelet[3458]: E0129 12:01:32.548786 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:01:32.550348 containerd[1818]: time="2025-01-29T12:01:32.550281424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76cbb864cc-2rggj,Uid:492a5f71-68f0-429d-a995-0ecfb42b8bd0,Namespace:calico-system,Attempt:0,} returns sandbox id \"309ea34dafb8f86906190b9076a3e2e5b89d37ad99c049c6a4ca05c9e733473a\""
Jan 29 12:01:32.554502 containerd[1818]: time="2025-01-29T12:01:32.554457525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 29 12:01:32.597145 containerd[1818]: time="2025-01-29T12:01:32.593078756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5mwpl,Uid:61c07a8a-f154-4bb5-9a31-5f0a71d7b709,Namespace:calico-system,Attempt:0,}"
Jan 29 12:01:32.597405 kubelet[3458]: I0129 12:01:32.595517 3458 topology_manager.go:215] "Topology Admit Handler" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe" podNamespace="calico-system" podName="csi-node-driver-qjbtv"
Jan 29 12:01:32.597405 kubelet[3458]: E0129 12:01:32.596590 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe"
Jan 29 12:01:32.603602 kubelet[3458]: E0129 12:01:32.603555 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:32.603602 kubelet[3458]: W0129 12:01:32.603594 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:32.603602 kubelet[3458]: E0129 12:01:32.603629 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 29 12:01:32.604001 kubelet[3458]: E0129 12:01:32.603968 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.604001 kubelet[3458]: W0129 12:01:32.603991 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.607095 kubelet[3458]: E0129 12:01:32.604010 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.607095 kubelet[3458]: E0129 12:01:32.604234 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.607095 kubelet[3458]: W0129 12:01:32.604245 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.607095 kubelet[3458]: E0129 12:01:32.604265 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.607095 kubelet[3458]: E0129 12:01:32.604612 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.607095 kubelet[3458]: W0129 12:01:32.604625 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.607095 kubelet[3458]: E0129 12:01:32.604641 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.607095 kubelet[3458]: E0129 12:01:32.604854 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.607095 kubelet[3458]: W0129 12:01:32.604865 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.607095 kubelet[3458]: E0129 12:01:32.604876 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.609982 kubelet[3458]: E0129 12:01:32.605065 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.609982 kubelet[3458]: W0129 12:01:32.605076 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.609982 kubelet[3458]: E0129 12:01:32.605088 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.609982 kubelet[3458]: E0129 12:01:32.605276 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.609982 kubelet[3458]: W0129 12:01:32.605285 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.609982 kubelet[3458]: E0129 12:01:32.605297 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.609982 kubelet[3458]: E0129 12:01:32.605532 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.609982 kubelet[3458]: W0129 12:01:32.605543 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.609982 kubelet[3458]: E0129 12:01:32.605555 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.609982 kubelet[3458]: E0129 12:01:32.605782 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.610363 kubelet[3458]: W0129 12:01:32.605793 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.610363 kubelet[3458]: E0129 12:01:32.605809 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.610363 kubelet[3458]: E0129 12:01:32.606052 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.610363 kubelet[3458]: W0129 12:01:32.606066 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.610363 kubelet[3458]: E0129 12:01:32.606078 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.610363 kubelet[3458]: E0129 12:01:32.606280 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.610363 kubelet[3458]: W0129 12:01:32.606291 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.610363 kubelet[3458]: E0129 12:01:32.606312 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.610363 kubelet[3458]: E0129 12:01:32.606575 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.610363 kubelet[3458]: W0129 12:01:32.606595 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.611691 kubelet[3458]: E0129 12:01:32.606612 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.611691 kubelet[3458]: E0129 12:01:32.606851 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.611691 kubelet[3458]: W0129 12:01:32.606862 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.611691 kubelet[3458]: E0129 12:01:32.606883 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.611691 kubelet[3458]: E0129 12:01:32.607096 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.611691 kubelet[3458]: W0129 12:01:32.607106 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.611691 kubelet[3458]: E0129 12:01:32.607132 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.611691 kubelet[3458]: E0129 12:01:32.607367 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.611691 kubelet[3458]: W0129 12:01:32.607379 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.611691 kubelet[3458]: E0129 12:01:32.607394 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.612091 kubelet[3458]: E0129 12:01:32.607628 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.612091 kubelet[3458]: W0129 12:01:32.607640 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.612091 kubelet[3458]: E0129 12:01:32.607659 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.612091 kubelet[3458]: E0129 12:01:32.607902 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.612091 kubelet[3458]: W0129 12:01:32.607914 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.612091 kubelet[3458]: E0129 12:01:32.607934 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.612091 kubelet[3458]: E0129 12:01:32.608477 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.612091 kubelet[3458]: W0129 12:01:32.608494 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.612091 kubelet[3458]: E0129 12:01:32.608510 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.612091 kubelet[3458]: E0129 12:01:32.608769 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.614747 kubelet[3458]: W0129 12:01:32.608780 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.614747 kubelet[3458]: E0129 12:01:32.608807 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.614747 kubelet[3458]: E0129 12:01:32.609056 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.614747 kubelet[3458]: W0129 12:01:32.609364 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.614747 kubelet[3458]: E0129 12:01:32.609382 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.615318 kubelet[3458]: E0129 12:01:32.615235 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.615318 kubelet[3458]: W0129 12:01:32.615288 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.615425 kubelet[3458]: E0129 12:01:32.615315 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.615425 kubelet[3458]: I0129 12:01:32.615356 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8235b86-2d3c-40e3-bcdb-97985970fdbe-kubelet-dir\") pod \"csi-node-driver-qjbtv\" (UID: \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\") " pod="calico-system/csi-node-driver-qjbtv" Jan 29 12:01:32.615726 kubelet[3458]: E0129 12:01:32.615696 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.615726 kubelet[3458]: W0129 12:01:32.615717 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.615846 kubelet[3458]: E0129 12:01:32.615735 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.615846 kubelet[3458]: I0129 12:01:32.615771 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b8235b86-2d3c-40e3-bcdb-97985970fdbe-varrun\") pod \"csi-node-driver-qjbtv\" (UID: \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\") " pod="calico-system/csi-node-driver-qjbtv" Jan 29 12:01:32.616644 kubelet[3458]: E0129 12:01:32.616450 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.616644 kubelet[3458]: W0129 12:01:32.616476 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.616644 kubelet[3458]: E0129 12:01:32.616503 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.616644 kubelet[3458]: I0129 12:01:32.616530 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b8235b86-2d3c-40e3-bcdb-97985970fdbe-socket-dir\") pod \"csi-node-driver-qjbtv\" (UID: \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\") " pod="calico-system/csi-node-driver-qjbtv" Jan 29 12:01:32.616863 kubelet[3458]: E0129 12:01:32.616817 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.616863 kubelet[3458]: W0129 12:01:32.616830 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.616863 kubelet[3458]: E0129 12:01:32.616847 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.618455 kubelet[3458]: E0129 12:01:32.617094 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.618455 kubelet[3458]: W0129 12:01:32.617108 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.618455 kubelet[3458]: E0129 12:01:32.617135 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.618455 kubelet[3458]: E0129 12:01:32.618264 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.618455 kubelet[3458]: W0129 12:01:32.618278 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.618455 kubelet[3458]: E0129 12:01:32.618301 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.619795 kubelet[3458]: E0129 12:01:32.618913 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.619795 kubelet[3458]: W0129 12:01:32.618929 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.619795 kubelet[3458]: E0129 12:01:32.619013 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.619795 kubelet[3458]: E0129 12:01:32.619200 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.619795 kubelet[3458]: W0129 12:01:32.619209 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.619795 kubelet[3458]: E0129 12:01:32.619274 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.619795 kubelet[3458]: I0129 12:01:32.619298 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b8235b86-2d3c-40e3-bcdb-97985970fdbe-registration-dir\") pod \"csi-node-driver-qjbtv\" (UID: \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\") " pod="calico-system/csi-node-driver-qjbtv" Jan 29 12:01:32.619795 kubelet[3458]: E0129 12:01:32.619513 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.619795 kubelet[3458]: W0129 12:01:32.619541 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.622376 kubelet[3458]: E0129 12:01:32.619557 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.622376 kubelet[3458]: E0129 12:01:32.619774 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.622376 kubelet[3458]: W0129 12:01:32.619784 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.622376 kubelet[3458]: E0129 12:01:32.619807 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.622376 kubelet[3458]: E0129 12:01:32.620038 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.622376 kubelet[3458]: W0129 12:01:32.620050 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.622376 kubelet[3458]: E0129 12:01:32.620066 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.622376 kubelet[3458]: I0129 12:01:32.620093 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wz6b\" (UniqueName: \"kubernetes.io/projected/b8235b86-2d3c-40e3-bcdb-97985970fdbe-kube-api-access-6wz6b\") pod \"csi-node-driver-qjbtv\" (UID: \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\") " pod="calico-system/csi-node-driver-qjbtv" Jan 29 12:01:32.622855 kubelet[3458]: E0129 12:01:32.622582 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.622855 kubelet[3458]: W0129 12:01:32.622599 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.622855 kubelet[3458]: E0129 12:01:32.622621 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.623793 kubelet[3458]: E0129 12:01:32.623292 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.623793 kubelet[3458]: W0129 12:01:32.623309 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.623793 kubelet[3458]: E0129 12:01:32.623392 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:01:32.623793 kubelet[3458]: E0129 12:01:32.623757 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.623793 kubelet[3458]: W0129 12:01:32.623770 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.623793 kubelet[3458]: E0129 12:01:32.623786 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.624274 kubelet[3458]: E0129 12:01:32.624250 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.624274 kubelet[3458]: W0129 12:01:32.624266 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.624375 kubelet[3458]: E0129 12:01:32.624283 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.650808 containerd[1818]: time="2025-01-29T12:01:32.650500141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:32.650808 containerd[1818]: time="2025-01-29T12:01:32.650561343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:32.650808 containerd[1818]: time="2025-01-29T12:01:32.650601844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:32.650808 containerd[1818]: time="2025-01-29T12:01:32.650705746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:32.692686 containerd[1818]: time="2025-01-29T12:01:32.692631958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5mwpl,Uid:61c07a8a-f154-4bb5-9a31-5f0a71d7b709,Namespace:calico-system,Attempt:0,} returns sandbox id \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\"" Jan 29 12:01:32.721027 kubelet[3458]: E0129 12:01:32.720900 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.721027 kubelet[3458]: W0129 12:01:32.720934 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.721027 kubelet[3458]: E0129 12:01:32.720961 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:32.721508 kubelet[3458]: E0129 12:01:32.721299 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:32.721508 kubelet[3458]: W0129 12:01:32.721316 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:32.721508 kubelet[3458]: E0129 12:01:32.721368 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 12:01:33.670429 kubelet[3458]: E0129 12:01:33.670365 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe"
Jan 29 12:01:34.239764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545341507.mount: Deactivated successfully.
Jan 29 12:01:35.433455 containerd[1818]: time="2025-01-29T12:01:35.433371577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:35.436676 containerd[1818]: time="2025-01-29T12:01:35.436615655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 29 12:01:35.441775 containerd[1818]: time="2025-01-29T12:01:35.440440447Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:35.448595 containerd[1818]: time="2025-01-29T12:01:35.448544243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:35.450531 containerd[1818]: time="2025-01-29T12:01:35.450459789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.89459273s"
Jan 29 12:01:35.450731 containerd[1818]: time="2025-01-29T12:01:35.450710495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 29 12:01:35.454075 containerd[1818]: time="2025-01-29T12:01:35.453963374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 12:01:35.471481 containerd[1818]: time="2025-01-29T12:01:35.470182665Z" level=info msg="CreateContainer within sandbox \"309ea34dafb8f86906190b9076a3e2e5b89d37ad99c049c6a4ca05c9e733473a\" for
container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 12:01:35.540863 containerd[1818]: time="2025-01-29T12:01:35.540802269Z" level=info msg="CreateContainer within sandbox \"309ea34dafb8f86906190b9076a3e2e5b89d37ad99c049c6a4ca05c9e733473a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"07182d4f6be563089d2f783377db747de25b72a725808b49c232d88b2f1f28b7\""
Jan 29 12:01:35.542932 containerd[1818]: time="2025-01-29T12:01:35.542790316Z" level=info msg="StartContainer for \"07182d4f6be563089d2f783377db747de25b72a725808b49c232d88b2f1f28b7\""
Jan 29 12:01:35.644364 containerd[1818]: time="2025-01-29T12:01:35.643936757Z" level=info msg="StartContainer for \"07182d4f6be563089d2f783377db747de25b72a725808b49c232d88b2f1f28b7\" returns successfully"
Jan 29 12:01:35.671706 kubelet[3458]: E0129 12:01:35.670223 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe"
Jan 29 12:01:35.836450 kubelet[3458]: E0129 12:01:35.836267 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:35.836450 kubelet[3458]: W0129 12:01:35.836301 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:35.836450 kubelet[3458]: E0129 12:01:35.836334 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 29 12:01:35.850333 kubelet[3458]: E0129 12:01:35.850318 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:35.850333 kubelet[3458]: W0129 12:01:35.850332 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:35.850613 kubelet[3458]: E0129 12:01:35.850352 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:01:35.850670 kubelet[3458]: E0129 12:01:35.850626 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:35.850670 kubelet[3458]: W0129 12:01:35.850638 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:35.850905 kubelet[3458]: E0129 12:01:35.850882 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:01:35.850905 kubelet[3458]: W0129 12:01:35.850900 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:01:35.851186 kubelet[3458]: E0129 12:01:35.850882 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 12:01:35.851327 kubelet[3458]: E0129 12:01:35.851312 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:35.851800 kubelet[3458]: E0129 12:01:35.851336 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:35.851800 kubelet[3458]: W0129 12:01:35.851548 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:35.851800 kubelet[3458]: E0129 12:01:35.851564 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:35.852168 kubelet[3458]: E0129 12:01:35.852100 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:35.852168 kubelet[3458]: W0129 12:01:35.852117 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:35.852168 kubelet[3458]: E0129 12:01:35.852136 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:35.852439 kubelet[3458]: E0129 12:01:35.852390 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:35.852439 kubelet[3458]: W0129 12:01:35.852401 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:35.853842 kubelet[3458]: E0129 12:01:35.852472 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:35.854009 kubelet[3458]: E0129 12:01:35.853993 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:35.854090 kubelet[3458]: W0129 12:01:35.854008 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:35.854137 kubelet[3458]: E0129 12:01:35.854096 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:35.854534 kubelet[3458]: E0129 12:01:35.854517 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:35.854534 kubelet[3458]: W0129 12:01:35.854530 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:35.855388 kubelet[3458]: E0129 12:01:35.854679 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:35.855388 kubelet[3458]: E0129 12:01:35.854910 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:35.855388 kubelet[3458]: W0129 12:01:35.854921 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:35.855388 kubelet[3458]: E0129 12:01:35.854967 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:35.855388 kubelet[3458]: E0129 12:01:35.855208 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:35.855388 kubelet[3458]: W0129 12:01:35.855218 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:35.855388 kubelet[3458]: E0129 12:01:35.855246 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.783833 kubelet[3458]: I0129 12:01:36.783801 3458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 12:01:36.850359 kubelet[3458]: E0129 12:01:36.850056 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.850359 kubelet[3458]: W0129 12:01:36.850087 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.850359 kubelet[3458]: E0129 12:01:36.850119 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.850867 kubelet[3458]: E0129 12:01:36.850638 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.851477 kubelet[3458]: W0129 12:01:36.850967 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.851477 kubelet[3458]: E0129 12:01:36.850995 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.851477 kubelet[3458]: E0129 12:01:36.851363 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.851477 kubelet[3458]: W0129 12:01:36.851388 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.851477 kubelet[3458]: E0129 12:01:36.851406 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.852327 kubelet[3458]: E0129 12:01:36.852149 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.852327 kubelet[3458]: W0129 12:01:36.852164 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.852327 kubelet[3458]: E0129 12:01:36.852177 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.853066 kubelet[3458]: E0129 12:01:36.852746 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.853151 kubelet[3458]: W0129 12:01:36.853130 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.853234 kubelet[3458]: E0129 12:01:36.853153 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.853689 kubelet[3458]: E0129 12:01:36.853466 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.853689 kubelet[3458]: W0129 12:01:36.853482 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.853689 kubelet[3458]: E0129 12:01:36.853498 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.853917 kubelet[3458]: E0129 12:01:36.853900 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.853917 kubelet[3458]: W0129 12:01:36.853914 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.854019 kubelet[3458]: E0129 12:01:36.853928 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.854375 kubelet[3458]: E0129 12:01:36.854293 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.854375 kubelet[3458]: W0129 12:01:36.854307 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.854375 kubelet[3458]: E0129 12:01:36.854320 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.854856 kubelet[3458]: E0129 12:01:36.854834 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.854856 kubelet[3458]: W0129 12:01:36.854850 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.854975 kubelet[3458]: E0129 12:01:36.854865 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.855394 kubelet[3458]: E0129 12:01:36.855320 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.855394 kubelet[3458]: W0129 12:01:36.855334 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.855515 kubelet[3458]: E0129 12:01:36.855451 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.856023 kubelet[3458]: E0129 12:01:36.856004 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.856023 kubelet[3458]: W0129 12:01:36.856020 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.856242 kubelet[3458]: E0129 12:01:36.856035 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.856302 kubelet[3458]: E0129 12:01:36.856247 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.856302 kubelet[3458]: W0129 12:01:36.856258 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.856302 kubelet[3458]: E0129 12:01:36.856272 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.856617 kubelet[3458]: E0129 12:01:36.856588 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.856617 kubelet[3458]: W0129 12:01:36.856601 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.856617 kubelet[3458]: E0129 12:01:36.856614 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.857012 kubelet[3458]: E0129 12:01:36.856996 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.857012 kubelet[3458]: W0129 12:01:36.857009 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.857134 kubelet[3458]: E0129 12:01:36.857023 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.857455 kubelet[3458]: E0129 12:01:36.857441 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.857536 kubelet[3458]: W0129 12:01:36.857457 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.857536 kubelet[3458]: E0129 12:01:36.857471 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.858058 kubelet[3458]: E0129 12:01:36.858037 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.858058 kubelet[3458]: W0129 12:01:36.858053 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.858183 kubelet[3458]: E0129 12:01:36.858067 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.858800 kubelet[3458]: E0129 12:01:36.858783 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.858800 kubelet[3458]: W0129 12:01:36.858799 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.858909 kubelet[3458]: E0129 12:01:36.858851 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.859205 kubelet[3458]: E0129 12:01:36.859182 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.859205 kubelet[3458]: W0129 12:01:36.859197 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.859451 kubelet[3458]: E0129 12:01:36.859326 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.859780 kubelet[3458]: E0129 12:01:36.859763 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.859851 kubelet[3458]: W0129 12:01:36.859782 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.859892 kubelet[3458]: E0129 12:01:36.859856 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.860314 kubelet[3458]: E0129 12:01:36.860295 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.860314 kubelet[3458]: W0129 12:01:36.860313 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.860471 kubelet[3458]: E0129 12:01:36.860397 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.860723 kubelet[3458]: E0129 12:01:36.860609 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.860723 kubelet[3458]: W0129 12:01:36.860623 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.860723 kubelet[3458]: E0129 12:01:36.860705 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.861055 kubelet[3458]: E0129 12:01:36.860881 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.861055 kubelet[3458]: W0129 12:01:36.860893 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.861055 kubelet[3458]: E0129 12:01:36.860910 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.861205 kubelet[3458]: E0129 12:01:36.861162 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.861205 kubelet[3458]: W0129 12:01:36.861174 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.861205 kubelet[3458]: E0129 12:01:36.861197 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.861538 kubelet[3458]: E0129 12:01:36.861405 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.861538 kubelet[3458]: W0129 12:01:36.861444 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.861538 kubelet[3458]: E0129 12:01:36.861464 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.861880 kubelet[3458]: E0129 12:01:36.861751 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.861880 kubelet[3458]: W0129 12:01:36.861765 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.862590 kubelet[3458]: E0129 12:01:36.862556 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.862884 kubelet[3458]: E0129 12:01:36.862869 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.862884 kubelet[3458]: W0129 12:01:36.862881 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.863009 kubelet[3458]: E0129 12:01:36.862970 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.863282 kubelet[3458]: E0129 12:01:36.863164 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.863282 kubelet[3458]: W0129 12:01:36.863177 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.863282 kubelet[3458]: E0129 12:01:36.863210 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.863531 kubelet[3458]: E0129 12:01:36.863497 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.863531 kubelet[3458]: W0129 12:01:36.863513 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.863649 kubelet[3458]: E0129 12:01:36.863534 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.863776 kubelet[3458]: E0129 12:01:36.863765 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.863827 kubelet[3458]: W0129 12:01:36.863778 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.863827 kubelet[3458]: E0129 12:01:36.863803 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.864239 kubelet[3458]: E0129 12:01:36.864064 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.864239 kubelet[3458]: W0129 12:01:36.864082 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.864239 kubelet[3458]: E0129 12:01:36.864104 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.864570 kubelet[3458]: E0129 12:01:36.864552 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.864570 kubelet[3458]: W0129 12:01:36.864566 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.864807 kubelet[3458]: E0129 12:01:36.864585 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.864984 kubelet[3458]: E0129 12:01:36.864962 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.865070 kubelet[3458]: W0129 12:01:36.864985 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.865070 kubelet[3458]: E0129 12:01:36.865004 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.865293 kubelet[3458]: E0129 12:01:36.865279 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:01:36.865293 kubelet[3458]: W0129 12:01:36.865291 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:01:36.865379 kubelet[3458]: E0129 12:01:36.865312 3458 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 29 12:01:36.894408 containerd[1818]: time="2025-01-29T12:01:36.894341595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:36.897166 containerd[1818]: time="2025-01-29T12:01:36.896902355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 29 12:01:36.903481 containerd[1818]: time="2025-01-29T12:01:36.901607467Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:36.908566 containerd[1818]: time="2025-01-29T12:01:36.908505031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:36.909415 containerd[1818]: time="2025-01-29T12:01:36.909366151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.455163172s"
Jan 29 12:01:36.909415 containerd[1818]: time="2025-01-29T12:01:36.909410352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 29 12:01:36.913092 containerd[1818]: time="2025-01-29T12:01:36.912888835Z" level=info msg="CreateContainer within sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 12:01:36.954842 containerd[1818]: time="2025-01-29T12:01:36.954783628Z" level=info msg="CreateContainer within sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"92148754b6454d2a7cbcbe036f213a88b55be6dc1470883cf183f6544a8160c2\""
Jan 29 12:01:36.955650 containerd[1818]: time="2025-01-29T12:01:36.955604648Z" level=info msg="StartContainer for \"92148754b6454d2a7cbcbe036f213a88b55be6dc1470883cf183f6544a8160c2\""
Jan 29 12:01:37.035011 containerd[1818]: time="2025-01-29T12:01:37.034347015Z" level=info msg="StartContainer for \"92148754b6454d2a7cbcbe036f213a88b55be6dc1470883cf183f6544a8160c2\" returns successfully"
Jan 29 12:01:37.105173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92148754b6454d2a7cbcbe036f213a88b55be6dc1470883cf183f6544a8160c2-rootfs.mount: Deactivated successfully.
Jan 29 12:01:37.669937 kubelet[3458]: E0129 12:01:37.669858 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe"
Jan 29 12:01:38.447597 kubelet[3458]: I0129 12:01:37.811403 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76cbb864cc-2rggj" podStartSLOduration=3.912509111 podStartE2EDuration="6.811375344s" podCreationTimestamp="2025-01-29 12:01:31 +0000 UTC" firstStartedPulling="2025-01-29 12:01:32.553681306 +0000 UTC m=+22.545997343" lastFinishedPulling="2025-01-29 12:01:35.452547639 +0000 UTC m=+25.444863576" observedRunningTime="2025-01-29 12:01:35.804347826 +0000 UTC m=+25.796663863" watchObservedRunningTime="2025-01-29 12:01:37.811375344 +0000 UTC m=+27.803691381"
Jan 29 12:01:38.468041
containerd[1818]: time="2025-01-29T12:01:38.467955715Z" level=info msg="shim disconnected" id=92148754b6454d2a7cbcbe036f213a88b55be6dc1470883cf183f6544a8160c2 namespace=k8s.io Jan 29 12:01:38.468041 containerd[1818]: time="2025-01-29T12:01:38.468041418Z" level=warning msg="cleaning up after shim disconnected" id=92148754b6454d2a7cbcbe036f213a88b55be6dc1470883cf183f6544a8160c2 namespace=k8s.io Jan 29 12:01:38.468041 containerd[1818]: time="2025-01-29T12:01:38.468052418Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:01:38.795411 containerd[1818]: time="2025-01-29T12:01:38.795115075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:01:39.670030 kubelet[3458]: E0129 12:01:39.669938 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe" Jan 29 12:01:41.670105 kubelet[3458]: E0129 12:01:41.670024 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe" Jan 29 12:01:43.670939 kubelet[3458]: E0129 12:01:43.670717 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe" Jan 29 12:01:44.164323 containerd[1818]: time="2025-01-29T12:01:44.164260012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:44.166839 containerd[1818]: time="2025-01-29T12:01:44.166757872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 12:01:44.170982 containerd[1818]: time="2025-01-29T12:01:44.170892470Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:44.175850 containerd[1818]: time="2025-01-29T12:01:44.175770685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:44.177018 containerd[1818]: time="2025-01-29T12:01:44.176536504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.381369627s" Jan 29 12:01:44.177018 containerd[1818]: time="2025-01-29T12:01:44.176582805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 12:01:44.180704 containerd[1818]: time="2025-01-29T12:01:44.180524998Z" level=info msg="CreateContainer within sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:01:44.219107 containerd[1818]: time="2025-01-29T12:01:44.218756505Z" level=info msg="CreateContainer within sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"f29a5f02faf2b332214db3a8fbc81c7d2befcb51a7188274e72acf898d2e6005\"" Jan 29 12:01:44.220783 containerd[1818]: time="2025-01-29T12:01:44.220752952Z" level=info msg="StartContainer for \"f29a5f02faf2b332214db3a8fbc81c7d2befcb51a7188274e72acf898d2e6005\"" Jan 29 12:01:44.294024 containerd[1818]: time="2025-01-29T12:01:44.293975389Z" level=info msg="StartContainer for \"f29a5f02faf2b332214db3a8fbc81c7d2befcb51a7188274e72acf898d2e6005\" returns successfully" Jan 29 12:01:45.671535 kubelet[3458]: E0129 12:01:45.670638 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe" Jan 29 12:01:45.862471 containerd[1818]: time="2025-01-29T12:01:45.862172565Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Jan 29 12:01:45.888568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f29a5f02faf2b332214db3a8fbc81c7d2befcb51a7188274e72acf898d2e6005-rootfs.mount: Deactivated successfully. 
Jan 29 12:01:45.895035 kubelet[3458]: I0129 12:01:45.894968 3458 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:01:45.956487 kubelet[3458]: I0129 12:01:45.956304 3458 topology_manager.go:215] "Topology Admit Handler" podUID="8277e1b7-9bd3-41a3-9974-0a547b3a9790" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r59h8" Jan 29 12:01:45.987467 kubelet[3458]: I0129 12:01:45.986516 3458 topology_manager.go:215] "Topology Admit Handler" podUID="b50dd55e-6299-4267-acbd-6f981eec9b80" podNamespace="calico-system" podName="calico-kube-controllers-6564b8cd7d-xflv7" Jan 29 12:01:45.988913 kubelet[3458]: I0129 12:01:45.988873 3458 topology_manager.go:215] "Topology Admit Handler" podUID="2c4f2484-ef71-4219-a544-c54173b13527" podNamespace="calico-apiserver" podName="calico-apiserver-8cd77c95d-mx8dw" Jan 29 12:01:45.993074 kubelet[3458]: I0129 12:01:45.993039 3458 topology_manager.go:215] "Topology Admit Handler" podUID="b50e5645-b858-440a-ac13-91ab3ef24687" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wqscz" Jan 29 12:01:45.993832 kubelet[3458]: I0129 12:01:45.993251 3458 topology_manager.go:215] "Topology Admit Handler" podUID="74643b96-e115-4d67-8115-6c9ce09d0502" podNamespace="calico-apiserver" podName="calico-apiserver-8cd77c95d-fll92" Jan 29 12:01:46.126624 kubelet[3458]: I0129 12:01:46.126564 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2c4f2484-ef71-4219-a544-c54173b13527-calico-apiserver-certs\") pod \"calico-apiserver-8cd77c95d-mx8dw\" (UID: \"2c4f2484-ef71-4219-a544-c54173b13527\") " pod="calico-apiserver/calico-apiserver-8cd77c95d-mx8dw" Jan 29 12:01:46.126624 kubelet[3458]: I0129 12:01:46.126619 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rd6p\" (UniqueName: 
\"kubernetes.io/projected/b50dd55e-6299-4267-acbd-6f981eec9b80-kube-api-access-2rd6p\") pod \"calico-kube-controllers-6564b8cd7d-xflv7\" (UID: \"b50dd55e-6299-4267-acbd-6f981eec9b80\") " pod="calico-system/calico-kube-controllers-6564b8cd7d-xflv7" Jan 29 12:01:46.126943 kubelet[3458]: I0129 12:01:46.126657 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2cbf\" (UniqueName: \"kubernetes.io/projected/74643b96-e115-4d67-8115-6c9ce09d0502-kube-api-access-j2cbf\") pod \"calico-apiserver-8cd77c95d-fll92\" (UID: \"74643b96-e115-4d67-8115-6c9ce09d0502\") " pod="calico-apiserver/calico-apiserver-8cd77c95d-fll92" Jan 29 12:01:46.126943 kubelet[3458]: I0129 12:01:46.126691 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b50e5645-b858-440a-ac13-91ab3ef24687-config-volume\") pod \"coredns-7db6d8ff4d-wqscz\" (UID: \"b50e5645-b858-440a-ac13-91ab3ef24687\") " pod="kube-system/coredns-7db6d8ff4d-wqscz" Jan 29 12:01:46.126943 kubelet[3458]: I0129 12:01:46.126714 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8277e1b7-9bd3-41a3-9974-0a547b3a9790-config-volume\") pod \"coredns-7db6d8ff4d-r59h8\" (UID: \"8277e1b7-9bd3-41a3-9974-0a547b3a9790\") " pod="kube-system/coredns-7db6d8ff4d-r59h8" Jan 29 12:01:46.126943 kubelet[3458]: I0129 12:01:46.126739 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrlr\" (UniqueName: \"kubernetes.io/projected/2c4f2484-ef71-4219-a544-c54173b13527-kube-api-access-tvrlr\") pod \"calico-apiserver-8cd77c95d-mx8dw\" (UID: \"2c4f2484-ef71-4219-a544-c54173b13527\") " pod="calico-apiserver/calico-apiserver-8cd77c95d-mx8dw" Jan 29 12:01:46.126943 kubelet[3458]: I0129 12:01:46.126762 3458 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b50dd55e-6299-4267-acbd-6f981eec9b80-tigera-ca-bundle\") pod \"calico-kube-controllers-6564b8cd7d-xflv7\" (UID: \"b50dd55e-6299-4267-acbd-6f981eec9b80\") " pod="calico-system/calico-kube-controllers-6564b8cd7d-xflv7" Jan 29 12:01:46.127123 kubelet[3458]: I0129 12:01:46.126790 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfvh6\" (UniqueName: \"kubernetes.io/projected/b50e5645-b858-440a-ac13-91ab3ef24687-kube-api-access-hfvh6\") pod \"coredns-7db6d8ff4d-wqscz\" (UID: \"b50e5645-b858-440a-ac13-91ab3ef24687\") " pod="kube-system/coredns-7db6d8ff4d-wqscz" Jan 29 12:01:46.127123 kubelet[3458]: I0129 12:01:46.126812 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74643b96-e115-4d67-8115-6c9ce09d0502-calico-apiserver-certs\") pod \"calico-apiserver-8cd77c95d-fll92\" (UID: \"74643b96-e115-4d67-8115-6c9ce09d0502\") " pod="calico-apiserver/calico-apiserver-8cd77c95d-fll92" Jan 29 12:01:46.127123 kubelet[3458]: I0129 12:01:46.126840 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzckr\" (UniqueName: \"kubernetes.io/projected/8277e1b7-9bd3-41a3-9974-0a547b3a9790-kube-api-access-rzckr\") pod \"coredns-7db6d8ff4d-r59h8\" (UID: \"8277e1b7-9bd3-41a3-9974-0a547b3a9790\") " pod="kube-system/coredns-7db6d8ff4d-r59h8" Jan 29 12:01:46.302587 containerd[1818]: time="2025-01-29T12:01:46.301096432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6564b8cd7d-xflv7,Uid:b50dd55e-6299-4267-acbd-6f981eec9b80,Namespace:calico-system,Attempt:0,}" Jan 29 12:01:46.311129 containerd[1818]: time="2025-01-29T12:01:46.311073854Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-8cd77c95d-mx8dw,Uid:2c4f2484-ef71-4219-a544-c54173b13527,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:01:47.505574 containerd[1818]: time="2025-01-29T12:01:47.504884818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd77c95d-fll92,Uid:74643b96-e115-4d67-8115-6c9ce09d0502,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:01:47.505574 containerd[1818]: time="2025-01-29T12:01:47.504974420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r59h8,Uid:8277e1b7-9bd3-41a3-9974-0a547b3a9790,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:47.505574 containerd[1818]: time="2025-01-29T12:01:47.505333728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wqscz,Uid:b50e5645-b858-440a-ac13-91ab3ef24687,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:47.543611 containerd[1818]: time="2025-01-29T12:01:47.543526178Z" level=info msg="shim disconnected" id=f29a5f02faf2b332214db3a8fbc81c7d2befcb51a7188274e72acf898d2e6005 namespace=k8s.io Jan 29 12:01:47.543611 containerd[1818]: time="2025-01-29T12:01:47.543602579Z" level=warning msg="cleaning up after shim disconnected" id=f29a5f02faf2b332214db3a8fbc81c7d2befcb51a7188274e72acf898d2e6005 namespace=k8s.io Jan 29 12:01:47.543611 containerd[1818]: time="2025-01-29T12:01:47.543614980Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:01:47.672887 containerd[1818]: time="2025-01-29T12:01:47.672835655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjbtv,Uid:b8235b86-2d3c-40e3-bcdb-97985970fdbe,Namespace:calico-system,Attempt:0,}" Jan 29 12:01:47.829147 containerd[1818]: time="2025-01-29T12:01:47.828715524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:01:47.973197 containerd[1818]: time="2025-01-29T12:01:47.972601425Z" level=error msg="Failed to destroy network for sandbox 
\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:47.978412 containerd[1818]: time="2025-01-29T12:01:47.977873543Z" level=error msg="encountered an error cleaning up failed sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:47.978868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f-shm.mount: Deactivated successfully. Jan 29 12:01:47.979800 containerd[1818]: time="2025-01-29T12:01:47.979566280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6564b8cd7d-xflv7,Uid:b50dd55e-6299-4267-acbd-6f981eec9b80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:47.981546 kubelet[3458]: E0129 12:01:47.980260 3458 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:47.981546 kubelet[3458]: E0129 12:01:47.980351 3458 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6564b8cd7d-xflv7" Jan 29 12:01:47.981546 kubelet[3458]: E0129 12:01:47.980379 3458 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6564b8cd7d-xflv7" Jan 29 12:01:47.982129 kubelet[3458]: E0129 12:01:47.980477 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6564b8cd7d-xflv7_calico-system(b50dd55e-6299-4267-acbd-6f981eec9b80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6564b8cd7d-xflv7_calico-system(b50dd55e-6299-4267-acbd-6f981eec9b80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6564b8cd7d-xflv7" podUID="b50dd55e-6299-4267-acbd-6f981eec9b80" Jan 29 12:01:47.996451 containerd[1818]: time="2025-01-29T12:01:47.995811542Z" level=error msg="Failed to destroy network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:47.998524 containerd[1818]: time="2025-01-29T12:01:47.997512880Z" level=error msg="encountered an error cleaning up failed sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:47.998524 containerd[1818]: time="2025-01-29T12:01:47.997602982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd77c95d-fll92,Uid:74643b96-e115-4d67-8115-6c9ce09d0502,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:47.999737 kubelet[3458]: E0129 12:01:47.999693 3458 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.001187 kubelet[3458]: E0129 12:01:48.001101 3458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd77c95d-fll92" Jan 29 12:01:48.001187 kubelet[3458]: E0129 12:01:48.001156 3458 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd77c95d-fll92" Jan 29 12:01:48.003199 kubelet[3458]: E0129 12:01:48.002070 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8cd77c95d-fll92_calico-apiserver(74643b96-e115-4d67-8115-6c9ce09d0502)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8cd77c95d-fll92_calico-apiserver(74643b96-e115-4d67-8115-6c9ce09d0502)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd77c95d-fll92" podUID="74643b96-e115-4d67-8115-6c9ce09d0502" Jan 29 12:01:48.002610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864-shm.mount: Deactivated successfully. 
Jan 29 12:01:48.005595 containerd[1818]: time="2025-01-29T12:01:48.005552959Z" level=error msg="Failed to destroy network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.010887 containerd[1818]: time="2025-01-29T12:01:48.010723774Z" level=error msg="encountered an error cleaning up failed sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.010887 containerd[1818]: time="2025-01-29T12:01:48.010811676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wqscz,Uid:b50e5645-b858-440a-ac13-91ab3ef24687,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.012919 kubelet[3458]: E0129 12:01:48.012818 3458 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.012930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c-shm.mount: 
Deactivated successfully. Jan 29 12:01:48.014427 kubelet[3458]: E0129 12:01:48.012995 3458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wqscz" Jan 29 12:01:48.014427 kubelet[3458]: E0129 12:01:48.013391 3458 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wqscz" Jan 29 12:01:48.014427 kubelet[3458]: E0129 12:01:48.013473 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wqscz_kube-system(b50e5645-b858-440a-ac13-91ab3ef24687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wqscz_kube-system(b50e5645-b858-440a-ac13-91ab3ef24687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wqscz" podUID="b50e5645-b858-440a-ac13-91ab3ef24687" Jan 29 12:01:48.037445 containerd[1818]: time="2025-01-29T12:01:48.036624050Z" level=error msg="Failed to destroy network for sandbox 
\"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.037445 containerd[1818]: time="2025-01-29T12:01:48.037092060Z" level=error msg="encountered an error cleaning up failed sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.037445 containerd[1818]: time="2025-01-29T12:01:48.037173862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r59h8,Uid:8277e1b7-9bd3-41a3-9974-0a547b3a9790,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.041380 kubelet[3458]: E0129 12:01:48.039819 3458 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.041380 kubelet[3458]: E0129 12:01:48.039935 3458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r59h8" Jan 29 12:01:48.041380 kubelet[3458]: E0129 12:01:48.039987 3458 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r59h8" Jan 29 12:01:48.041671 kubelet[3458]: E0129 12:01:48.040128 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r59h8_kube-system(8277e1b7-9bd3-41a3-9974-0a547b3a9790)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r59h8_kube-system(8277e1b7-9bd3-41a3-9974-0a547b3a9790)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r59h8" podUID="8277e1b7-9bd3-41a3-9974-0a547b3a9790" Jan 29 12:01:48.044322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd-shm.mount: Deactivated successfully. 
Jan 29 12:01:48.053336 containerd[1818]: time="2025-01-29T12:01:48.053281721Z" level=error msg="Failed to destroy network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.053911 containerd[1818]: time="2025-01-29T12:01:48.053870334Z" level=error msg="encountered an error cleaning up failed sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.054127 containerd[1818]: time="2025-01-29T12:01:48.054067738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjbtv,Uid:b8235b86-2d3c-40e3-bcdb-97985970fdbe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.055101 kubelet[3458]: E0129 12:01:48.054539 3458 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.055101 kubelet[3458]: E0129 12:01:48.054631 3458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjbtv" Jan 29 12:01:48.055101 kubelet[3458]: E0129 12:01:48.054664 3458 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjbtv" Jan 29 12:01:48.055311 containerd[1818]: time="2025-01-29T12:01:48.054583150Z" level=error msg="Failed to destroy network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.055311 containerd[1818]: time="2025-01-29T12:01:48.054932257Z" level=error msg="encountered an error cleaning up failed sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.055311 containerd[1818]: time="2025-01-29T12:01:48.055007959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd77c95d-mx8dw,Uid:2c4f2484-ef71-4219-a544-c54173b13527,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.055514 kubelet[3458]: E0129 12:01:48.054747 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qjbtv_calico-system(b8235b86-2d3c-40e3-bcdb-97985970fdbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qjbtv_calico-system(b8235b86-2d3c-40e3-bcdb-97985970fdbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe" Jan 29 12:01:48.055849 kubelet[3458]: E0129 12:01:48.055711 3458 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.055849 kubelet[3458]: E0129 12:01:48.055791 3458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd77c95d-mx8dw" 
Jan 29 12:01:48.055849 kubelet[3458]: E0129 12:01:48.055819 3458 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd77c95d-mx8dw" Jan 29 12:01:48.056205 kubelet[3458]: E0129 12:01:48.056116 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8cd77c95d-mx8dw_calico-apiserver(2c4f2484-ef71-4219-a544-c54173b13527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8cd77c95d-mx8dw_calico-apiserver(2c4f2484-ef71-4219-a544-c54173b13527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd77c95d-mx8dw" podUID="2c4f2484-ef71-4219-a544-c54173b13527" Jan 29 12:01:48.830230 kubelet[3458]: I0129 12:01:48.830188 3458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:01:48.833163 containerd[1818]: time="2025-01-29T12:01:48.831660041Z" level=info msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\"" Jan 29 12:01:48.833163 containerd[1818]: time="2025-01-29T12:01:48.831933447Z" level=info msg="Ensure that sandbox 94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4 in task-service has been cleanup successfully" Jan 29 12:01:48.837182 
kubelet[3458]: I0129 12:01:48.837142 3458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:01:48.838935 containerd[1818]: time="2025-01-29T12:01:48.838897802Z" level=info msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\"" Jan 29 12:01:48.839195 containerd[1818]: time="2025-01-29T12:01:48.839146607Z" level=info msg="Ensure that sandbox 61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864 in task-service has been cleanup successfully" Jan 29 12:01:48.843523 kubelet[3458]: I0129 12:01:48.843479 3458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:01:48.846781 containerd[1818]: time="2025-01-29T12:01:48.846732476Z" level=info msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\"" Jan 29 12:01:48.847224 containerd[1818]: time="2025-01-29T12:01:48.847184586Z" level=info msg="Ensure that sandbox 3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f in task-service has been cleanup successfully" Jan 29 12:01:48.852014 kubelet[3458]: I0129 12:01:48.851821 3458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:01:48.856531 containerd[1818]: time="2025-01-29T12:01:48.855983382Z" level=info msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\"" Jan 29 12:01:48.856531 containerd[1818]: time="2025-01-29T12:01:48.856211287Z" level=info msg="Ensure that sandbox 325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f in task-service has been cleanup successfully" Jan 29 12:01:48.862734 kubelet[3458]: I0129 12:01:48.862663 3458 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:01:48.867778 containerd[1818]: time="2025-01-29T12:01:48.867722143Z" level=info msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\"" Jan 29 12:01:48.868106 containerd[1818]: time="2025-01-29T12:01:48.868071051Z" level=info msg="Ensure that sandbox 99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c in task-service has been cleanup successfully" Jan 29 12:01:48.877541 kubelet[3458]: I0129 12:01:48.876011 3458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:01:48.877739 containerd[1818]: time="2025-01-29T12:01:48.876850746Z" level=info msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\"" Jan 29 12:01:48.877739 containerd[1818]: time="2025-01-29T12:01:48.877095952Z" level=info msg="Ensure that sandbox 5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd in task-service has been cleanup successfully" Jan 29 12:01:48.898178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4-shm.mount: Deactivated successfully. Jan 29 12:01:48.898408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f-shm.mount: Deactivated successfully. 
Jan 29 12:01:48.977671 containerd[1818]: time="2025-01-29T12:01:48.977595588Z" level=error msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\" failed" error="failed to destroy network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.977997 kubelet[3458]: E0129 12:01:48.977944 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:01:48.978397 kubelet[3458]: E0129 12:01:48.978061 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864"} Jan 29 12:01:48.978509 kubelet[3458]: E0129 12:01:48.978454 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74643b96-e115-4d67-8115-6c9ce09d0502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:48.978619 kubelet[3458]: E0129 12:01:48.978520 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"74643b96-e115-4d67-8115-6c9ce09d0502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd77c95d-fll92" podUID="74643b96-e115-4d67-8115-6c9ce09d0502" Jan 29 12:01:48.991874 containerd[1818]: time="2025-01-29T12:01:48.991795404Z" level=error msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\" failed" error="failed to destroy network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:48.996329 kubelet[3458]: E0129 12:01:48.995539 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:01:48.996329 kubelet[3458]: E0129 12:01:48.995632 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f"} Jan 29 12:01:48.996329 kubelet[3458]: E0129 12:01:48.995688 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c4f2484-ef71-4219-a544-c54173b13527\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:48.996329 kubelet[3458]: E0129 12:01:48.995722 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c4f2484-ef71-4219-a544-c54173b13527\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd77c95d-mx8dw" podUID="2c4f2484-ef71-4219-a544-c54173b13527" Jan 29 12:01:49.032451 containerd[1818]: time="2025-01-29T12:01:49.031890596Z" level=error msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\" failed" error="failed to destroy network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:49.032655 kubelet[3458]: E0129 12:01:49.032270 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:01:49.032655 kubelet[3458]: E0129 12:01:49.032336 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4"} Jan 29 12:01:49.032655 kubelet[3458]: E0129 12:01:49.032382 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:49.033264 kubelet[3458]: E0129 12:01:49.032823 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe" Jan 29 12:01:49.045273 containerd[1818]: time="2025-01-29T12:01:49.045192392Z" level=error msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\" failed" error="failed to destroy network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 29 12:01:49.045920 kubelet[3458]: E0129 12:01:49.045639 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:01:49.045920 kubelet[3458]: E0129 12:01:49.045768 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f"} Jan 29 12:01:49.045920 kubelet[3458]: E0129 12:01:49.045822 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b50dd55e-6299-4267-acbd-6f981eec9b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:49.045920 kubelet[3458]: E0129 12:01:49.045854 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b50dd55e-6299-4267-acbd-6f981eec9b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6564b8cd7d-xflv7" podUID="b50dd55e-6299-4267-acbd-6f981eec9b80" Jan 29 
12:01:49.053147 containerd[1818]: time="2025-01-29T12:01:49.052691659Z" level=error msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\" failed" error="failed to destroy network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:49.053404 kubelet[3458]: E0129 12:01:49.052978 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:01:49.053404 kubelet[3458]: E0129 12:01:49.053023 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd"} Jan 29 12:01:49.053404 kubelet[3458]: E0129 12:01:49.053066 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8277e1b7-9bd3-41a3-9974-0a547b3a9790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:49.053404 kubelet[3458]: E0129 12:01:49.053100 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"8277e1b7-9bd3-41a3-9974-0a547b3a9790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r59h8" podUID="8277e1b7-9bd3-41a3-9974-0a547b3a9790" Jan 29 12:01:49.053752 containerd[1818]: time="2025-01-29T12:01:49.053695982Z" level=error msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\" failed" error="failed to destroy network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:49.053977 kubelet[3458]: E0129 12:01:49.053945 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:01:49.054065 kubelet[3458]: E0129 12:01:49.053986 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c"} Jan 29 12:01:49.054065 kubelet[3458]: E0129 12:01:49.054025 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b50e5645-b858-440a-ac13-91ab3ef24687\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:49.054065 kubelet[3458]: E0129 12:01:49.054053 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b50e5645-b858-440a-ac13-91ab3ef24687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wqscz" podUID="b50e5645-b858-440a-ac13-91ab3ef24687" Jan 29 12:01:56.063872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273480752.mount: Deactivated successfully. 
Jan 29 12:01:56.115917 containerd[1818]: time="2025-01-29T12:01:56.115827160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:56.119961 containerd[1818]: time="2025-01-29T12:01:56.119868856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 12:01:56.124552 containerd[1818]: time="2025-01-29T12:01:56.124464765Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:56.130273 containerd[1818]: time="2025-01-29T12:01:56.130185101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:56.131601 containerd[1818]: time="2025-01-29T12:01:56.131393230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.302623805s" Jan 29 12:01:56.131601 containerd[1818]: time="2025-01-29T12:01:56.131459832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 12:01:56.150836 containerd[1818]: time="2025-01-29T12:01:56.148379234Z" level=info msg="CreateContainer within sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:01:56.199147 containerd[1818]: time="2025-01-29T12:01:56.199091540Z" level=info 
msg="CreateContainer within sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9e83c871823306576035de2add5d28f5d74c3dbe6de47ae54bf2f4f049fad338\"" Jan 29 12:01:56.200059 containerd[1818]: time="2025-01-29T12:01:56.200016262Z" level=info msg="StartContainer for \"9e83c871823306576035de2add5d28f5d74c3dbe6de47ae54bf2f4f049fad338\"" Jan 29 12:01:56.263730 containerd[1818]: time="2025-01-29T12:01:56.263550173Z" level=info msg="StartContainer for \"9e83c871823306576035de2add5d28f5d74c3dbe6de47ae54bf2f4f049fad338\" returns successfully" Jan 29 12:01:56.559372 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:01:56.559605 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 12:01:57.060003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e83c871823306576035de2add5d28f5d74c3dbe6de47ae54bf2f4f049fad338-rootfs.mount: Deactivated successfully. 
Jan 29 12:01:58.360547 kubelet[3458]: I0129 12:01:58.360163 3458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:01:58.379289 kubelet[3458]: I0129 12:01:58.379209 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5mwpl" podStartSLOduration=2.940658514 podStartE2EDuration="26.37918358s" podCreationTimestamp="2025-01-29 12:01:32 +0000 UTC" firstStartedPulling="2025-01-29 12:01:32.694147094 +0000 UTC m=+22.686463131" lastFinishedPulling="2025-01-29 12:01:56.13267226 +0000 UTC m=+46.124988197" observedRunningTime="2025-01-29 12:01:56.920875903 +0000 UTC m=+46.913191940" watchObservedRunningTime="2025-01-29 12:01:58.37918358 +0000 UTC m=+48.371499617" Jan 29 12:01:59.671926 containerd[1818]: time="2025-01-29T12:01:59.671481309Z" level=info msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\"" Jan 29 12:01:59.768485 containerd[1818]: time="2025-01-29T12:01:59.672538434Z" level=info msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\"" Jan 29 12:01:59.770582 containerd[1818]: time="2025-01-29T12:01:59.770081753Z" level=error msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\" failed" error="failed to destroy network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:59.770705 kubelet[3458]: E0129 12:01:59.770371 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:01:59.770705 kubelet[3458]: E0129 12:01:59.770532 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f"} Jan 29 12:01:59.770705 kubelet[3458]: E0129 12:01:59.770653 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c4f2484-ef71-4219-a544-c54173b13527\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:59.771666 kubelet[3458]: E0129 12:01:59.770708 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c4f2484-ef71-4219-a544-c54173b13527\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd77c95d-mx8dw" podUID="2c4f2484-ef71-4219-a544-c54173b13527" Jan 29 12:01:59.771666 kubelet[3458]: E0129 12:01:59.771074 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:01:59.771666 kubelet[3458]: E0129 12:01:59.771120 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864"} Jan 29 12:01:59.771666 kubelet[3458]: E0129 12:01:59.771152 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74643b96-e115-4d67-8115-6c9ce09d0502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:59.771813 containerd[1818]: time="2025-01-29T12:01:59.770888772Z" level=error msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\" failed" error="failed to destroy network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:59.771847 kubelet[3458]: E0129 12:01:59.771177 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74643b96-e115-4d67-8115-6c9ce09d0502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-8cd77c95d-fll92" podUID="74643b96-e115-4d67-8115-6c9ce09d0502" Jan 29 12:01:59.811176 containerd[1818]: time="2025-01-29T12:01:59.811097129Z" level=info msg="shim disconnected" id=9e83c871823306576035de2add5d28f5d74c3dbe6de47ae54bf2f4f049fad338 namespace=k8s.io Jan 29 12:01:59.811176 containerd[1818]: time="2025-01-29T12:01:59.811156430Z" level=warning msg="cleaning up after shim disconnected" id=9e83c871823306576035de2add5d28f5d74c3dbe6de47ae54bf2f4f049fad338 namespace=k8s.io Jan 29 12:01:59.811176 containerd[1818]: time="2025-01-29T12:01:59.811169930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:01:59.910818 kubelet[3458]: I0129 12:01:59.910744 3458 scope.go:117] "RemoveContainer" containerID="9e83c871823306576035de2add5d28f5d74c3dbe6de47ae54bf2f4f049fad338" Jan 29 12:01:59.915399 containerd[1818]: time="2025-01-29T12:01:59.914206380Z" level=info msg="CreateContainer within sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Jan 29 12:01:59.951695 containerd[1818]: time="2025-01-29T12:01:59.951559069Z" level=info msg="CreateContainer within sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa\"" Jan 29 12:01:59.953111 containerd[1818]: time="2025-01-29T12:01:59.953055304Z" level=info msg="StartContainer for \"1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa\"" Jan 29 12:02:00.036085 containerd[1818]: time="2025-01-29T12:02:00.036018777Z" level=info msg="StartContainer for \"1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa\" returns successfully" Jan 29 12:02:00.136728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa-rootfs.mount: Deactivated 
successfully. Jan 29 12:02:00.148057 containerd[1818]: time="2025-01-29T12:02:00.147741034Z" level=info msg="shim disconnected" id=1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa namespace=k8s.io Jan 29 12:02:00.148057 containerd[1818]: time="2025-01-29T12:02:00.147821335Z" level=warning msg="cleaning up after shim disconnected" id=1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa namespace=k8s.io Jan 29 12:02:00.148057 containerd[1818]: time="2025-01-29T12:02:00.147833436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:00.672011 containerd[1818]: time="2025-01-29T12:02:00.671871869Z" level=info msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\"" Jan 29 12:02:00.710711 containerd[1818]: time="2025-01-29T12:02:00.710647461Z" level=error msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\" failed" error="failed to destroy network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:00.711054 kubelet[3458]: E0129 12:02:00.711003 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:02:00.711214 kubelet[3458]: E0129 12:02:00.711078 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f"} Jan 29 12:02:00.711214 kubelet[3458]: E0129 12:02:00.711188 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b50dd55e-6299-4267-acbd-6f981eec9b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:00.711333 kubelet[3458]: E0129 12:02:00.711224 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b50dd55e-6299-4267-acbd-6f981eec9b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6564b8cd7d-xflv7" podUID="b50dd55e-6299-4267-acbd-6f981eec9b80" Jan 29 12:02:00.914527 kubelet[3458]: I0129 12:02:00.914488 3458 scope.go:117] "RemoveContainer" containerID="9e83c871823306576035de2add5d28f5d74c3dbe6de47ae54bf2f4f049fad338" Jan 29 12:02:00.915064 kubelet[3458]: I0129 12:02:00.915038 3458 scope.go:117] "RemoveContainer" containerID="1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa" Jan 29 12:02:00.916059 kubelet[3458]: E0129 12:02:00.916021 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node 
pod=calico-node-5mwpl_calico-system(61c07a8a-f154-4bb5-9a31-5f0a71d7b709)\"" pod="calico-system/calico-node-5mwpl" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" Jan 29 12:02:00.917903 containerd[1818]: time="2025-01-29T12:02:00.917628526Z" level=info msg="RemoveContainer for \"9e83c871823306576035de2add5d28f5d74c3dbe6de47ae54bf2f4f049fad338\"" Jan 29 12:02:00.928339 containerd[1818]: time="2025-01-29T12:02:00.928173569Z" level=info msg="RemoveContainer for \"9e83c871823306576035de2add5d28f5d74c3dbe6de47ae54bf2f4f049fad338\" returns successfully" Jan 29 12:02:01.674358 containerd[1818]: time="2025-01-29T12:02:01.674155642Z" level=info msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\"" Jan 29 12:02:01.712708 containerd[1818]: time="2025-01-29T12:02:01.712638628Z" level=error msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\" failed" error="failed to destroy network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:01.713106 kubelet[3458]: E0129 12:02:01.712941 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:02:01.713106 kubelet[3458]: E0129 12:02:01.713021 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c"} Jan 29 
12:02:01.713106 kubelet[3458]: E0129 12:02:01.713069 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b50e5645-b858-440a-ac13-91ab3ef24687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:01.713277 kubelet[3458]: E0129 12:02:01.713104 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b50e5645-b858-440a-ac13-91ab3ef24687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wqscz" podUID="b50e5645-b858-440a-ac13-91ab3ef24687" Jan 29 12:02:04.673053 containerd[1818]: time="2025-01-29T12:02:04.672637271Z" level=info msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\"" Jan 29 12:02:04.675911 containerd[1818]: time="2025-01-29T12:02:04.675331333Z" level=info msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\"" Jan 29 12:02:04.730951 containerd[1818]: time="2025-01-29T12:02:04.730879012Z" level=error msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\" failed" error="failed to destroy network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 12:02:04.731348 kubelet[3458]: E0129 12:02:04.731289 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:02:04.731805 kubelet[3458]: E0129 12:02:04.731366 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4"} Jan 29 12:02:04.731805 kubelet[3458]: E0129 12:02:04.731411 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:04.731805 kubelet[3458]: E0129 12:02:04.731525 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjbtv" 
podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe" Jan 29 12:02:04.733515 containerd[1818]: time="2025-01-29T12:02:04.733463571Z" level=error msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\" failed" error="failed to destroy network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:04.733745 kubelet[3458]: E0129 12:02:04.733704 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:02:04.733835 kubelet[3458]: E0129 12:02:04.733756 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd"} Jan 29 12:02:04.733835 kubelet[3458]: E0129 12:02:04.733799 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8277e1b7-9bd3-41a3-9974-0a547b3a9790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:04.733943 kubelet[3458]: E0129 12:02:04.733836 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"8277e1b7-9bd3-41a3-9974-0a547b3a9790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r59h8" podUID="8277e1b7-9bd3-41a3-9974-0a547b3a9790" Jan 29 12:02:10.672828 containerd[1818]: time="2025-01-29T12:02:10.672775793Z" level=info msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\"" Jan 29 12:02:10.705201 containerd[1818]: time="2025-01-29T12:02:10.705132640Z" level=error msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\" failed" error="failed to destroy network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:10.705491 kubelet[3458]: E0129 12:02:10.705432 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:02:10.705982 kubelet[3458]: E0129 12:02:10.705510 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864"} Jan 29 12:02:10.705982 kubelet[3458]: E0129 
12:02:10.705559 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74643b96-e115-4d67-8115-6c9ce09d0502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:10.705982 kubelet[3458]: E0129 12:02:10.705593 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74643b96-e115-4d67-8115-6c9ce09d0502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd77c95d-fll92" podUID="74643b96-e115-4d67-8115-6c9ce09d0502" Jan 29 12:02:11.671500 containerd[1818]: time="2025-01-29T12:02:11.671133560Z" level=info msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\"" Jan 29 12:02:11.703728 containerd[1818]: time="2025-01-29T12:02:11.703663311Z" level=error msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\" failed" error="failed to destroy network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.704566 kubelet[3458]: E0129 12:02:11.704156 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to destroy network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:02:11.704566 kubelet[3458]: E0129 12:02:11.704254 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f"} Jan 29 12:02:11.704566 kubelet[3458]: E0129 12:02:11.704325 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c4f2484-ef71-4219-a544-c54173b13527\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:11.704566 kubelet[3458]: E0129 12:02:11.704361 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c4f2484-ef71-4219-a544-c54173b13527\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd77c95d-mx8dw" podUID="2c4f2484-ef71-4219-a544-c54173b13527" Jan 29 12:02:12.672515 containerd[1818]: time="2025-01-29T12:02:12.672464695Z" level=info msg="StopPodSandbox for 
\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\"" Jan 29 12:02:12.700666 containerd[1818]: time="2025-01-29T12:02:12.700599245Z" level=error msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\" failed" error="failed to destroy network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:12.700947 kubelet[3458]: E0129 12:02:12.700897 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:02:12.701363 kubelet[3458]: E0129 12:02:12.700964 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f"} Jan 29 12:02:12.701363 kubelet[3458]: E0129 12:02:12.701008 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b50dd55e-6299-4267-acbd-6f981eec9b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:12.701363 kubelet[3458]: E0129 12:02:12.701041 3458 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"b50dd55e-6299-4267-acbd-6f981eec9b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6564b8cd7d-xflv7" podUID="b50dd55e-6299-4267-acbd-6f981eec9b80" Jan 29 12:02:14.674373 containerd[1818]: time="2025-01-29T12:02:14.674097143Z" level=info msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\"" Jan 29 12:02:14.705582 containerd[1818]: time="2025-01-29T12:02:14.705519869Z" level=error msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\" failed" error="failed to destroy network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:14.705853 kubelet[3458]: E0129 12:02:14.705796 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:02:14.706351 kubelet[3458]: E0129 12:02:14.705871 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c"} Jan 29 
12:02:14.706351 kubelet[3458]: E0129 12:02:14.705917 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b50e5645-b858-440a-ac13-91ab3ef24687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:14.706351 kubelet[3458]: E0129 12:02:14.705947 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b50e5645-b858-440a-ac13-91ab3ef24687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wqscz" podUID="b50e5645-b858-440a-ac13-91ab3ef24687" Jan 29 12:02:15.670055 kubelet[3458]: I0129 12:02:15.669996 3458 scope.go:117] "RemoveContainer" containerID="1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa" Jan 29 12:02:15.674866 containerd[1818]: time="2025-01-29T12:02:15.674807264Z" level=info msg="CreateContainer within sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Jan 29 12:02:15.737909 containerd[1818]: time="2025-01-29T12:02:15.737848620Z" level=info msg="CreateContainer within sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607\"" Jan 29 12:02:15.740254 
containerd[1818]: time="2025-01-29T12:02:15.738789542Z" level=info msg="StartContainer for \"2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607\"" Jan 29 12:02:15.825610 containerd[1818]: time="2025-01-29T12:02:15.825552847Z" level=info msg="StartContainer for \"2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607\" returns successfully" Jan 29 12:02:15.921827 containerd[1818]: time="2025-01-29T12:02:15.921604366Z" level=info msg="shim disconnected" id=2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607 namespace=k8s.io Jan 29 12:02:15.921827 containerd[1818]: time="2025-01-29T12:02:15.921716569Z" level=warning msg="cleaning up after shim disconnected" id=2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607 namespace=k8s.io Jan 29 12:02:15.921827 containerd[1818]: time="2025-01-29T12:02:15.921733569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:15.956043 kubelet[3458]: I0129 12:02:15.955449 3458 scope.go:117] "RemoveContainer" containerID="1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa" Jan 29 12:02:15.956043 kubelet[3458]: I0129 12:02:15.955846 3458 scope.go:117] "RemoveContainer" containerID="2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607" Jan 29 12:02:15.957572 containerd[1818]: time="2025-01-29T12:02:15.956779079Z" level=info msg="RemoveContainer for \"1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa\"" Jan 29 12:02:15.957935 kubelet[3458]: E0129 12:02:15.957911 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-5mwpl_calico-system(61c07a8a-f154-4bb5-9a31-5f0a71d7b709)\"" pod="calico-system/calico-node-5mwpl" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" Jan 29 12:02:15.967735 containerd[1818]: time="2025-01-29T12:02:15.967696631Z" level=info msg="RemoveContainer for 
\"1093a43002790a4f69d9974300c5786f6a601db1df46ad590e866c980a1fc1fa\" returns successfully" Jan 29 12:02:16.672760 containerd[1818]: time="2025-01-29T12:02:16.671981797Z" level=info msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\"" Jan 29 12:02:16.701093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607-rootfs.mount: Deactivated successfully. Jan 29 12:02:16.704502 containerd[1818]: time="2025-01-29T12:02:16.704413939Z" level=error msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\" failed" error="failed to destroy network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:16.705086 kubelet[3458]: E0129 12:02:16.704754 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:02:16.705086 kubelet[3458]: E0129 12:02:16.704823 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd"} Jan 29 12:02:16.705086 kubelet[3458]: E0129 12:02:16.704884 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8277e1b7-9bd3-41a3-9974-0a547b3a9790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:16.705086 kubelet[3458]: E0129 12:02:16.704920 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8277e1b7-9bd3-41a3-9974-0a547b3a9790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r59h8" podUID="8277e1b7-9bd3-41a3-9974-0a547b3a9790" Jan 29 12:02:18.672189 containerd[1818]: time="2025-01-29T12:02:18.672143849Z" level=info msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\"" Jan 29 12:02:18.700261 containerd[1818]: time="2025-01-29T12:02:18.700197331Z" level=error msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\" failed" error="failed to destroy network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:18.700592 kubelet[3458]: E0129 12:02:18.700498 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:02:18.700592 kubelet[3458]: E0129 12:02:18.700583 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4"} Jan 29 12:02:18.700997 kubelet[3458]: E0129 12:02:18.700627 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:18.700997 kubelet[3458]: E0129 12:02:18.700660 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe" Jan 29 12:02:22.672777 containerd[1818]: time="2025-01-29T12:02:22.672679201Z" level=info msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\"" Jan 29 12:02:22.704551 containerd[1818]: time="2025-01-29T12:02:22.704480634Z" level=error msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\" failed" error="failed to destroy network for sandbox 
\"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:22.704825 kubelet[3458]: E0129 12:02:22.704773 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:02:22.705315 kubelet[3458]: E0129 12:02:22.704844 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864"} Jan 29 12:02:22.705315 kubelet[3458]: E0129 12:02:22.704892 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74643b96-e115-4d67-8115-6c9ce09d0502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:22.705507 kubelet[3458]: E0129 12:02:22.705328 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74643b96-e115-4d67-8115-6c9ce09d0502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd77c95d-fll92" podUID="74643b96-e115-4d67-8115-6c9ce09d0502" Jan 29 12:02:25.672011 containerd[1818]: time="2025-01-29T12:02:25.671220582Z" level=info msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\"" Jan 29 12:02:25.672011 containerd[1818]: time="2025-01-29T12:02:25.671921398Z" level=info msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\"" Jan 29 12:02:25.720062 containerd[1818]: time="2025-01-29T12:02:25.719993601Z" level=error msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\" failed" error="failed to destroy network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:25.720281 containerd[1818]: time="2025-01-29T12:02:25.720012102Z" level=error msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\" failed" error="failed to destroy network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:25.720446 kubelet[3458]: E0129 12:02:25.720325 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:02:25.720446 kubelet[3458]: E0129 12:02:25.720399 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f"} Jan 29 12:02:25.720953 kubelet[3458]: E0129 12:02:25.720472 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:02:25.720953 kubelet[3458]: E0129 12:02:25.720500 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f"} Jan 29 12:02:25.720953 kubelet[3458]: E0129 12:02:25.720536 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c4f2484-ef71-4219-a544-c54173b13527\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:25.720953 kubelet[3458]: E0129 12:02:25.720570 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c4f2484-ef71-4219-a544-c54173b13527\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd77c95d-mx8dw" podUID="2c4f2484-ef71-4219-a544-c54173b13527" Jan 29 12:02:25.721139 kubelet[3458]: E0129 12:02:25.720645 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b50dd55e-6299-4267-acbd-6f981eec9b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:25.721139 kubelet[3458]: E0129 12:02:25.720672 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b50dd55e-6299-4267-acbd-6f981eec9b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6564b8cd7d-xflv7" podUID="b50dd55e-6299-4267-acbd-6f981eec9b80" Jan 29 12:02:25.749599 kubelet[3458]: I0129 12:02:25.748441 3458 scope.go:117] "RemoveContainer" containerID="2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607" Jan 29 12:02:25.749599 kubelet[3458]: E0129 12:02:25.749288 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=calico-node pod=calico-node-5mwpl_calico-system(61c07a8a-f154-4bb5-9a31-5f0a71d7b709)\"" pod="calico-system/calico-node-5mwpl" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" Jan 29 12:02:26.671299 containerd[1818]: time="2025-01-29T12:02:26.671101726Z" level=info msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\"" Jan 29 12:02:26.713734 containerd[1818]: time="2025-01-29T12:02:26.713679103Z" level=error msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\" failed" error="failed to destroy network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:26.714342 kubelet[3458]: E0129 12:02:26.713932 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:02:26.714342 kubelet[3458]: E0129 12:02:26.714002 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c"} Jan 29 12:02:26.714342 kubelet[3458]: E0129 12:02:26.714051 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b50e5645-b858-440a-ac13-91ab3ef24687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:26.714342 kubelet[3458]: E0129 12:02:26.714085 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b50e5645-b858-440a-ac13-91ab3ef24687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wqscz" podUID="b50e5645-b858-440a-ac13-91ab3ef24687" Jan 29 12:02:31.671076 containerd[1818]: time="2025-01-29T12:02:31.670916956Z" level=info msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\"" Jan 29 12:02:31.714093 containerd[1818]: time="2025-01-29T12:02:31.714030345Z" level=error msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\" failed" error="failed to destroy network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:31.714491 kubelet[3458]: E0129 12:02:31.714389 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:02:31.714970 kubelet[3458]: E0129 12:02:31.714522 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd"} Jan 29 12:02:31.714970 kubelet[3458]: E0129 12:02:31.714575 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8277e1b7-9bd3-41a3-9974-0a547b3a9790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:31.714970 kubelet[3458]: E0129 12:02:31.714610 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8277e1b7-9bd3-41a3-9974-0a547b3a9790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r59h8" podUID="8277e1b7-9bd3-41a3-9974-0a547b3a9790" Jan 29 12:02:32.940762 containerd[1818]: time="2025-01-29T12:02:32.940698671Z" level=info msg="StopPodSandbox for \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\"" Jan 29 12:02:32.941476 containerd[1818]: time="2025-01-29T12:02:32.940788373Z" level=info msg="Container to stop \"2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:02:32.941476 containerd[1818]: 
time="2025-01-29T12:02:32.940813573Z" level=info msg="Container to stop \"92148754b6454d2a7cbcbe036f213a88b55be6dc1470883cf183f6544a8160c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:02:32.941476 containerd[1818]: time="2025-01-29T12:02:32.940826274Z" level=info msg="Container to stop \"f29a5f02faf2b332214db3a8fbc81c7d2befcb51a7188274e72acf898d2e6005\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:02:32.954338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088-shm.mount: Deactivated successfully. Jan 29 12:02:33.004318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088-rootfs.mount: Deactivated successfully. Jan 29 12:02:33.028359 containerd[1818]: time="2025-01-29T12:02:33.028228893Z" level=info msg="shim disconnected" id=cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088 namespace=k8s.io Jan 29 12:02:33.033779 containerd[1818]: time="2025-01-29T12:02:33.029643129Z" level=warning msg="cleaning up after shim disconnected" id=cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088 namespace=k8s.io Jan 29 12:02:33.033779 containerd[1818]: time="2025-01-29T12:02:33.029785733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:33.061341 containerd[1818]: time="2025-01-29T12:02:33.061224331Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:02:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:02:33.064746 containerd[1818]: time="2025-01-29T12:02:33.064693919Z" level=info msg="TearDown network for sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" successfully" Jan 29 12:02:33.064746 containerd[1818]: 
time="2025-01-29T12:02:33.064743720Z" level=info msg="StopPodSandbox for \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" returns successfully" Jan 29 12:02:33.198646 kubelet[3458]: I0129 12:02:33.196596 3458 topology_manager.go:215] "Topology Admit Handler" podUID="a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952" podNamespace="calico-system" podName="calico-node-f975b" Jan 29 12:02:33.200545 kubelet[3458]: E0129 12:02:33.198665 3458 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" containerName="flexvol-driver" Jan 29 12:02:33.200545 kubelet[3458]: E0129 12:02:33.198687 3458 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" containerName="install-cni" Jan 29 12:02:33.200545 kubelet[3458]: E0129 12:02:33.198696 3458 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" containerName="calico-node" Jan 29 12:02:33.200545 kubelet[3458]: E0129 12:02:33.198705 3458 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" containerName="calico-node" Jan 29 12:02:33.200545 kubelet[3458]: E0129 12:02:33.198714 3458 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" containerName="calico-node" Jan 29 12:02:33.200545 kubelet[3458]: I0129 12:02:33.198766 3458 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" containerName="calico-node" Jan 29 12:02:33.200545 kubelet[3458]: I0129 12:02:33.198777 3458 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" containerName="calico-node" Jan 29 12:02:33.200545 kubelet[3458]: I0129 12:02:33.198784 3458 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" containerName="calico-node" Jan 29 12:02:33.259562 
kubelet[3458]: I0129 12:02:33.259082 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-tigera-ca-bundle\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.259562 kubelet[3458]: I0129 12:02:33.259160 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-bin-dir\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.259562 kubelet[3458]: I0129 12:02:33.259210 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:33.259562 kubelet[3458]: I0129 12:02:33.259308 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-lib-modules\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.259562 kubelet[3458]: I0129 12:02:33.259355 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:33.259562 kubelet[3458]: I0129 12:02:33.259406 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-var-lib-calico\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.259976 kubelet[3458]: I0129 12:02:33.259459 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:33.259976 kubelet[3458]: I0129 12:02:33.259564 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-node-certs\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.260104 kubelet[3458]: I0129 12:02:33.260073 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-flexvol-driver-host\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.260160 kubelet[3458]: I0129 12:02:33.260131 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-var-run-calico\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.260207 kubelet[3458]: I0129 12:02:33.260164 3458 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5vh4h\" (UniqueName: \"kubernetes.io/projected/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-kube-api-access-5vh4h\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.260207 kubelet[3458]: I0129 12:02:33.260187 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-policysync\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.260291 kubelet[3458]: I0129 12:02:33.260219 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-net-dir\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.260291 kubelet[3458]: I0129 12:02:33.260242 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-log-dir\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.260377 kubelet[3458]: I0129 12:02:33.260264 3458 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-xtables-lock\") pod \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\" (UID: \"61c07a8a-f154-4bb5-9a31-5f0a71d7b709\") " Jan 29 12:02:33.260435 kubelet[3458]: I0129 12:02:33.260385 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-var-run-calico\") pod \"calico-node-f975b\" (UID: 
\"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260489 kubelet[3458]: I0129 12:02:33.260437 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-flexvol-driver-host\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260489 kubelet[3458]: I0129 12:02:33.260466 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-cni-net-dir\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260581 kubelet[3458]: I0129 12:02:33.260496 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-node-certs\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260581 kubelet[3458]: I0129 12:02:33.260537 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-tigera-ca-bundle\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260581 kubelet[3458]: I0129 12:02:33.260564 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-var-lib-calico\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " 
pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260707 kubelet[3458]: I0129 12:02:33.260604 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-cni-log-dir\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260707 kubelet[3458]: I0129 12:02:33.260631 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-policysync\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260707 kubelet[3458]: I0129 12:02:33.260689 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7vm\" (UniqueName: \"kubernetes.io/projected/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-kube-api-access-fm7vm\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260833 kubelet[3458]: I0129 12:02:33.260719 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-xtables-lock\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260833 kubelet[3458]: I0129 12:02:33.260763 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-cni-bin-dir\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260833 
kubelet[3458]: I0129 12:02:33.260807 3458 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952-lib-modules\") pod \"calico-node-f975b\" (UID: \"a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952\") " pod="calico-system/calico-node-f975b" Jan 29 12:02:33.260958 kubelet[3458]: I0129 12:02:33.260859 3458 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-bin-dir\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.260958 kubelet[3458]: I0129 12:02:33.260874 3458 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-lib-modules\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.260958 kubelet[3458]: I0129 12:02:33.260887 3458 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-var-lib-calico\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.261084 kubelet[3458]: I0129 12:02:33.260962 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:33.261084 kubelet[3458]: I0129 12:02:33.261003 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:33.263445 kubelet[3458]: I0129 12:02:33.261864 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-policysync" (OuterVolumeSpecName: "policysync") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:33.263445 kubelet[3458]: I0129 12:02:33.261952 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:33.263445 kubelet[3458]: I0129 12:02:33.261999 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:33.263445 kubelet[3458]: I0129 12:02:33.262029 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:33.268497 kubelet[3458]: I0129 12:02:33.268033 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:02:33.269793 systemd[1]: var-lib-kubelet-pods-61c07a8a\x2df154\x2d4bb5\x2d9a31\x2d5f0a71d7b709-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 29 12:02:33.274457 systemd[1]: var-lib-kubelet-pods-61c07a8a\x2df154\x2d4bb5\x2d9a31\x2d5f0a71d7b709-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5vh4h.mount: Deactivated successfully. Jan 29 12:02:33.280641 systemd[1]: var-lib-kubelet-pods-61c07a8a\x2df154\x2d4bb5\x2d9a31\x2d5f0a71d7b709-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 29 12:02:33.282512 kubelet[3458]: I0129 12:02:33.282458 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-node-certs" (OuterVolumeSpecName: "node-certs") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:02:33.282638 kubelet[3458]: I0129 12:02:33.282602 3458 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-kube-api-access-5vh4h" (OuterVolumeSpecName: "kube-api-access-5vh4h") pod "61c07a8a-f154-4bb5-9a31-5f0a71d7b709" (UID: "61c07a8a-f154-4bb5-9a31-5f0a71d7b709"). InnerVolumeSpecName "kube-api-access-5vh4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:02:33.361987 kubelet[3458]: I0129 12:02:33.361915 3458 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-tigera-ca-bundle\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.361987 kubelet[3458]: I0129 12:02:33.361978 3458 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-node-certs\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.361987 kubelet[3458]: I0129 12:02:33.361992 3458 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-net-dir\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.362351 kubelet[3458]: I0129 12:02:33.362028 3458 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-cni-log-dir\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.362351 kubelet[3458]: I0129 12:02:33.362039 3458 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-xtables-lock\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.362351 kubelet[3458]: I0129 12:02:33.362051 3458 
reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-flexvol-driver-host\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.362351 kubelet[3458]: I0129 12:02:33.362063 3458 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-var-run-calico\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.362351 kubelet[3458]: I0129 12:02:33.362075 3458 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5vh4h\" (UniqueName: \"kubernetes.io/projected/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-kube-api-access-5vh4h\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.362351 kubelet[3458]: I0129 12:02:33.362101 3458 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/61c07a8a-f154-4bb5-9a31-5f0a71d7b709-policysync\") on node \"ci-4081.3.0-a-b5939ece28\" DevicePath \"\"" Jan 29 12:02:33.513131 containerd[1818]: time="2025-01-29T12:02:33.512980303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f975b,Uid:a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952,Namespace:calico-system,Attempt:0,}" Jan 29 12:02:33.551730 containerd[1818]: time="2025-01-29T12:02:33.551244474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:33.551963 containerd[1818]: time="2025-01-29T12:02:33.551504881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:33.551963 containerd[1818]: time="2025-01-29T12:02:33.551536282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:33.551963 containerd[1818]: time="2025-01-29T12:02:33.551650485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:33.603655 containerd[1818]: time="2025-01-29T12:02:33.603565503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f975b,Uid:a3f1b2ce-8adc-45e5-9eb0-5b04d41ae952,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a6a13b034801175887e4108c03f814002df1fe80842bdd5bcd6bfa6ab745212\"" Jan 29 12:02:33.609630 containerd[1818]: time="2025-01-29T12:02:33.609466753Z" level=info msg="CreateContainer within sandbox \"7a6a13b034801175887e4108c03f814002df1fe80842bdd5bcd6bfa6ab745212\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:02:33.649363 containerd[1818]: time="2025-01-29T12:02:33.649085859Z" level=info msg="CreateContainer within sandbox \"7a6a13b034801175887e4108c03f814002df1fe80842bdd5bcd6bfa6ab745212\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ec184e40fb7a4fb4f28e579edba3138e1ec9bdc2b29af1d6075e50fb3b7712d4\"" Jan 29 12:02:33.650541 containerd[1818]: time="2025-01-29T12:02:33.650491895Z" level=info msg="StartContainer for \"ec184e40fb7a4fb4f28e579edba3138e1ec9bdc2b29af1d6075e50fb3b7712d4\"" Jan 29 12:02:33.682080 containerd[1818]: time="2025-01-29T12:02:33.682032896Z" level=info msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\"" Jan 29 12:02:33.742529 containerd[1818]: time="2025-01-29T12:02:33.742347727Z" level=info msg="StartContainer for \"ec184e40fb7a4fb4f28e579edba3138e1ec9bdc2b29af1d6075e50fb3b7712d4\" returns successfully" Jan 29 12:02:33.742529 containerd[1818]: time="2025-01-29T12:02:33.742508731Z" level=error msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\" failed" error="failed to destroy network for sandbox 
\"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:33.746019 kubelet[3458]: E0129 12:02:33.745947 3458 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:02:33.746196 kubelet[3458]: E0129 12:02:33.746030 3458 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4"} Jan 29 12:02:33.746196 kubelet[3458]: E0129 12:02:33.746076 3458 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:33.746196 kubelet[3458]: E0129 12:02:33.746104 3458 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8235b86-2d3c-40e3-bcdb-97985970fdbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjbtv" podUID="b8235b86-2d3c-40e3-bcdb-97985970fdbe" Jan 29 12:02:33.855758 containerd[1818]: time="2025-01-29T12:02:33.855569802Z" level=info msg="shim disconnected" id=ec184e40fb7a4fb4f28e579edba3138e1ec9bdc2b29af1d6075e50fb3b7712d4 namespace=k8s.io Jan 29 12:02:33.855758 containerd[1818]: time="2025-01-29T12:02:33.855634904Z" level=warning msg="cleaning up after shim disconnected" id=ec184e40fb7a4fb4f28e579edba3138e1ec9bdc2b29af1d6075e50fb3b7712d4 namespace=k8s.io Jan 29 12:02:33.855758 containerd[1818]: time="2025-01-29T12:02:33.855647404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:34.021048 containerd[1818]: time="2025-01-29T12:02:34.020552092Z" level=info msg="CreateContainer within sandbox \"7a6a13b034801175887e4108c03f814002df1fe80842bdd5bcd6bfa6ab745212\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:02:34.022581 kubelet[3458]: I0129 12:02:34.021395 3458 scope.go:117] "RemoveContainer" containerID="2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607" Jan 29 12:02:34.025989 containerd[1818]: time="2025-01-29T12:02:34.025934228Z" level=info msg="RemoveContainer for \"2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607\"" Jan 29 12:02:34.040636 containerd[1818]: time="2025-01-29T12:02:34.040575200Z" level=info msg="RemoveContainer for \"2b69b50bc16ad53d591897fc54a695f78a351fc1ef3c964e1bdf4e3bdf87a607\" returns successfully" Jan 29 12:02:34.041594 kubelet[3458]: I0129 12:02:34.041523 3458 scope.go:117] "RemoveContainer" containerID="f29a5f02faf2b332214db3a8fbc81c7d2befcb51a7188274e72acf898d2e6005" Jan 29 12:02:34.043632 containerd[1818]: time="2025-01-29T12:02:34.043594777Z" level=info msg="RemoveContainer for \"f29a5f02faf2b332214db3a8fbc81c7d2befcb51a7188274e72acf898d2e6005\"" Jan 29 12:02:34.059281 
containerd[1818]: time="2025-01-29T12:02:34.059226274Z" level=info msg="RemoveContainer for \"f29a5f02faf2b332214db3a8fbc81c7d2befcb51a7188274e72acf898d2e6005\" returns successfully" Jan 29 12:02:34.061002 kubelet[3458]: I0129 12:02:34.059595 3458 scope.go:117] "RemoveContainer" containerID="92148754b6454d2a7cbcbe036f213a88b55be6dc1470883cf183f6544a8160c2" Jan 29 12:02:34.066834 containerd[1818]: time="2025-01-29T12:02:34.066785966Z" level=info msg="RemoveContainer for \"92148754b6454d2a7cbcbe036f213a88b55be6dc1470883cf183f6544a8160c2\"" Jan 29 12:02:34.090900 containerd[1818]: time="2025-01-29T12:02:34.090021856Z" level=info msg="CreateContainer within sandbox \"7a6a13b034801175887e4108c03f814002df1fe80842bdd5bcd6bfa6ab745212\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1ff43f7427e4a7052c398bb3f95fec83035fa73a37ce5cbc4f2067b1aa6c7b81\"" Jan 29 12:02:34.090900 containerd[1818]: time="2025-01-29T12:02:34.090234161Z" level=info msg="RemoveContainer for \"92148754b6454d2a7cbcbe036f213a88b55be6dc1470883cf183f6544a8160c2\" returns successfully" Jan 29 12:02:34.092825 containerd[1818]: time="2025-01-29T12:02:34.092778926Z" level=info msg="StartContainer for \"1ff43f7427e4a7052c398bb3f95fec83035fa73a37ce5cbc4f2067b1aa6c7b81\"" Jan 29 12:02:34.232031 containerd[1818]: time="2025-01-29T12:02:34.231974760Z" level=info msg="StartContainer for \"1ff43f7427e4a7052c398bb3f95fec83035fa73a37ce5cbc4f2067b1aa6c7b81\" returns successfully" Jan 29 12:02:34.678900 kubelet[3458]: I0129 12:02:34.678852 3458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c07a8a-f154-4bb5-9a31-5f0a71d7b709" path="/var/lib/kubelet/pods/61c07a8a-f154-4bb5-9a31-5f0a71d7b709/volumes" Jan 29 12:02:35.001850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ff43f7427e4a7052c398bb3f95fec83035fa73a37ce5cbc4f2067b1aa6c7b81-rootfs.mount: Deactivated successfully. 
Jan 29 12:02:35.011805 containerd[1818]: time="2025-01-29T12:02:35.011675760Z" level=info msg="shim disconnected" id=1ff43f7427e4a7052c398bb3f95fec83035fa73a37ce5cbc4f2067b1aa6c7b81 namespace=k8s.io Jan 29 12:02:35.011805 containerd[1818]: time="2025-01-29T12:02:35.011802663Z" level=warning msg="cleaning up after shim disconnected" id=1ff43f7427e4a7052c398bb3f95fec83035fa73a37ce5cbc4f2067b1aa6c7b81 namespace=k8s.io Jan 29 12:02:35.011805 containerd[1818]: time="2025-01-29T12:02:35.011815863Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:36.062958 containerd[1818]: time="2025-01-29T12:02:36.062709349Z" level=info msg="CreateContainer within sandbox \"7a6a13b034801175887e4108c03f814002df1fe80842bdd5bcd6bfa6ab745212\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:02:36.124185 containerd[1818]: time="2025-01-29T12:02:36.124130709Z" level=info msg="CreateContainer within sandbox \"7a6a13b034801175887e4108c03f814002df1fe80842bdd5bcd6bfa6ab745212\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"842585670ff3038206dfb3a21a456cba592d60795d43420f997839d570129ce6\"" Jan 29 12:02:36.127688 containerd[1818]: time="2025-01-29T12:02:36.127651898Z" level=info msg="StartContainer for \"842585670ff3038206dfb3a21a456cba592d60795d43420f997839d570129ce6\"" Jan 29 12:02:36.207304 containerd[1818]: time="2025-01-29T12:02:36.207255419Z" level=info msg="StartContainer for \"842585670ff3038206dfb3a21a456cba592d60795d43420f997839d570129ce6\" returns successfully" Jan 29 12:02:37.086098 systemd[1]: run-containerd-runc-k8s.io-842585670ff3038206dfb3a21a456cba592d60795d43420f997839d570129ce6-runc.yYbQLo.mount: Deactivated successfully. 
Jan 29 12:02:37.090552 kubelet[3458]: I0129 12:02:37.089923 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f975b" podStartSLOduration=4.089885032 podStartE2EDuration="4.089885032s" podCreationTimestamp="2025-01-29 12:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:37.085748727 +0000 UTC m=+87.078064664" watchObservedRunningTime="2025-01-29 12:02:37.089885032 +0000 UTC m=+87.082201069" Jan 29 12:02:37.674593 containerd[1818]: time="2025-01-29T12:02:37.674537379Z" level=info msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\"" Jan 29 12:02:37.675160 containerd[1818]: time="2025-01-29T12:02:37.675024791Z" level=info msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\"" Jan 29 12:02:37.676700 containerd[1818]: time="2025-01-29T12:02:37.676342124Z" level=info msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\"" Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:37.857 [INFO][5523] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:37.857 [INFO][5523] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" iface="eth0" netns="/var/run/netns/cni-83080bf3-19de-b998-321c-d96f26864a95" Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:37.858 [INFO][5523] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" iface="eth0" netns="/var/run/netns/cni-83080bf3-19de-b998-321c-d96f26864a95" Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:37.861 [INFO][5523] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" iface="eth0" netns="/var/run/netns/cni-83080bf3-19de-b998-321c-d96f26864a95" Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:37.867 [INFO][5523] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:37.867 [INFO][5523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:38.005 [INFO][5570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" HandleID="k8s-pod-network.61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:38.017 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:38.017 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:38.068 [WARNING][5570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" HandleID="k8s-pod-network.61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:38.068 [INFO][5570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" HandleID="k8s-pod-network.61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:38.073 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:38.090746 containerd[1818]: 2025-01-29 12:02:38.083 [INFO][5523] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:02:38.100521 containerd[1818]: time="2025-01-29T12:02:38.092556493Z" level=info msg="TearDown network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\" successfully" Jan 29 12:02:38.100521 containerd[1818]: time="2025-01-29T12:02:38.093474817Z" level=info msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\" returns successfully" Jan 29 12:02:38.103818 containerd[1818]: time="2025-01-29T12:02:38.103774278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd77c95d-fll92,Uid:74643b96-e115-4d67-8115-6c9ce09d0502,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:02:38.105095 systemd[1]: run-netns-cni\x2d83080bf3\x2d19de\x2db998\x2d321c\x2dd96f26864a95.mount: Deactivated successfully. 
Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:37.897 [INFO][5527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:37.898 [INFO][5527] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" iface="eth0" netns="/var/run/netns/cni-a1d1aa2f-66f4-7b99-93f3-9875ed9e2239" Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:37.898 [INFO][5527] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" iface="eth0" netns="/var/run/netns/cni-a1d1aa2f-66f4-7b99-93f3-9875ed9e2239" Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:37.899 [INFO][5527] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" iface="eth0" netns="/var/run/netns/cni-a1d1aa2f-66f4-7b99-93f3-9875ed9e2239" Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:37.899 [INFO][5527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:37.899 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:38.070 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" HandleID="k8s-pod-network.3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:38.080 
[INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:38.080 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:38.104 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" HandleID="k8s-pod-network.3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:38.104 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" HandleID="k8s-pod-network.3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:38.110 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:38.127748 containerd[1818]: 2025-01-29 12:02:38.117 [INFO][5527] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:02:38.133556 containerd[1818]: time="2025-01-29T12:02:38.133506233Z" level=info msg="TearDown network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\" successfully" Jan 29 12:02:38.133556 containerd[1818]: time="2025-01-29T12:02:38.133548534Z" level=info msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\" returns successfully" Jan 29 12:02:38.135368 containerd[1818]: time="2025-01-29T12:02:38.135300779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd77c95d-mx8dw,Uid:2c4f2484-ef71-4219-a544-c54173b13527,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:02:38.136908 systemd[1]: run-netns-cni\x2da1d1aa2f\x2d66f4\x2d7b99\x2d93f3\x2d9875ed9e2239.mount: Deactivated successfully. Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:37.918 [INFO][5528] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:37.919 [INFO][5528] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" iface="eth0" netns="/var/run/netns/cni-1b9559c2-c621-6fd5-fe51-a8aa5d6e4ea3" Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:37.919 [INFO][5528] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" iface="eth0" netns="/var/run/netns/cni-1b9559c2-c621-6fd5-fe51-a8aa5d6e4ea3" Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:37.921 [INFO][5528] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" iface="eth0" netns="/var/run/netns/cni-1b9559c2-c621-6fd5-fe51-a8aa5d6e4ea3" Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:37.922 [INFO][5528] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:37.922 [INFO][5528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:38.130 [INFO][5581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" HandleID="k8s-pod-network.325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:38.131 [INFO][5581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:38.138 [INFO][5581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:38.154 [WARNING][5581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" HandleID="k8s-pod-network.325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:38.154 [INFO][5581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" HandleID="k8s-pod-network.325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:38.157 [INFO][5581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:38.180995 containerd[1818]: 2025-01-29 12:02:38.172 [INFO][5528] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:02:38.194396 containerd[1818]: time="2025-01-29T12:02:38.190202773Z" level=info msg="TearDown network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\" successfully" Jan 29 12:02:38.194396 containerd[1818]: time="2025-01-29T12:02:38.190317376Z" level=info msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\" returns successfully" Jan 29 12:02:38.199747 systemd[1]: run-netns-cni\x2d1b9559c2\x2dc621\x2d6fd5\x2dfe51\x2da8aa5d6e4ea3.mount: Deactivated successfully. 
Jan 29 12:02:38.204309 containerd[1818]: time="2025-01-29T12:02:38.204266330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6564b8cd7d-xflv7,Uid:b50dd55e-6299-4267-acbd-6f981eec9b80,Namespace:calico-system,Attempt:1,}" Jan 29 12:02:38.530454 kernel: bpftool[5697]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:02:38.623636 systemd-networkd[1392]: cali3c06ce66266: Link UP Jan 29 12:02:38.624533 systemd-networkd[1392]: cali3c06ce66266: Gained carrier Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.329 [INFO][5607] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0 calico-apiserver-8cd77c95d- calico-apiserver 74643b96-e115-4d67-8115-6c9ce09d0502 930 0 2025-01-29 12:01:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8cd77c95d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-b5939ece28 calico-apiserver-8cd77c95d-fll92 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3c06ce66266 [] []}} ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-fll92" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.329 [INFO][5607] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-fll92" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.499 
[INFO][5652] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" HandleID="k8s-pod-network.56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.522 [INFO][5652] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" HandleID="k8s-pod-network.56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003056f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-b5939ece28", "pod":"calico-apiserver-8cd77c95d-fll92", "timestamp":"2025-01-29 12:02:38.499909238 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b5939ece28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.522 [INFO][5652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.522 [INFO][5652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.522 [INFO][5652] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b5939ece28' Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.530 [INFO][5652] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.543 [INFO][5652] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.556 [INFO][5652] ipam/ipam.go 489: Trying affinity for 192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.559 [INFO][5652] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.564 [INFO][5652] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.564 [INFO][5652] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.574 [INFO][5652] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529 Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.589 [INFO][5652] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.601 [INFO][5652] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.115.129/26] block=192.168.115.128/26 handle="k8s-pod-network.56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.601 [INFO][5652] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.129/26] handle="k8s-pod-network.56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.601 [INFO][5652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:38.665671 containerd[1818]: 2025-01-29 12:02:38.601 [INFO][5652] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.129/26] IPv6=[] ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" HandleID="k8s-pod-network.56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:02:38.667751 containerd[1818]: 2025-01-29 12:02:38.604 [INFO][5607] cni-plugin/k8s.go 386: Populated endpoint ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-fll92" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0", GenerateName:"calico-apiserver-8cd77c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"74643b96-e115-4d67-8115-6c9ce09d0502", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd77c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"", Pod:"calico-apiserver-8cd77c95d-fll92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c06ce66266", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:38.667751 containerd[1818]: 2025-01-29 12:02:38.604 [INFO][5607] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.129/32] ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-fll92" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:02:38.667751 containerd[1818]: 2025-01-29 12:02:38.605 [INFO][5607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c06ce66266 ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-fll92" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:02:38.667751 containerd[1818]: 2025-01-29 12:02:38.623 [INFO][5607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-fll92" 
WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:02:38.667751 containerd[1818]: 2025-01-29 12:02:38.623 [INFO][5607] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-fll92" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0", GenerateName:"calico-apiserver-8cd77c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"74643b96-e115-4d67-8115-6c9ce09d0502", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd77c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529", Pod:"calico-apiserver-8cd77c95d-fll92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c06ce66266", MAC:"66:d5:90:e3:3d:a8", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:38.667751 containerd[1818]: 2025-01-29 12:02:38.658 [INFO][5607] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-fll92" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:02:38.713888 systemd-networkd[1392]: calidec73df0337: Link UP Jan 29 12:02:38.718197 systemd-networkd[1392]: calidec73df0337: Gained carrier Jan 29 12:02:38.732753 containerd[1818]: time="2025-01-29T12:02:38.732355819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:38.732753 containerd[1818]: time="2025-01-29T12:02:38.732601525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:38.732753 containerd[1818]: time="2025-01-29T12:02:38.732665526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:38.734874 containerd[1818]: time="2025-01-29T12:02:38.733703450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
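The containerd entries above record one complete Calico CNI flow for the first apiserver pod: the host-wide IPAM lock is acquired, the affinity for block 192.168.115.128/26 is confirmed, 192.168.115.129/26 is claimed, and the populated WorkloadEndpoint is written to the datastore. When triaging logs like this, the single most useful line is the `ipam_plugin.go 283` summary, which ties the assigned address to the container ID. A minimal sketch for extracting those summaries from a captured log — the regex and function name are my own assumptions based on the format shown here, not part of any Calico tooling:

```python
import re

# Matches the Calico IPAM summary embedded in containerd log output, e.g.:
#   ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses
#   IPv4=[192.168.115.129/26] IPv6=[] ContainerID="56905e7b..." ...
ASSIGN_RE = re.compile(
    r'Calico CNI IPAM assigned addresses '
    r'IPv4=\[(?P<ipv4>[^\]]*)\] IPv6=\[(?P<ipv6>[^\]]*)\] '
    r'ContainerID="(?P<cid>[0-9a-f]+)"'
)

def assigned_addresses(log_text):
    """Return a dict mapping container ID -> list of assigned IPv4 CIDRs."""
    result = {}
    for m in ASSIGN_RE.finditer(log_text):
        ips = [ip.strip() for ip in m.group("ipv4").split(",") if ip.strip()]
        result[m.group("cid")] = ips
    return result
```

Run against the full capture, this yields one entry per sandbox (here: .129, .130, and .131 for the two apiserver pods and the kube-controllers pod).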
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.396 [INFO][5623] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0 calico-apiserver-8cd77c95d- calico-apiserver 2c4f2484-ef71-4219-a544-c54173b13527 931 0 2025-01-29 12:01:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8cd77c95d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-b5939ece28 calico-apiserver-8cd77c95d-mx8dw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidec73df0337 [] []}} ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-mx8dw" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.402 [INFO][5623] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-mx8dw" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.544 [INFO][5666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" HandleID="k8s-pod-network.ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.564 [INFO][5666] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" HandleID="k8s-pod-network.ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002843e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-b5939ece28", "pod":"calico-apiserver-8cd77c95d-mx8dw", "timestamp":"2025-01-29 12:02:38.544606373 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b5939ece28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.564 [INFO][5666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.601 [INFO][5666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.601 [INFO][5666] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b5939ece28' Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.605 [INFO][5666] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.615 [INFO][5666] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.625 [INFO][5666] ipam/ipam.go 489: Trying affinity for 192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.634 [INFO][5666] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.639 [INFO][5666] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.639 [INFO][5666] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.649 [INFO][5666] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.674 [INFO][5666] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.695 [INFO][5666] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.115.130/26] block=192.168.115.128/26 handle="k8s-pod-network.ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.695 [INFO][5666] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.130/26] handle="k8s-pod-network.ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.696 [INFO][5666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:38.762187 containerd[1818]: 2025-01-29 12:02:38.696 [INFO][5666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.130/26] IPv6=[] ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" HandleID="k8s-pod-network.ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:02:38.763218 containerd[1818]: 2025-01-29 12:02:38.703 [INFO][5623] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-mx8dw" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0", GenerateName:"calico-apiserver-8cd77c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c4f2484-ef71-4219-a544-c54173b13527", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd77c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"", Pod:"calico-apiserver-8cd77c95d-mx8dw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidec73df0337", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:38.763218 containerd[1818]: 2025-01-29 12:02:38.704 [INFO][5623] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.130/32] ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-mx8dw" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:02:38.763218 containerd[1818]: 2025-01-29 12:02:38.704 [INFO][5623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidec73df0337 ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-mx8dw" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:02:38.763218 containerd[1818]: 2025-01-29 12:02:38.721 [INFO][5623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-mx8dw" 
WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:02:38.763218 containerd[1818]: 2025-01-29 12:02:38.725 [INFO][5623] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-mx8dw" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0", GenerateName:"calico-apiserver-8cd77c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c4f2484-ef71-4219-a544-c54173b13527", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd77c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee", Pod:"calico-apiserver-8cd77c95d-mx8dw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidec73df0337", MAC:"8e:5a:68:94:57:22", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:38.763218 containerd[1818]: 2025-01-29 12:02:38.759 [INFO][5623] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee" Namespace="calico-apiserver" Pod="calico-apiserver-8cd77c95d-mx8dw" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:02:38.814393 systemd-networkd[1392]: cali8d1c60c22f4: Link UP Jan 29 12:02:38.814795 systemd-networkd[1392]: cali8d1c60c22f4: Gained carrier Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.452 [INFO][5639] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0 calico-kube-controllers-6564b8cd7d- calico-system b50dd55e-6299-4267-acbd-6f981eec9b80 932 0 2025-01-29 12:01:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6564b8cd7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-b5939ece28 calico-kube-controllers-6564b8cd7d-xflv7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8d1c60c22f4 [] []}} ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Namespace="calico-system" Pod="calico-kube-controllers-6564b8cd7d-xflv7" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.455 [INFO][5639] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Namespace="calico-system" Pod="calico-kube-controllers-6564b8cd7d-xflv7" 
WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.579 [INFO][5683] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" HandleID="k8s-pod-network.eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.599 [INFO][5683] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" HandleID="k8s-pod-network.eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e8ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-b5939ece28", "pod":"calico-kube-controllers-6564b8cd7d-xflv7", "timestamp":"2025-01-29 12:02:38.579015446 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b5939ece28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.599 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.696 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.696 [INFO][5683] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b5939ece28' Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.700 [INFO][5683] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.723 [INFO][5683] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.737 [INFO][5683] ipam/ipam.go 489: Trying affinity for 192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.748 [INFO][5683] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.758 [INFO][5683] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.758 [INFO][5683] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.761 [INFO][5683] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7 Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.771 [INFO][5683] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.788 [INFO][5683] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.115.131/26] block=192.168.115.128/26 handle="k8s-pod-network.eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.789 [INFO][5683] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.131/26] handle="k8s-pod-network.eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.790 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:38.851293 containerd[1818]: 2025-01-29 12:02:38.791 [INFO][5683] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.131/26] IPv6=[] ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" HandleID="k8s-pod-network.eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:02:38.852296 containerd[1818]: 2025-01-29 12:02:38.806 [INFO][5639] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Namespace="calico-system" Pod="calico-kube-controllers-6564b8cd7d-xflv7" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0", GenerateName:"calico-kube-controllers-6564b8cd7d-", Namespace:"calico-system", SelfLink:"", UID:"b50dd55e-6299-4267-acbd-6f981eec9b80", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6564b8cd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"", Pod:"calico-kube-controllers-6564b8cd7d-xflv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d1c60c22f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:38.852296 containerd[1818]: 2025-01-29 12:02:38.806 [INFO][5639] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.131/32] ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Namespace="calico-system" Pod="calico-kube-controllers-6564b8cd7d-xflv7" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:02:38.852296 containerd[1818]: 2025-01-29 12:02:38.806 [INFO][5639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d1c60c22f4 ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Namespace="calico-system" Pod="calico-kube-controllers-6564b8cd7d-xflv7" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:02:38.852296 containerd[1818]: 2025-01-29 12:02:38.811 [INFO][5639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Namespace="calico-system" Pod="calico-kube-controllers-6564b8cd7d-xflv7" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:02:38.852296 containerd[1818]: 2025-01-29 12:02:38.812 [INFO][5639] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Namespace="calico-system" Pod="calico-kube-controllers-6564b8cd7d-xflv7" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0", GenerateName:"calico-kube-controllers-6564b8cd7d-", Namespace:"calico-system", SelfLink:"", UID:"b50dd55e-6299-4267-acbd-6f981eec9b80", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6564b8cd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7", Pod:"calico-kube-controllers-6564b8cd7d-xflv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d1c60c22f4", MAC:"a6:75:a4:98:b8:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:38.852296 containerd[1818]: 2025-01-29 12:02:38.837 [INFO][5639] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7" Namespace="calico-system" Pod="calico-kube-controllers-6564b8cd7d-xflv7" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:02:38.879188 containerd[1818]: time="2025-01-29T12:02:38.877261457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:38.879188 containerd[1818]: time="2025-01-29T12:02:38.877632565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:38.879188 containerd[1818]: time="2025-01-29T12:02:38.877705867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:38.879188 containerd[1818]: time="2025-01-29T12:02:38.877944472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:38.972187 containerd[1818]: time="2025-01-29T12:02:38.972123142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd77c95d-fll92,Uid:74643b96-e115-4d67-8115-6c9ce09d0502,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529\"" Jan 29 12:02:38.979090 containerd[1818]: time="2025-01-29T12:02:38.978984600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:38.979586 containerd[1818]: time="2025-01-29T12:02:38.979543512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:38.979832 containerd[1818]: time="2025-01-29T12:02:38.979789218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:38.980230 containerd[1818]: time="2025-01-29T12:02:38.980190827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:38.986476 containerd[1818]: time="2025-01-29T12:02:38.986400570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:02:39.034770 containerd[1818]: time="2025-01-29T12:02:39.034708383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd77c95d-mx8dw,Uid:2c4f2484-ef71-4219-a544-c54173b13527,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee\"" Jan 29 12:02:39.133413 containerd[1818]: time="2025-01-29T12:02:39.132969846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6564b8cd7d-xflv7,Uid:b50dd55e-6299-4267-acbd-6f981eec9b80,Namespace:calico-system,Attempt:1,} returns sandbox id \"eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7\"" Jan 29 12:02:39.185813 systemd-networkd[1392]: vxlan.calico: Link UP Jan 29 12:02:39.185823 systemd-networkd[1392]: vxlan.calico: Gained carrier Jan 29 12:02:39.672558 containerd[1818]: time="2025-01-29T12:02:39.672145565Z" level=info msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\"" Jan 29 12:02:39.710634 systemd-networkd[1392]: cali3c06ce66266: Gained IPv6LL Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.732 
[INFO][5949] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.733 [INFO][5949] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" iface="eth0" netns="/var/run/netns/cni-bc7622e2-49a4-ce94-365a-d28edb72a89a" Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.733 [INFO][5949] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" iface="eth0" netns="/var/run/netns/cni-bc7622e2-49a4-ce94-365a-d28edb72a89a" Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.734 [INFO][5949] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" iface="eth0" netns="/var/run/netns/cni-bc7622e2-49a4-ce94-365a-d28edb72a89a" Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.734 [INFO][5949] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.734 [INFO][5949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.756 [INFO][5957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" HandleID="k8s-pod-network.99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.756 [INFO][5957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.756 [INFO][5957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.762 [WARNING][5957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" HandleID="k8s-pod-network.99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.762 [INFO][5957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" HandleID="k8s-pod-network.99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.764 [INFO][5957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:39.766965 containerd[1818]: 2025-01-29 12:02:39.765 [INFO][5949] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:02:39.771181 containerd[1818]: time="2025-01-29T12:02:39.767138552Z" level=info msg="TearDown network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\" successfully" Jan 29 12:02:39.771181 containerd[1818]: time="2025-01-29T12:02:39.767180553Z" level=info msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\" returns successfully" Jan 29 12:02:39.771181 containerd[1818]: time="2025-01-29T12:02:39.769148199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wqscz,Uid:b50e5645-b858-440a-ac13-91ab3ef24687,Namespace:kube-system,Attempt:1,}" Jan 29 12:02:39.774163 systemd[1]: run-netns-cni\x2dbc7622e2\x2d49a4\x2dce94\x2d365a\x2dd28edb72a89a.mount: Deactivated successfully. Jan 29 12:02:39.841594 systemd-networkd[1392]: calidec73df0337: Gained IPv6LL Jan 29 12:02:39.988457 systemd-networkd[1392]: cali7f5b98f8786: Link UP Jan 29 12:02:39.990278 systemd-networkd[1392]: cali7f5b98f8786: Gained carrier Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.883 [INFO][5964] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0 coredns-7db6d8ff4d- kube-system b50e5645-b858-440a-ac13-91ab3ef24687 952 0 2025-01-29 12:01:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-b5939ece28 coredns-7db6d8ff4d-wqscz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7f5b98f8786 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wqscz" 
WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.883 [INFO][5964] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wqscz" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.914 [INFO][5974] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" HandleID="k8s-pod-network.e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.923 [INFO][5974] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" HandleID="k8s-pod-network.e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002909f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-b5939ece28", "pod":"coredns-7db6d8ff4d-wqscz", "timestamp":"2025-01-29 12:02:39.914372544 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b5939ece28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.923 [INFO][5974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.923 [INFO][5974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.923 [INFO][5974] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b5939ece28' Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.926 [INFO][5974] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.931 [INFO][5974] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.935 [INFO][5974] ipam/ipam.go 489: Trying affinity for 192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.937 [INFO][5974] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.940 [INFO][5974] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.940 [INFO][5974] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.942 [INFO][5974] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5 Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.953 [INFO][5974] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.982 [INFO][5974] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.115.132/26] block=192.168.115.128/26 handle="k8s-pod-network.e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.983 [INFO][5974] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.132/26] handle="k8s-pod-network.e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" host="ci-4081.3.0-a-b5939ece28" Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.983 [INFO][5974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:40.011471 containerd[1818]: 2025-01-29 12:02:39.983 [INFO][5974] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.132/26] IPv6=[] ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" HandleID="k8s-pod-network.e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:02:40.012796 containerd[1818]: 2025-01-29 12:02:39.984 [INFO][5964] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wqscz" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b50e5645-b858-440a-ac13-91ab3ef24687", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"", Pod:"coredns-7db6d8ff4d-wqscz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f5b98f8786", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:40.012796 containerd[1818]: 2025-01-29 12:02:39.985 [INFO][5964] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.132/32] ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wqscz" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:02:40.012796 containerd[1818]: 2025-01-29 12:02:39.985 [INFO][5964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f5b98f8786 ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wqscz" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:02:40.012796 containerd[1818]: 2025-01-29 12:02:39.989 [INFO][5964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wqscz" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:02:40.012796 containerd[1818]: 2025-01-29 12:02:39.991 [INFO][5964] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wqscz" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b50e5645-b858-440a-ac13-91ab3ef24687", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5", Pod:"coredns-7db6d8ff4d-wqscz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f5b98f8786", MAC:"76:d4:a4:03:ee:3a", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:40.012796 containerd[1818]: 2025-01-29 12:02:40.006 [INFO][5964] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wqscz" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:02:40.044134 containerd[1818]: time="2025-01-29T12:02:40.043353214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:40.044134 containerd[1818]: time="2025-01-29T12:02:40.043529618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:40.044134 containerd[1818]: time="2025-01-29T12:02:40.043546419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:40.044134 containerd[1818]: time="2025-01-29T12:02:40.043732823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:40.115921 containerd[1818]: time="2025-01-29T12:02:40.115880085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wqscz,Uid:b50e5645-b858-440a-ac13-91ab3ef24687,Namespace:kube-system,Attempt:1,} returns sandbox id \"e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5\"" Jan 29 12:02:40.120103 containerd[1818]: time="2025-01-29T12:02:40.120062981Z" level=info msg="CreateContainer within sandbox \"e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:02:40.158715 systemd-networkd[1392]: cali8d1c60c22f4: Gained IPv6LL Jan 29 12:02:40.190787 containerd[1818]: time="2025-01-29T12:02:40.190704908Z" level=info msg="CreateContainer within sandbox \"e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"183090fcf22301192540bd52e6dd812fe041364b9223f5bbc034bf71f47e56f3\"" Jan 29 12:02:40.192007 containerd[1818]: time="2025-01-29T12:02:40.191947937Z" level=info msg="StartContainer for \"183090fcf22301192540bd52e6dd812fe041364b9223f5bbc034bf71f47e56f3\"" Jan 29 12:02:40.283911 containerd[1818]: time="2025-01-29T12:02:40.283776952Z" level=info msg="StartContainer for \"183090fcf22301192540bd52e6dd812fe041364b9223f5bbc034bf71f47e56f3\" returns successfully" Jan 29 12:02:40.415766 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL Jan 29 12:02:41.091327 kubelet[3458]: I0129 12:02:41.091213 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wqscz" podStartSLOduration=77.091188255 podStartE2EDuration="1m17.091188255s" podCreationTimestamp="2025-01-29 12:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:41.090563841 +0000 UTC m=+91.082879878" 
watchObservedRunningTime="2025-01-29 12:02:41.091188255 +0000 UTC m=+91.083504192" Jan 29 12:02:41.439794 systemd-networkd[1392]: cali7f5b98f8786: Gained IPv6LL Jan 29 12:02:42.123598 containerd[1818]: time="2025-01-29T12:02:42.123536443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:42.126171 containerd[1818]: time="2025-01-29T12:02:42.125897998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 12:02:42.132700 containerd[1818]: time="2025-01-29T12:02:42.132038139Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:42.137787 containerd[1818]: time="2025-01-29T12:02:42.137731570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:42.138844 containerd[1818]: time="2025-01-29T12:02:42.138800095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.152151818s" Jan 29 12:02:42.138964 containerd[1818]: time="2025-01-29T12:02:42.138849696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:02:42.141308 containerd[1818]: time="2025-01-29T12:02:42.140819441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 
12:02:42.144317 containerd[1818]: time="2025-01-29T12:02:42.144286621Z" level=info msg="CreateContainer within sandbox \"56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:02:42.187055 containerd[1818]: time="2025-01-29T12:02:42.186989405Z" level=info msg="CreateContainer within sandbox \"56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"26703b8e59e801f4982ccefa4d4f76f9f849a9d14dcc0e20525468359e2afcdd\"" Jan 29 12:02:42.188001 containerd[1818]: time="2025-01-29T12:02:42.187839625Z" level=info msg="StartContainer for \"26703b8e59e801f4982ccefa4d4f76f9f849a9d14dcc0e20525468359e2afcdd\"" Jan 29 12:02:42.274246 containerd[1818]: time="2025-01-29T12:02:42.274174314Z" level=info msg="StartContainer for \"26703b8e59e801f4982ccefa4d4f76f9f849a9d14dcc0e20525468359e2afcdd\" returns successfully" Jan 29 12:02:42.465098 containerd[1818]: time="2025-01-29T12:02:42.465038012Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:42.469285 containerd[1818]: time="2025-01-29T12:02:42.469221609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:02:42.472153 containerd[1818]: time="2025-01-29T12:02:42.472098475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 331.235233ms" Jan 29 12:02:42.472153 containerd[1818]: time="2025-01-29T12:02:42.472156876Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:02:42.473994 containerd[1818]: time="2025-01-29T12:02:42.473962818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:02:42.480310 containerd[1818]: time="2025-01-29T12:02:42.480250263Z" level=info msg="CreateContainer within sandbox \"ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:02:42.519449 containerd[1818]: time="2025-01-29T12:02:42.517392219Z" level=info msg="CreateContainer within sandbox \"ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"960c8c6206a3574972b8e0fa338b4c1506b0abdc2c3a453d5f1e45f2a577ce0f\"" Jan 29 12:02:42.520618 containerd[1818]: time="2025-01-29T12:02:42.520568892Z" level=info msg="StartContainer for \"960c8c6206a3574972b8e0fa338b4c1506b0abdc2c3a453d5f1e45f2a577ce0f\"" Jan 29 12:02:42.623717 containerd[1818]: time="2025-01-29T12:02:42.622684545Z" level=info msg="StartContainer for \"960c8c6206a3574972b8e0fa338b4c1506b0abdc2c3a453d5f1e45f2a577ce0f\" returns successfully" Jan 29 12:02:43.286062 kubelet[3458]: I0129 12:02:43.285978 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8cd77c95d-mx8dw" podStartSLOduration=68.854932382 podStartE2EDuration="1m12.285952428s" podCreationTimestamp="2025-01-29 12:01:31 +0000 UTC" firstStartedPulling="2025-01-29 12:02:39.042626865 +0000 UTC m=+89.034942802" lastFinishedPulling="2025-01-29 12:02:42.473646811 +0000 UTC m=+92.465962848" observedRunningTime="2025-01-29 12:02:43.285535619 +0000 UTC m=+93.277851556" watchObservedRunningTime="2025-01-29 12:02:43.285952428 +0000 UTC m=+93.278268365" Jan 29 12:02:43.286728 kubelet[3458]: I0129 12:02:43.286115 3458 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8cd77c95d-fll92" podStartSLOduration=69.128995399 podStartE2EDuration="1m12.286105632s" podCreationTimestamp="2025-01-29 12:01:31 +0000 UTC" firstStartedPulling="2025-01-29 12:02:38.983089094 +0000 UTC m=+88.975405131" lastFinishedPulling="2025-01-29 12:02:42.140199327 +0000 UTC m=+92.132515364" observedRunningTime="2025-01-29 12:02:43.137761614 +0000 UTC m=+93.130077551" watchObservedRunningTime="2025-01-29 12:02:43.286105632 +0000 UTC m=+93.278421569" Jan 29 12:02:45.578770 containerd[1818]: time="2025-01-29T12:02:45.578698859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:45.581032 containerd[1818]: time="2025-01-29T12:02:45.580951111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 12:02:45.588773 containerd[1818]: time="2025-01-29T12:02:45.586943749Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:45.592529 containerd[1818]: time="2025-01-29T12:02:45.592186570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:45.593299 containerd[1818]: time="2025-01-29T12:02:45.593252995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size 
\"35634244\" in 3.119107372s" Jan 29 12:02:45.593696 containerd[1818]: time="2025-01-29T12:02:45.593461400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 12:02:45.623790 containerd[1818]: time="2025-01-29T12:02:45.623707497Z" level=info msg="CreateContainer within sandbox \"eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:02:45.664324 containerd[1818]: time="2025-01-29T12:02:45.664268731Z" level=info msg="CreateContainer within sandbox \"eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1a12ee4cd89e5499d780d16f4fc4eb20b0ead24fba32f470981e516c9a085f70\"" Jan 29 12:02:45.666469 containerd[1818]: time="2025-01-29T12:02:45.665017048Z" level=info msg="StartContainer for \"1a12ee4cd89e5499d780d16f4fc4eb20b0ead24fba32f470981e516c9a085f70\"" Jan 29 12:02:45.677516 containerd[1818]: time="2025-01-29T12:02:45.677462135Z" level=info msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\"" Jan 29 12:02:45.827156 containerd[1818]: time="2025-01-29T12:02:45.827099783Z" level=info msg="StartContainer for \"1a12ee4cd89e5499d780d16f4fc4eb20b0ead24fba32f470981e516c9a085f70\" returns successfully" Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.835 [INFO][6201] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.835 [INFO][6201] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" iface="eth0" netns="/var/run/netns/cni-5affc11d-e7df-4269-7567-4f4d41695536" Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.835 [INFO][6201] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" iface="eth0" netns="/var/run/netns/cni-5affc11d-e7df-4269-7567-4f4d41695536" Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.836 [INFO][6201] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" iface="eth0" netns="/var/run/netns/cni-5affc11d-e7df-4269-7567-4f4d41695536" Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.837 [INFO][6201] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.837 [INFO][6201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.883 [INFO][6230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" HandleID="k8s-pod-network.5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0" Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.884 [INFO][6230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.884 [INFO][6230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.892 [WARNING][6230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" HandleID="k8s-pod-network.5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0"
Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.893 [INFO][6230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" HandleID="k8s-pod-network.5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0"
Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.932 [INFO][6230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:02:45.939021 containerd[1818]: 2025-01-29 12:02:45.935 [INFO][6201] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd"
Jan 29 12:02:45.939864 containerd[1818]: time="2025-01-29T12:02:45.939713678Z" level=info msg="TearDown network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\" successfully"
Jan 29 12:02:45.939864 containerd[1818]: time="2025-01-29T12:02:45.939759679Z" level=info msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\" returns successfully"
Jan 29 12:02:45.941404 containerd[1818]: time="2025-01-29T12:02:45.940890405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r59h8,Uid:8277e1b7-9bd3-41a3-9974-0a547b3a9790,Namespace:kube-system,Attempt:1,}"
Jan 29 12:02:46.147167 kubelet[3458]: I0129 12:02:46.146692 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6564b8cd7d-xflv7" podStartSLOduration=67.68555558 podStartE2EDuration="1m14.146667647s" podCreationTimestamp="2025-01-29 12:01:32 +0000 UTC" firstStartedPulling="2025-01-29 12:02:39.135460204 +0000 UTC m=+89.127776141" lastFinishedPulling="2025-01-29 12:02:45.596572271 +0000 UTC m=+95.588888208" observedRunningTime="2025-01-29 12:02:46.145903029 +0000 UTC m=+96.138219066" watchObservedRunningTime="2025-01-29 12:02:46.146667647 +0000 UTC m=+96.138983584"
Jan 29 12:02:46.267591 systemd-networkd[1392]: calib40e5cbce4b: Link UP
Jan 29 12:02:46.267877 systemd-networkd[1392]: calib40e5cbce4b: Gained carrier
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.088 [INFO][6237] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0 coredns-7db6d8ff4d- kube-system 8277e1b7-9bd3-41a3-9974-0a547b3a9790 1012 0 2025-01-29 12:01:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-b5939ece28 coredns-7db6d8ff4d-r59h8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib40e5cbce4b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r59h8" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.088 [INFO][6237] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r59h8" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.171 [INFO][6248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" HandleID="k8s-pod-network.ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.199 [INFO][6248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" HandleID="k8s-pod-network.ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000376cb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-b5939ece28", "pod":"coredns-7db6d8ff4d-r59h8", "timestamp":"2025-01-29 12:02:46.171773126 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b5939ece28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.200 [INFO][6248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.200 [INFO][6248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.200 [INFO][6248] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b5939ece28'
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.202 [INFO][6248] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.210 [INFO][6248] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.215 [INFO][6248] ipam/ipam.go 489: Trying affinity for 192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.219 [INFO][6248] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.225 [INFO][6248] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.226 [INFO][6248] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.228 [INFO][6248] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.237 [INFO][6248] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.257 [INFO][6248] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.133/26] block=192.168.115.128/26 handle="k8s-pod-network.ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.257 [INFO][6248] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.133/26] handle="k8s-pod-network.ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.258 [INFO][6248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:02:46.390126 containerd[1818]: 2025-01-29 12:02:46.258 [INFO][6248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.133/26] IPv6=[] ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" HandleID="k8s-pod-network.ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0"
Jan 29 12:02:46.391287 containerd[1818]: 2025-01-29 12:02:46.262 [INFO][6237] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r59h8" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8277e1b7-9bd3-41a3-9974-0a547b3a9790", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"", Pod:"coredns-7db6d8ff4d-r59h8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib40e5cbce4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:02:46.391287 containerd[1818]: 2025-01-29 12:02:46.263 [INFO][6237] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.133/32] ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r59h8" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0"
Jan 29 12:02:46.391287 containerd[1818]: 2025-01-29 12:02:46.263 [INFO][6237] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib40e5cbce4b ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r59h8" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0"
Jan 29 12:02:46.391287 containerd[1818]: 2025-01-29 12:02:46.271 [INFO][6237] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r59h8" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0"
Jan 29 12:02:46.391287 containerd[1818]: 2025-01-29 12:02:46.271 [INFO][6237] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r59h8" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8277e1b7-9bd3-41a3-9974-0a547b3a9790", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35", Pod:"coredns-7db6d8ff4d-r59h8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib40e5cbce4b", MAC:"5a:fb:b6:48:d7:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:02:46.391287 containerd[1818]: 2025-01-29 12:02:46.387 [INFO][6237] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r59h8" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0"
Jan 29 12:02:46.440542 containerd[1818]: time="2025-01-29T12:02:46.439019384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:02:46.440542 containerd[1818]: time="2025-01-29T12:02:46.439102786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:02:46.440542 containerd[1818]: time="2025-01-29T12:02:46.439140086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:02:46.440542 containerd[1818]: time="2025-01-29T12:02:46.439237489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:02:46.508781 containerd[1818]: time="2025-01-29T12:02:46.508703289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r59h8,Uid:8277e1b7-9bd3-41a3-9974-0a547b3a9790,Namespace:kube-system,Attempt:1,} returns sandbox id \"ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35\""
Jan 29 12:02:46.512397 containerd[1818]: time="2025-01-29T12:02:46.512362174Z" level=info msg="CreateContainer within sandbox \"ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 12:02:46.544693 containerd[1818]: time="2025-01-29T12:02:46.544534715Z" level=info msg="CreateContainer within sandbox \"ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ba44d7f4effa32335b3c24365ea4fe035c6426813928f518a11679ffac4c931\""
Jan 29 12:02:46.546615 containerd[1818]: time="2025-01-29T12:02:46.546011549Z" level=info msg="StartContainer for \"1ba44d7f4effa32335b3c24365ea4fe035c6426813928f518a11679ffac4c931\""
Jan 29 12:02:46.611745 systemd[1]: run-netns-cni\x2d5affc11d\x2de7df\x2d4269\x2d7567\x2d4f4d41695536.mount: Deactivated successfully.
Jan 29 12:02:46.628293 containerd[1818]: time="2025-01-29T12:02:46.628175642Z" level=info msg="StartContainer for \"1ba44d7f4effa32335b3c24365ea4fe035c6426813928f518a11679ffac4c931\" returns successfully"
Jan 29 12:02:47.145030 kubelet[3458]: I0129 12:02:47.143662 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r59h8" podStartSLOduration=84.14363612 podStartE2EDuration="1m24.14363612s" podCreationTimestamp="2025-01-29 12:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:47.142494594 +0000 UTC m=+97.134810531" watchObservedRunningTime="2025-01-29 12:02:47.14363612 +0000 UTC m=+97.135952157"
Jan 29 12:02:47.671325 containerd[1818]: time="2025-01-29T12:02:47.670943271Z" level=info msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\""
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.738 [INFO][6401] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4"
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.738 [INFO][6401] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" iface="eth0" netns="/var/run/netns/cni-17aa06c6-14d9-5af2-5b35-6863785aa413"
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.739 [INFO][6401] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" iface="eth0" netns="/var/run/netns/cni-17aa06c6-14d9-5af2-5b35-6863785aa413"
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.740 [INFO][6401] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" iface="eth0" netns="/var/run/netns/cni-17aa06c6-14d9-5af2-5b35-6863785aa413"
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.740 [INFO][6401] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4"
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.740 [INFO][6401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4"
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.762 [INFO][6408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" HandleID="k8s-pod-network.94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0"
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.762 [INFO][6408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.762 [INFO][6408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.767 [WARNING][6408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" HandleID="k8s-pod-network.94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0"
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.767 [INFO][6408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" HandleID="k8s-pod-network.94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0"
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.768 [INFO][6408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:02:47.771007 containerd[1818]: 2025-01-29 12:02:47.769 [INFO][6401] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4"
Jan 29 12:02:47.771916 containerd[1818]: time="2025-01-29T12:02:47.771743293Z" level=info msg="TearDown network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\" successfully"
Jan 29 12:02:47.771916 containerd[1818]: time="2025-01-29T12:02:47.771784494Z" level=info msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\" returns successfully"
Jan 29 12:02:47.772850 containerd[1818]: time="2025-01-29T12:02:47.772819818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjbtv,Uid:b8235b86-2d3c-40e3-bcdb-97985970fdbe,Namespace:calico-system,Attempt:1,}"
Jan 29 12:02:47.779579 systemd[1]: run-netns-cni\x2d17aa06c6\x2d14d9\x2d5af2\x2d5b35\x2d6863785aa413.mount: Deactivated successfully.
Jan 29 12:02:47.953518 systemd-networkd[1392]: calidbc91f44e1f: Link UP
Jan 29 12:02:47.955621 systemd-networkd[1392]: calidbc91f44e1f: Gained carrier
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.859 [INFO][6415] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0 csi-node-driver- calico-system b8235b86-2d3c-40e3-bcdb-97985970fdbe 1033 0 2025-01-29 12:01:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-b5939ece28 csi-node-driver-qjbtv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidbc91f44e1f [] []}} ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Namespace="calico-system" Pod="csi-node-driver-qjbtv" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.859 [INFO][6415] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Namespace="calico-system" Pod="csi-node-driver-qjbtv" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.886 [INFO][6426] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" HandleID="k8s-pod-network.383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.897 [INFO][6426] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" HandleID="k8s-pod-network.383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335780), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-b5939ece28", "pod":"csi-node-driver-qjbtv", "timestamp":"2025-01-29 12:02:47.886330434 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b5939ece28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.897 [INFO][6426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.897 [INFO][6426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.897 [INFO][6426] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b5939ece28'
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.899 [INFO][6426] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.903 [INFO][6426] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.907 [INFO][6426] ipam/ipam.go 489: Trying affinity for 192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.908 [INFO][6426] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.911 [INFO][6426] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.911 [INFO][6426] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.912 [INFO][6426] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.921 [INFO][6426] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.948 [INFO][6426] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.134/26] block=192.168.115.128/26 handle="k8s-pod-network.383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.948 [INFO][6426] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.134/26] handle="k8s-pod-network.383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" host="ci-4081.3.0-a-b5939ece28"
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.948 [INFO][6426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:02:47.977973 containerd[1818]: 2025-01-29 12:02:47.948 [INFO][6426] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.134/26] IPv6=[] ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" HandleID="k8s-pod-network.383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0"
Jan 29 12:02:47.979107 containerd[1818]: 2025-01-29 12:02:47.950 [INFO][6415] cni-plugin/k8s.go 386: Populated endpoint ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Namespace="calico-system" Pod="csi-node-driver-qjbtv" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8235b86-2d3c-40e3-bcdb-97985970fdbe", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"", Pod:"csi-node-driver-qjbtv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbc91f44e1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:02:47.979107 containerd[1818]: 2025-01-29 12:02:47.951 [INFO][6415] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.134/32] ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Namespace="calico-system" Pod="csi-node-driver-qjbtv" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0"
Jan 29 12:02:47.979107 containerd[1818]: 2025-01-29 12:02:47.951 [INFO][6415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbc91f44e1f ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Namespace="calico-system" Pod="csi-node-driver-qjbtv" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0"
Jan 29 12:02:47.979107 containerd[1818]: 2025-01-29 12:02:47.955 [INFO][6415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Namespace="calico-system" Pod="csi-node-driver-qjbtv" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0"
Jan 29 12:02:47.979107 containerd[1818]: 2025-01-29 12:02:47.956 [INFO][6415] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Namespace="calico-system" Pod="csi-node-driver-qjbtv" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8235b86-2d3c-40e3-bcdb-97985970fdbe", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb", Pod:"csi-node-driver-qjbtv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbc91f44e1f", MAC:"e6:20:59:4f:77:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:02:47.979107 containerd[1818]: 2025-01-29 12:02:47.974 [INFO][6415] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb" Namespace="calico-system" Pod="csi-node-driver-qjbtv" WorkloadEndpoint="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0"
Jan 29 12:02:48.015400 containerd[1818]: time="2025-01-29T12:02:48.014750893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:02:48.015671 containerd[1818]: time="2025-01-29T12:02:48.015371707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:02:48.015671 containerd[1818]: time="2025-01-29T12:02:48.015388007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:02:48.015671 containerd[1818]: time="2025-01-29T12:02:48.015552511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:02:48.076291 containerd[1818]: time="2025-01-29T12:02:48.076244510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjbtv,Uid:b8235b86-2d3c-40e3-bcdb-97985970fdbe,Namespace:calico-system,Attempt:1,} returns sandbox id \"383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb\""
Jan 29 12:02:48.078774 containerd[1818]: time="2025-01-29T12:02:48.078556463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 29 12:02:48.095604 systemd-networkd[1392]: calib40e5cbce4b: Gained IPv6LL
Jan 29 12:02:49.627717 containerd[1818]: time="2025-01-29T12:02:49.625360626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:02:49.627717 containerd[1818]: time="2025-01-29T12:02:49.627155268Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:02:49.630481 containerd[1818]: time="2025-01-29T12:02:49.628111390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 29 12:02:49.631798 containerd[1818]: time="2025-01-29T12:02:49.631045358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:02:49.632053 containerd[1818]: time="2025-01-29T12:02:49.632014580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.553423516s"
Jan 29 12:02:49.632186 containerd[1818]: time="2025-01-29T12:02:49.632052481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 29 12:02:49.637374 containerd[1818]: time="2025-01-29T12:02:49.637333504Z" level=info msg="CreateContainer within sandbox \"383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 29 12:02:49.689954 containerd[1818]: time="2025-01-29T12:02:49.689718821Z" level=info msg="CreateContainer within sandbox \"383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"001f06f9869592dd703235647dd1dbc0d3b53744f71b11cf27895a936c520a73\""
Jan 29 12:02:49.691573 containerd[1818]: time="2025-01-29T12:02:49.691498862Z" level=info msg="StartContainer for \"001f06f9869592dd703235647dd1dbc0d3b53744f71b11cf27895a936c520a73\""
Jan 29 12:02:49.740537 systemd[1]: run-containerd-runc-k8s.io-001f06f9869592dd703235647dd1dbc0d3b53744f71b11cf27895a936c520a73-runc.j1vFPm.mount: Deactivated successfully.
Jan 29 12:02:49.758809 systemd-networkd[1392]: calidbc91f44e1f: Gained IPv6LL
Jan 29 12:02:49.778374 containerd[1818]: time="2025-01-29T12:02:49.778331780Z" level=info msg="StartContainer for \"001f06f9869592dd703235647dd1dbc0d3b53744f71b11cf27895a936c520a73\" returns successfully"
Jan 29 12:02:49.781012 containerd[1818]: time="2025-01-29T12:02:49.780790737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 29 12:02:51.316975 containerd[1818]: time="2025-01-29T12:02:51.316913124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:02:51.320928 containerd[1818]: time="2025-01-29T12:02:51.320797714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 29 12:02:51.324635 containerd[1818]: time="2025-01-29T12:02:51.324299495Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:02:51.328738 containerd[1818]: time="2025-01-29T12:02:51.328699998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:02:51.329368 containerd[1818]: time="2025-01-29T12:02:51.329329912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.548488874s"
Jan 29 12:02:51.329458 containerd[1818]:
time="2025-01-29T12:02:51.329375313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 12:02:51.333640 containerd[1818]: time="2025-01-29T12:02:51.333602411Z" level=info msg="CreateContainer within sandbox \"383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:02:51.380085 containerd[1818]: time="2025-01-29T12:02:51.380031790Z" level=info msg="CreateContainer within sandbox \"383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2c854b446e213c07b8692b604017533e5a41ed5788b970e4049a0bbbe6af2dc0\"" Jan 29 12:02:51.381629 containerd[1818]: time="2025-01-29T12:02:51.381536525Z" level=info msg="StartContainer for \"2c854b446e213c07b8692b604017533e5a41ed5788b970e4049a0bbbe6af2dc0\"" Jan 29 12:02:51.463514 containerd[1818]: time="2025-01-29T12:02:51.463448828Z" level=info msg="StartContainer for \"2c854b446e213c07b8692b604017533e5a41ed5788b970e4049a0bbbe6af2dc0\" returns successfully" Jan 29 12:02:51.858908 kubelet[3458]: I0129 12:02:51.858869 3458 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:02:51.858908 kubelet[3458]: I0129 12:02:51.858906 3458 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:03:03.617045 kubelet[3458]: I0129 12:03:03.614238 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qjbtv" podStartSLOduration=88.361788993 podStartE2EDuration="1m31.61421598s" podCreationTimestamp="2025-01-29 12:01:32 +0000 UTC" 
firstStartedPulling="2025-01-29 12:02:48.078113753 +0000 UTC m=+98.070429690" lastFinishedPulling="2025-01-29 12:02:51.33054064 +0000 UTC m=+101.322856677" observedRunningTime="2025-01-29 12:02:52.175445069 +0000 UTC m=+102.167761006" watchObservedRunningTime="2025-01-29 12:03:03.61421598 +0000 UTC m=+113.606531917" Jan 29 12:03:10.683792 containerd[1818]: time="2025-01-29T12:03:10.683560653Z" level=info msg="StopPodSandbox for \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\"" Jan 29 12:03:10.683792 containerd[1818]: time="2025-01-29T12:03:10.683735357Z" level=info msg="TearDown network for sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" successfully" Jan 29 12:03:10.683792 containerd[1818]: time="2025-01-29T12:03:10.683753857Z" level=info msg="StopPodSandbox for \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" returns successfully" Jan 29 12:03:10.684943 containerd[1818]: time="2025-01-29T12:03:10.684902284Z" level=info msg="RemovePodSandbox for \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\"" Jan 29 12:03:10.684943 containerd[1818]: time="2025-01-29T12:03:10.684939484Z" level=info msg="Forcibly stopping sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\"" Jan 29 12:03:10.685104 containerd[1818]: time="2025-01-29T12:03:10.685019086Z" level=info msg="TearDown network for sandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" successfully" Jan 29 12:03:10.708363 containerd[1818]: time="2025-01-29T12:03:10.708288620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:03:10.708576 containerd[1818]: time="2025-01-29T12:03:10.708395823Z" level=info msg="RemovePodSandbox \"cddacf6ae58e6121eba487770b9c3736f25d8c247db2d5b75edf0c2e13dbf088\" returns successfully" Jan 29 12:03:10.709497 containerd[1818]: time="2025-01-29T12:03:10.709108239Z" level=info msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\"" Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.777 [WARNING][6629] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0", GenerateName:"calico-apiserver-8cd77c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c4f2484-ef71-4219-a544-c54173b13527", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd77c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee", Pod:"calico-apiserver-8cd77c95d-mx8dw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidec73df0337", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.781 [INFO][6629] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.781 [INFO][6629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" iface="eth0" netns="" Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.781 [INFO][6629] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.781 [INFO][6629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.828 [INFO][6636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" HandleID="k8s-pod-network.3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.829 [INFO][6636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.829 [INFO][6636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.834 [WARNING][6636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" HandleID="k8s-pod-network.3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.834 [INFO][6636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" HandleID="k8s-pod-network.3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.836 [INFO][6636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:10.838921 containerd[1818]: 2025-01-29 12:03:10.837 [INFO][6629] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:03:10.839838 containerd[1818]: time="2025-01-29T12:03:10.838969820Z" level=info msg="TearDown network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\" successfully" Jan 29 12:03:10.839838 containerd[1818]: time="2025-01-29T12:03:10.839007921Z" level=info msg="StopPodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\" returns successfully" Jan 29 12:03:10.839838 containerd[1818]: time="2025-01-29T12:03:10.839643336Z" level=info msg="RemovePodSandbox for \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\"" Jan 29 12:03:10.839838 containerd[1818]: time="2025-01-29T12:03:10.839690537Z" level=info msg="Forcibly stopping sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\"" Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.886 [WARNING][6654] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0", GenerateName:"calico-apiserver-8cd77c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c4f2484-ef71-4219-a544-c54173b13527", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd77c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"ada97f90d243c6d71d83c482b6cc680d744a63bbca58d579440cd40aa18e92ee", Pod:"calico-apiserver-8cd77c95d-mx8dw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidec73df0337", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.887 [INFO][6654] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.887 [INFO][6654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" iface="eth0" netns="" Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.887 [INFO][6654] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.887 [INFO][6654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.913 [INFO][6660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" HandleID="k8s-pod-network.3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.913 [INFO][6660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.913 [INFO][6660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.924 [WARNING][6660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" HandleID="k8s-pod-network.3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.924 [INFO][6660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" HandleID="k8s-pod-network.3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--mx8dw-eth0" Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.926 [INFO][6660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:10.929502 containerd[1818]: 2025-01-29 12:03:10.928 [INFO][6654] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f" Jan 29 12:03:10.930180 containerd[1818]: time="2025-01-29T12:03:10.929572700Z" level=info msg="TearDown network for sandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\" successfully" Jan 29 12:03:10.938712 containerd[1818]: time="2025-01-29T12:03:10.938464404Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:03:10.938712 containerd[1818]: time="2025-01-29T12:03:10.938564707Z" level=info msg="RemovePodSandbox \"3eb432d79381a33a3705992f1d287646aedd64950a619b86dd8ad4ea1990252f\" returns successfully" Jan 29 12:03:10.941126 containerd[1818]: time="2025-01-29T12:03:10.939436527Z" level=info msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\"" Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:10.981 [WARNING][6678] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0", GenerateName:"calico-kube-controllers-6564b8cd7d-", Namespace:"calico-system", SelfLink:"", UID:"b50dd55e-6299-4267-acbd-6f981eec9b80", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6564b8cd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7", Pod:"calico-kube-controllers-6564b8cd7d-xflv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d1c60c22f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:10.981 [INFO][6678] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:10.981 [INFO][6678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" iface="eth0" netns="" Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:10.981 [INFO][6678] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:10.981 [INFO][6678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:11.008 [INFO][6685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" HandleID="k8s-pod-network.325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:11.009 [INFO][6685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:11.009 [INFO][6685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:11.015 [WARNING][6685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" HandleID="k8s-pod-network.325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:11.016 [INFO][6685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" HandleID="k8s-pod-network.325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:11.017 [INFO][6685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:11.020120 containerd[1818]: 2025-01-29 12:03:11.018 [INFO][6678] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:03:11.020847 containerd[1818]: time="2025-01-29T12:03:11.020199781Z" level=info msg="TearDown network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\" successfully" Jan 29 12:03:11.020847 containerd[1818]: time="2025-01-29T12:03:11.020236681Z" level=info msg="StopPodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\" returns successfully" Jan 29 12:03:11.021405 containerd[1818]: time="2025-01-29T12:03:11.020987899Z" level=info msg="RemovePodSandbox for \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\"" Jan 29 12:03:11.021405 containerd[1818]: time="2025-01-29T12:03:11.021028000Z" level=info msg="Forcibly stopping sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\"" Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.059 [WARNING][6704] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0", GenerateName:"calico-kube-controllers-6564b8cd7d-", Namespace:"calico-system", SelfLink:"", UID:"b50dd55e-6299-4267-acbd-6f981eec9b80", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6564b8cd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"eda3a695225ea6cae0223eae18f40ebb51046574e65938d9786b75b3c8c084b7", Pod:"calico-kube-controllers-6564b8cd7d-xflv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d1c60c22f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.059 [INFO][6704] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.059 [INFO][6704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" iface="eth0" netns="" Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.059 [INFO][6704] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.059 [INFO][6704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.092 [INFO][6710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" HandleID="k8s-pod-network.325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.093 [INFO][6710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.093 [INFO][6710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.101 [WARNING][6710] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" HandleID="k8s-pod-network.325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.101 [INFO][6710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" HandleID="k8s-pod-network.325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--kube--controllers--6564b8cd7d--xflv7-eth0" Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.105 [INFO][6710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:11.108321 containerd[1818]: 2025-01-29 12:03:11.107 [INFO][6704] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f" Jan 29 12:03:11.109097 containerd[1818]: time="2025-01-29T12:03:11.108375505Z" level=info msg="TearDown network for sandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\" successfully" Jan 29 12:03:11.118196 containerd[1818]: time="2025-01-29T12:03:11.118139229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:03:11.118369 containerd[1818]: time="2025-01-29T12:03:11.118234031Z" level=info msg="RemovePodSandbox \"325d10cb344f3874def8c342382d7219585a7090201a4e385c9e3534c9c5411f\" returns successfully" Jan 29 12:03:11.119402 containerd[1818]: time="2025-01-29T12:03:11.118956548Z" level=info msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\"" Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.239 [WARNING][6729] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0", GenerateName:"calico-apiserver-8cd77c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"74643b96-e115-4d67-8115-6c9ce09d0502", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd77c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529", Pod:"calico-apiserver-8cd77c95d-fll92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c06ce66266", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.240 [INFO][6729] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.240 [INFO][6729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" iface="eth0" netns="" Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.240 [INFO][6729] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.240 [INFO][6729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.314 [INFO][6735] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" HandleID="k8s-pod-network.61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.315 [INFO][6735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.315 [INFO][6735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.326 [WARNING][6735] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" HandleID="k8s-pod-network.61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.327 [INFO][6735] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" HandleID="k8s-pod-network.61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.328 [INFO][6735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:11.331774 containerd[1818]: 2025-01-29 12:03:11.329 [INFO][6729] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:03:11.331774 containerd[1818]: time="2025-01-29T12:03:11.331471226Z" level=info msg="TearDown network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\" successfully" Jan 29 12:03:11.331774 containerd[1818]: time="2025-01-29T12:03:11.331504827Z" level=info msg="StopPodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\" returns successfully" Jan 29 12:03:11.333745 containerd[1818]: time="2025-01-29T12:03:11.333700977Z" level=info msg="RemovePodSandbox for \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\"" Jan 29 12:03:11.333938 containerd[1818]: time="2025-01-29T12:03:11.333844681Z" level=info msg="Forcibly stopping sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\"" Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.388 [WARNING][6753] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0", GenerateName:"calico-apiserver-8cd77c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"74643b96-e115-4d67-8115-6c9ce09d0502", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd77c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"56905e7b3498f3a65a3bc515f05732fc35cccb8bda446bbdfab60bf155be4529", Pod:"calico-apiserver-8cd77c95d-fll92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c06ce66266", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.388 [INFO][6753] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.388 [INFO][6753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" iface="eth0" netns="" Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.388 [INFO][6753] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.388 [INFO][6753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.412 [INFO][6759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" HandleID="k8s-pod-network.61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.412 [INFO][6759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.413 [INFO][6759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.420 [WARNING][6759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" HandleID="k8s-pod-network.61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.420 [INFO][6759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" HandleID="k8s-pod-network.61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Workload="ci--4081.3.0--a--b5939ece28-k8s-calico--apiserver--8cd77c95d--fll92-eth0" Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.423 [INFO][6759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:11.425799 containerd[1818]: 2025-01-29 12:03:11.424 [INFO][6753] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864" Jan 29 12:03:11.426533 containerd[1818]: time="2025-01-29T12:03:11.425855393Z" level=info msg="TearDown network for sandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\" successfully" Jan 29 12:03:11.434869 containerd[1818]: time="2025-01-29T12:03:11.434674795Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:03:11.434869 containerd[1818]: time="2025-01-29T12:03:11.434760397Z" level=info msg="RemovePodSandbox \"61bd77e42d5595ab262ec8a0d0239104f7dc7e5d19e2e8911ddaf21c37055864\" returns successfully" Jan 29 12:03:11.435360 containerd[1818]: time="2025-01-29T12:03:11.435322510Z" level=info msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\"" Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.479 [WARNING][6778] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8235b86-2d3c-40e3-bcdb-97985970fdbe", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb", Pod:"csi-node-driver-qjbtv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbc91f44e1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.480 [INFO][6778] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.480 [INFO][6778] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" iface="eth0" netns="" Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.480 [INFO][6778] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.480 [INFO][6778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.504 [INFO][6784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" HandleID="k8s-pod-network.94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0" Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.504 [INFO][6784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.504 [INFO][6784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.513 [WARNING][6784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" HandleID="k8s-pod-network.94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0" Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.513 [INFO][6784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" HandleID="k8s-pod-network.94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0" Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.515 [INFO][6784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:11.517770 containerd[1818]: 2025-01-29 12:03:11.516 [INFO][6778] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:03:11.521761 containerd[1818]: time="2025-01-29T12:03:11.517826704Z" level=info msg="TearDown network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\" successfully" Jan 29 12:03:11.521761 containerd[1818]: time="2025-01-29T12:03:11.517861805Z" level=info msg="StopPodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\" returns successfully" Jan 29 12:03:11.521761 containerd[1818]: time="2025-01-29T12:03:11.518543421Z" level=info msg="RemovePodSandbox for \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\"" Jan 29 12:03:11.521761 containerd[1818]: time="2025-01-29T12:03:11.518581921Z" level=info msg="Forcibly stopping sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\"" Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.557 [WARNING][6802] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8235b86-2d3c-40e3-bcdb-97985970fdbe", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"383f3685a2b456c0246b44dc97489a36bb38b8420b2f64ee8d4e9d6c37d505bb", Pod:"csi-node-driver-qjbtv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbc91f44e1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.557 [INFO][6802] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.557 [INFO][6802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" iface="eth0" netns="" Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.557 [INFO][6802] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.557 [INFO][6802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.603 [INFO][6809] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" HandleID="k8s-pod-network.94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0" Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.604 [INFO][6809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.604 [INFO][6809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.611 [WARNING][6809] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" HandleID="k8s-pod-network.94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0" Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.611 [INFO][6809] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" HandleID="k8s-pod-network.94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Workload="ci--4081.3.0--a--b5939ece28-k8s-csi--node--driver--qjbtv-eth0" Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.612 [INFO][6809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:11.614896 containerd[1818]: 2025-01-29 12:03:11.613 [INFO][6802] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4" Jan 29 12:03:11.614896 containerd[1818]: time="2025-01-29T12:03:11.614870232Z" level=info msg="TearDown network for sandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\" successfully" Jan 29 12:03:11.623199 containerd[1818]: time="2025-01-29T12:03:11.622104298Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:03:11.623199 containerd[1818]: time="2025-01-29T12:03:11.622238801Z" level=info msg="RemovePodSandbox \"94e6e6c04e2c343fbe6a83f79a1b0780635f1458a609f2920c9582b95e3145f4\" returns successfully" Jan 29 12:03:11.623199 containerd[1818]: time="2025-01-29T12:03:11.622854215Z" level=info msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\"" Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.662 [WARNING][6828] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8277e1b7-9bd3-41a3-9974-0a547b3a9790", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35", Pod:"coredns-7db6d8ff4d-r59h8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib40e5cbce4b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.663 [INFO][6828] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.663 [INFO][6828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" iface="eth0" netns="" Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.663 [INFO][6828] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.663 [INFO][6828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.688 [INFO][6834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" HandleID="k8s-pod-network.5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0" Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.688 [INFO][6834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.688 [INFO][6834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.694 [WARNING][6834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" HandleID="k8s-pod-network.5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0" Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.694 [INFO][6834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" HandleID="k8s-pod-network.5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0" Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.697 [INFO][6834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:11.699870 containerd[1818]: 2025-01-29 12:03:11.698 [INFO][6828] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:03:11.699870 containerd[1818]: time="2025-01-29T12:03:11.699657678Z" level=info msg="TearDown network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\" successfully" Jan 29 12:03:11.699870 containerd[1818]: time="2025-01-29T12:03:11.699683379Z" level=info msg="StopPodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\" returns successfully" Jan 29 12:03:11.701319 containerd[1818]: time="2025-01-29T12:03:11.700971708Z" level=info msg="RemovePodSandbox for \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\"" Jan 29 12:03:11.701319 containerd[1818]: time="2025-01-29T12:03:11.701011409Z" level=info msg="Forcibly stopping sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\"" Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.743 [WARNING][6852] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8277e1b7-9bd3-41a3-9974-0a547b3a9790", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"ada7db4d8c5d22d0142460acb11e1ada886f2ebfae9cc339cdc15f4a7325cd35", Pod:"coredns-7db6d8ff4d-r59h8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib40e5cbce4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.744 [INFO][6852] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.744 [INFO][6852] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" iface="eth0" netns="" Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.744 [INFO][6852] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.744 [INFO][6852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.772 [INFO][6859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" HandleID="k8s-pod-network.5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0" Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.773 [INFO][6859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.773 [INFO][6859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.778 [WARNING][6859] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" HandleID="k8s-pod-network.5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0" Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.779 [INFO][6859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" HandleID="k8s-pod-network.5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--r59h8-eth0" Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.780 [INFO][6859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:11.783501 containerd[1818]: 2025-01-29 12:03:11.781 [INFO][6852] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd" Jan 29 12:03:11.783501 containerd[1818]: time="2025-01-29T12:03:11.782475779Z" level=info msg="TearDown network for sandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\" successfully" Jan 29 12:03:11.790562 containerd[1818]: time="2025-01-29T12:03:11.790504564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:03:11.790729 containerd[1818]: time="2025-01-29T12:03:11.790594166Z" level=info msg="RemovePodSandbox \"5af00d25b67ea57e5695b7ca503533134c9ddcb4edff939122bd8383a02715fd\" returns successfully" Jan 29 12:03:11.791340 containerd[1818]: time="2025-01-29T12:03:11.791307582Z" level=info msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\"" Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.830 [WARNING][6877] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b50e5645-b858-440a-ac13-91ab3ef24687", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5", Pod:"coredns-7db6d8ff4d-wqscz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f5b98f8786", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.830 [INFO][6877] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.830 [INFO][6877] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" iface="eth0" netns="" Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.830 [INFO][6877] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.830 [INFO][6877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.851 [INFO][6883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" HandleID="k8s-pod-network.99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0" Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.852 [INFO][6883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.852 [INFO][6883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.860 [WARNING][6883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" HandleID="k8s-pod-network.99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0"
Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.860 [INFO][6883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" HandleID="k8s-pod-network.99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0"
Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.862 [INFO][6883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:03:11.864772 containerd[1818]: 2025-01-29 12:03:11.863 [INFO][6877] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c"
Jan 29 12:03:11.864772 containerd[1818]: time="2025-01-29T12:03:11.864624665Z" level=info msg="TearDown network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\" successfully"
Jan 29 12:03:11.864772 containerd[1818]: time="2025-01-29T12:03:11.864653266Z" level=info msg="StopPodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\" returns successfully"
Jan 29 12:03:11.866256 containerd[1818]: time="2025-01-29T12:03:11.865795992Z" level=info msg="RemovePodSandbox for \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\""
Jan 29 12:03:11.866256 containerd[1818]: time="2025-01-29T12:03:11.865826493Z" level=info msg="Forcibly stopping sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\""
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.906 [WARNING][6902] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b50e5645-b858-440a-ac13-91ab3ef24687", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b5939ece28", ContainerID:"e888660d59dbde049e1e72f1d51a4acd16ae21e800d6e9dcf1fd63967925aee5", Pod:"coredns-7db6d8ff4d-wqscz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f5b98f8786", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.906 [INFO][6902] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c"
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.906 [INFO][6902] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" iface="eth0" netns=""
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.906 [INFO][6902] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c"
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.906 [INFO][6902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c"
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.932 [INFO][6908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" HandleID="k8s-pod-network.99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0"
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.932 [INFO][6908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.933 [INFO][6908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.940 [WARNING][6908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" HandleID="k8s-pod-network.99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0"
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.940 [INFO][6908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" HandleID="k8s-pod-network.99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c" Workload="ci--4081.3.0--a--b5939ece28-k8s-coredns--7db6d8ff4d--wqscz-eth0"
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.942 [INFO][6908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:03:11.946463 containerd[1818]: 2025-01-29 12:03:11.944 [INFO][6902] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c"
Jan 29 12:03:11.946463 containerd[1818]: time="2025-01-29T12:03:11.945525322Z" level=info msg="TearDown network for sandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\" successfully"
Jan 29 12:03:11.953938 containerd[1818]: time="2025-01-29T12:03:11.953872414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:03:11.954154 containerd[1818]: time="2025-01-29T12:03:11.954000217Z" level=info msg="RemovePodSandbox \"99e5d0e66def4c986d6ec3695195cff0d31328808cecd2e446b57bf012939e8c\" returns successfully"
Jan 29 12:03:16.348405 systemd[1]: run-containerd-runc-k8s.io-1a12ee4cd89e5499d780d16f4fc4eb20b0ead24fba32f470981e516c9a085f70-runc.mQuqwq.mount: Deactivated successfully.
Jan 29 12:03:33.544709 systemd[1]: run-containerd-runc-k8s.io-842585670ff3038206dfb3a21a456cba592d60795d43420f997839d570129ce6-runc.x7z58d.mount: Deactivated successfully.
Jan 29 12:03:40.522173 systemd[1]: Started sshd@7-10.200.8.4:22-10.200.16.10:60076.service - OpenSSH per-connection server daemon (10.200.16.10:60076).
Jan 29 12:03:41.173963 sshd[6990]: Accepted publickey for core from 10.200.16.10 port 60076 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:03:41.175997 sshd[6990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:41.181170 systemd-logind[1789]: New session 10 of user core.
Jan 29 12:03:41.186803 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 12:03:41.711838 sshd[6990]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:41.718948 systemd-logind[1789]: Session 10 logged out. Waiting for processes to exit.
Jan 29 12:03:41.719202 systemd[1]: sshd@7-10.200.8.4:22-10.200.16.10:60076.service: Deactivated successfully.
Jan 29 12:03:41.726133 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 12:03:41.728129 systemd-logind[1789]: Removed session 10.
Jan 29 12:03:46.823842 systemd[1]: Started sshd@8-10.200.8.4:22-10.200.16.10:51542.service - OpenSSH per-connection server daemon (10.200.16.10:51542).
Jan 29 12:03:47.470087 sshd[7024]: Accepted publickey for core from 10.200.16.10 port 51542 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:03:47.471993 sshd[7024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:47.478506 systemd-logind[1789]: New session 11 of user core.
Jan 29 12:03:47.484131 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 12:03:48.000778 sshd[7024]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:48.005074 systemd[1]: sshd@8-10.200.8.4:22-10.200.16.10:51542.service: Deactivated successfully.
Jan 29 12:03:48.012081 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 12:03:48.012142 systemd-logind[1789]: Session 11 logged out. Waiting for processes to exit.
Jan 29 12:03:48.013860 systemd-logind[1789]: Removed session 11.
Jan 29 12:03:53.114906 systemd[1]: Started sshd@9-10.200.8.4:22-10.200.16.10:51548.service - OpenSSH per-connection server daemon (10.200.16.10:51548).
Jan 29 12:03:53.758688 sshd[7039]: Accepted publickey for core from 10.200.16.10 port 51548 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:03:53.760594 sshd[7039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:53.766476 systemd-logind[1789]: New session 12 of user core.
Jan 29 12:03:53.772800 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 12:03:54.282174 sshd[7039]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:54.286026 systemd[1]: sshd@9-10.200.8.4:22-10.200.16.10:51548.service: Deactivated successfully.
Jan 29 12:03:54.290116 systemd-logind[1789]: Session 12 logged out. Waiting for processes to exit.
Jan 29 12:03:54.293565 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 12:03:54.299153 systemd-logind[1789]: Removed session 12.
Jan 29 12:03:54.402227 systemd[1]: Started sshd@10-10.200.8.4:22-10.200.16.10:51552.service - OpenSSH per-connection server daemon (10.200.16.10:51552).
Jan 29 12:03:55.062121 sshd[7053]: Accepted publickey for core from 10.200.16.10 port 51552 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:03:55.063783 sshd[7053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:55.069821 systemd-logind[1789]: New session 13 of user core.
Jan 29 12:03:55.072941 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 12:03:55.632793 sshd[7053]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:55.637326 systemd[1]: sshd@10-10.200.8.4:22-10.200.16.10:51552.service: Deactivated successfully.
Jan 29 12:03:55.642867 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 12:03:55.643840 systemd-logind[1789]: Session 13 logged out. Waiting for processes to exit.
Jan 29 12:03:55.644949 systemd-logind[1789]: Removed session 13.
Jan 29 12:03:55.744822 systemd[1]: Started sshd@11-10.200.8.4:22-10.200.16.10:51568.service - OpenSSH per-connection server daemon (10.200.16.10:51568).
Jan 29 12:03:56.400225 sshd[7067]: Accepted publickey for core from 10.200.16.10 port 51568 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:03:56.402086 sshd[7067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:56.407875 systemd-logind[1789]: New session 14 of user core.
Jan 29 12:03:56.414736 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 12:03:56.922666 sshd[7067]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:56.927099 systemd-logind[1789]: Session 14 logged out. Waiting for processes to exit.
Jan 29 12:03:56.929050 systemd[1]: sshd@11-10.200.8.4:22-10.200.16.10:51568.service: Deactivated successfully.
Jan 29 12:03:56.934525 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 12:03:56.935830 systemd-logind[1789]: Removed session 14.
Jan 29 12:04:02.034200 systemd[1]: Started sshd@12-10.200.8.4:22-10.200.16.10:43112.service - OpenSSH per-connection server daemon (10.200.16.10:43112).
Jan 29 12:04:02.689958 sshd[7087]: Accepted publickey for core from 10.200.16.10 port 43112 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:02.691753 sshd[7087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:02.696950 systemd-logind[1789]: New session 15 of user core.
Jan 29 12:04:02.702854 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 12:04:03.218781 sshd[7087]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:03.222872 systemd[1]: sshd@12-10.200.8.4:22-10.200.16.10:43112.service: Deactivated successfully.
Jan 29 12:04:03.227679 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 12:04:03.229340 systemd-logind[1789]: Session 15 logged out. Waiting for processes to exit.
Jan 29 12:04:03.230374 systemd-logind[1789]: Removed session 15.
Jan 29 12:04:08.327755 systemd[1]: Started sshd@13-10.200.8.4:22-10.200.16.10:49270.service - OpenSSH per-connection server daemon (10.200.16.10:49270).
Jan 29 12:04:08.976548 sshd[7129]: Accepted publickey for core from 10.200.16.10 port 49270 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:08.978390 sshd[7129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:08.985505 systemd-logind[1789]: New session 16 of user core.
Jan 29 12:04:08.992295 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 12:04:09.505104 sshd[7129]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:09.510808 systemd[1]: sshd@13-10.200.8.4:22-10.200.16.10:49270.service: Deactivated successfully.
Jan 29 12:04:09.517937 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 12:04:09.519047 systemd-logind[1789]: Session 16 logged out. Waiting for processes to exit.
Jan 29 12:04:09.520345 systemd-logind[1789]: Removed session 16.
Jan 29 12:04:14.618747 systemd[1]: Started sshd@14-10.200.8.4:22-10.200.16.10:49278.service - OpenSSH per-connection server daemon (10.200.16.10:49278).
Jan 29 12:04:15.276748 sshd[7150]: Accepted publickey for core from 10.200.16.10 port 49278 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:15.278600 sshd[7150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:15.285480 systemd-logind[1789]: New session 17 of user core.
Jan 29 12:04:15.289834 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 12:04:15.831580 sshd[7150]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:15.836414 systemd[1]: sshd@14-10.200.8.4:22-10.200.16.10:49278.service: Deactivated successfully.
Jan 29 12:04:15.843121 systemd-logind[1789]: Session 17 logged out. Waiting for processes to exit.
Jan 29 12:04:15.847216 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 12:04:15.849660 systemd-logind[1789]: Removed session 17.
Jan 29 12:04:15.944171 systemd[1]: Started sshd@15-10.200.8.4:22-10.200.16.10:40518.service - OpenSSH per-connection server daemon (10.200.16.10:40518).
Jan 29 12:04:16.587557 sshd[7176]: Accepted publickey for core from 10.200.16.10 port 40518 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:16.589719 sshd[7176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:16.594558 systemd-logind[1789]: New session 18 of user core.
Jan 29 12:04:16.600706 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 12:04:17.197282 sshd[7176]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:17.202978 systemd[1]: sshd@15-10.200.8.4:22-10.200.16.10:40518.service: Deactivated successfully.
Jan 29 12:04:17.207276 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 12:04:17.208258 systemd-logind[1789]: Session 18 logged out. Waiting for processes to exit.
Jan 29 12:04:17.209309 systemd-logind[1789]: Removed session 18.
Jan 29 12:04:17.317837 systemd[1]: Started sshd@16-10.200.8.4:22-10.200.16.10:40524.service - OpenSSH per-connection server daemon (10.200.16.10:40524).
Jan 29 12:04:17.968530 sshd[7207]: Accepted publickey for core from 10.200.16.10 port 40524 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:17.971899 sshd[7207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:17.977500 systemd-logind[1789]: New session 19 of user core.
Jan 29 12:04:17.983096 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 12:04:20.348734 sshd[7207]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:20.355085 systemd[1]: sshd@16-10.200.8.4:22-10.200.16.10:40524.service: Deactivated successfully.
Jan 29 12:04:20.355188 systemd-logind[1789]: Session 19 logged out. Waiting for processes to exit.
Jan 29 12:04:20.359587 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 12:04:20.360816 systemd-logind[1789]: Removed session 19.
Jan 29 12:04:20.462259 systemd[1]: Started sshd@17-10.200.8.4:22-10.200.16.10:40528.service - OpenSSH per-connection server daemon (10.200.16.10:40528).
Jan 29 12:04:21.106700 sshd[7226]: Accepted publickey for core from 10.200.16.10 port 40528 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:21.109702 sshd[7226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:21.118054 systemd-logind[1789]: New session 20 of user core.
Jan 29 12:04:21.122547 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 12:04:21.779225 sshd[7226]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:21.784718 systemd[1]: sshd@17-10.200.8.4:22-10.200.16.10:40528.service: Deactivated successfully.
Jan 29 12:04:21.793837 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 12:04:21.795103 systemd-logind[1789]: Session 20 logged out. Waiting for processes to exit.
Jan 29 12:04:21.796371 systemd-logind[1789]: Removed session 20.
Jan 29 12:04:21.897728 systemd[1]: Started sshd@18-10.200.8.4:22-10.200.16.10:40544.service - OpenSSH per-connection server daemon (10.200.16.10:40544).
Jan 29 12:04:22.548150 sshd[7239]: Accepted publickey for core from 10.200.16.10 port 40544 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:22.550338 sshd[7239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:22.555934 systemd-logind[1789]: New session 21 of user core.
Jan 29 12:04:22.560963 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 12:04:23.087283 sshd[7239]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:23.091857 systemd[1]: sshd@18-10.200.8.4:22-10.200.16.10:40544.service: Deactivated successfully.
Jan 29 12:04:23.100207 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 12:04:23.103304 systemd-logind[1789]: Session 21 logged out. Waiting for processes to exit.
Jan 29 12:04:23.107950 systemd-logind[1789]: Removed session 21.
Jan 29 12:04:28.202854 systemd[1]: Started sshd@19-10.200.8.4:22-10.200.16.10:58668.service - OpenSSH per-connection server daemon (10.200.16.10:58668).
Jan 29 12:04:28.862559 sshd[7255]: Accepted publickey for core from 10.200.16.10 port 58668 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:28.868761 sshd[7255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:28.874161 systemd-logind[1789]: New session 22 of user core.
Jan 29 12:04:28.880730 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 12:04:29.392998 sshd[7255]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:29.397329 systemd[1]: sshd@19-10.200.8.4:22-10.200.16.10:58668.service: Deactivated successfully.
Jan 29 12:04:29.398854 systemd-logind[1789]: Session 22 logged out. Waiting for processes to exit.
Jan 29 12:04:29.404363 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 12:04:29.407472 systemd-logind[1789]: Removed session 22.
Jan 29 12:04:34.507134 systemd[1]: Started sshd@20-10.200.8.4:22-10.200.16.10:58682.service - OpenSSH per-connection server daemon (10.200.16.10:58682).
Jan 29 12:04:35.161107 sshd[7293]: Accepted publickey for core from 10.200.16.10 port 58682 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:35.163042 sshd[7293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:35.170510 systemd-logind[1789]: New session 23 of user core.
Jan 29 12:04:35.178238 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 12:04:35.687703 sshd[7293]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:35.693268 systemd[1]: sshd@20-10.200.8.4:22-10.200.16.10:58682.service: Deactivated successfully.
Jan 29 12:04:35.693554 systemd-logind[1789]: Session 23 logged out. Waiting for processes to exit.
Jan 29 12:04:35.700584 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 12:04:35.701836 systemd-logind[1789]: Removed session 23.
Jan 29 12:04:40.800832 systemd[1]: Started sshd@21-10.200.8.4:22-10.200.16.10:47866.service - OpenSSH per-connection server daemon (10.200.16.10:47866).
Jan 29 12:04:41.444582 sshd[7327]: Accepted publickey for core from 10.200.16.10 port 47866 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:41.446896 sshd[7327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:41.452368 systemd-logind[1789]: New session 24 of user core.
Jan 29 12:04:41.457736 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 12:04:41.973140 sshd[7327]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:41.977636 systemd[1]: sshd@21-10.200.8.4:22-10.200.16.10:47866.service: Deactivated successfully.
Jan 29 12:04:41.985191 systemd-logind[1789]: Session 24 logged out. Waiting for processes to exit.
Jan 29 12:04:41.986207 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 12:04:41.988042 systemd-logind[1789]: Removed session 24.
Jan 29 12:04:47.085182 systemd[1]: Started sshd@22-10.200.8.4:22-10.200.16.10:35164.service - OpenSSH per-connection server daemon (10.200.16.10:35164).
Jan 29 12:04:47.730577 sshd[7360]: Accepted publickey for core from 10.200.16.10 port 35164 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:47.732212 sshd[7360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:47.741647 systemd-logind[1789]: New session 25 of user core.
Jan 29 12:04:47.748966 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 12:04:48.267025 sshd[7360]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:48.272709 systemd[1]: sshd@22-10.200.8.4:22-10.200.16.10:35164.service: Deactivated successfully.
Jan 29 12:04:48.278675 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 12:04:48.279695 systemd-logind[1789]: Session 25 logged out. Waiting for processes to exit.
Jan 29 12:04:48.281305 systemd-logind[1789]: Removed session 25.
Jan 29 12:04:53.378751 systemd[1]: Started sshd@23-10.200.8.4:22-10.200.16.10:35180.service - OpenSSH per-connection server daemon (10.200.16.10:35180).
Jan 29 12:04:54.029132 sshd[7377]: Accepted publickey for core from 10.200.16.10 port 35180 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:04:54.030979 sshd[7377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:54.036298 systemd-logind[1789]: New session 26 of user core.
Jan 29 12:04:54.041309 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 12:04:54.564089 sshd[7377]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:54.568537 systemd[1]: sshd@23-10.200.8.4:22-10.200.16.10:35180.service: Deactivated successfully.
Jan 29 12:04:54.575664 systemd-logind[1789]: Session 26 logged out. Waiting for processes to exit.
Jan 29 12:04:54.576310 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 12:04:54.581232 systemd-logind[1789]: Removed session 26.
Jan 29 12:04:59.679154 systemd[1]: Started sshd@24-10.200.8.4:22-10.200.16.10:59002.service - OpenSSH per-connection server daemon (10.200.16.10:59002).
Jan 29 12:05:00.330919 sshd[7392]: Accepted publickey for core from 10.200.16.10 port 59002 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:05:00.332704 sshd[7392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:00.338465 systemd-logind[1789]: New session 27 of user core.
Jan 29 12:05:00.344771 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 12:05:00.850903 sshd[7392]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:00.856838 systemd[1]: sshd@24-10.200.8.4:22-10.200.16.10:59002.service: Deactivated successfully.
Jan 29 12:05:00.862599 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 12:05:00.863818 systemd-logind[1789]: Session 27 logged out. Waiting for processes to exit.
Jan 29 12:05:00.864982 systemd-logind[1789]: Removed session 27.
Jan 29 12:05:05.963273 systemd[1]: Started sshd@25-10.200.8.4:22-10.200.16.10:50958.service - OpenSSH per-connection server daemon (10.200.16.10:50958).
Jan 29 12:05:06.619779 sshd[7432]: Accepted publickey for core from 10.200.16.10 port 50958 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:05:06.620626 sshd[7432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:06.625045 systemd-logind[1789]: New session 28 of user core.
Jan 29 12:05:06.629687 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 12:05:07.136890 sshd[7432]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:07.141228 systemd[1]: sshd@25-10.200.8.4:22-10.200.16.10:50958.service: Deactivated successfully.
Jan 29 12:05:07.148269 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 12:05:07.149337 systemd-logind[1789]: Session 28 logged out. Waiting for processes to exit.
Jan 29 12:05:07.150532 systemd-logind[1789]: Removed session 28.
Jan 29 12:05:12.256760 systemd[1]: Started sshd@26-10.200.8.4:22-10.200.16.10:50974.service - OpenSSH per-connection server daemon (10.200.16.10:50974).
Jan 29 12:05:12.914312 sshd[7448]: Accepted publickey for core from 10.200.16.10 port 50974 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:05:12.916454 sshd[7448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:12.923009 systemd-logind[1789]: New session 29 of user core.
Jan 29 12:05:12.933876 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 29 12:05:13.435958 sshd[7448]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:13.439159 systemd[1]: sshd@26-10.200.8.4:22-10.200.16.10:50974.service: Deactivated successfully.
Jan 29 12:05:13.445706 systemd-logind[1789]: Session 29 logged out. Waiting for processes to exit.
Jan 29 12:05:13.446245 systemd[1]: session-29.scope: Deactivated successfully.
Jan 29 12:05:13.447624 systemd-logind[1789]: Removed session 29.