Jan 29 12:01:25.081556 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 12:01:25.081586 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:01:25.081597 kernel: BIOS-provided physical RAM map:
Jan 29 12:01:25.081606 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 12:01:25.081612 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 29 12:01:25.081618 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 29 12:01:25.081628 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 29 12:01:25.081637 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 29 12:01:25.081646 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 29 12:01:25.081653 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 29 12:01:25.081660 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 29 12:01:25.081668 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 29 12:01:25.081674 kernel: printk: bootconsole [earlyser0] enabled
Jan 29 12:01:25.081693 kernel: NX (Execute Disable) protection: active
Jan 29 12:01:25.081706 kernel: APIC: Static calls initialized
Jan 29 12:01:25.081721 kernel: efi: EFI v2.7 by Microsoft
Jan 29 12:01:25.081731 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98
Jan 29 12:01:25.081741 kernel: SMBIOS 3.1.0 present.
Jan 29 12:01:25.081749 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 29 12:01:25.081759 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 29 12:01:25.081768 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 29 12:01:25.081778 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 29 12:01:25.081785 kernel: Hyper-V: Nested features: 0x1e0101
Jan 29 12:01:25.081793 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 29 12:01:25.081804 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 29 12:01:25.081812 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 29 12:01:25.081821 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 29 12:01:25.081829 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 29 12:01:25.081840 kernel: tsc: Detected 2593.908 MHz processor
Jan 29 12:01:25.081848 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 12:01:25.081858 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 12:01:25.081865 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 29 12:01:25.081875 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 29 12:01:25.081885 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 12:01:25.081900 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 29 12:01:25.081907 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 29 12:01:25.081915 kernel: Using GB pages for direct mapping
Jan 29 12:01:25.081924 kernel: Secure boot disabled
Jan 29 12:01:25.081931 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:01:25.081941 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 29 12:01:25.081952 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 12:01:25.081965 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 12:01:25.081972 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 29 12:01:25.081983 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 29 12:01:25.081991 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 12:01:25.082001 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 12:01:25.082009 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 12:01:25.082022 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 12:01:25.082029 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 12:01:25.082040 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 12:01:25.082047 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 12:01:25.082057 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 29 12:01:25.082065 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 29 12:01:25.082074 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 29 12:01:25.082090 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 29 12:01:25.082103 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 29 12:01:25.082113 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 29 12:01:25.082124 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 29 12:01:25.082132 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 29 12:01:25.082142 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 29 12:01:25.082151 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 29 12:01:25.082160 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 12:01:25.082170 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 12:01:25.082177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 29 12:01:25.082190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 29 12:01:25.082197 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 29 12:01:25.082208 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 29 12:01:25.082216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 29 12:01:25.082226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 29 12:01:25.082234 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 29 12:01:25.082245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 29 12:01:25.082252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 29 12:01:25.082262 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 29 12:01:25.082273 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 29 12:01:25.082283 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 29 12:01:25.082291 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 29 12:01:25.082300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 29 12:01:25.082310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 29 12:01:25.082318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 29 12:01:25.082328 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 29 12:01:25.082336 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 29 12:01:25.082346 kernel: Zone ranges:
Jan 29 12:01:25.082356 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 12:01:25.082367 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 12:01:25.082374 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 29 12:01:25.082385 kernel: Movable zone start for each node
Jan 29 12:01:25.082392 kernel: Early memory node ranges
Jan 29 12:01:25.082403 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 29 12:01:25.082411 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 29 12:01:25.082420 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 29 12:01:25.082429 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 29 12:01:25.082447 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 29 12:01:25.082456 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 12:01:25.082466 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 29 12:01:25.082476 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 29 12:01:25.082486 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 29 12:01:25.082495 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 29 12:01:25.082504 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 29 12:01:25.082514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 12:01:25.082522 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 12:01:25.082535 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 29 12:01:25.082542 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 12:01:25.082553 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 29 12:01:25.082561 kernel: Booting paravirtualized kernel on Hyper-V
Jan 29 12:01:25.082568 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 12:01:25.082576 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 12:01:25.082584 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 12:01:25.082591 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 12:01:25.082599 kernel: pcpu-alloc: [0] 0 1
Jan 29 12:01:25.082608 kernel: Hyper-V: PV spinlocks enabled
Jan 29 12:01:25.082616 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 12:01:25.082624 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:01:25.082632 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:01:25.082639 kernel: random: crng init done
Jan 29 12:01:25.082647 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 29 12:01:25.082654 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 12:01:25.082662 kernel: Fallback order for Node 0: 0
Jan 29 12:01:25.082693 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 29 12:01:25.082711 kernel: Policy zone: Normal
Jan 29 12:01:25.082722 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:01:25.082731 kernel: software IO TLB: area num 2.
Jan 29 12:01:25.082743 kernel: Memory: 8077012K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 310188K reserved, 0K cma-reserved)
Jan 29 12:01:25.082751 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 12:01:25.082759 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 12:01:25.082767 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 12:01:25.082775 kernel: Dynamic Preempt: voluntary
Jan 29 12:01:25.082783 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:01:25.082798 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:01:25.082809 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 12:01:25.082820 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:01:25.082828 kernel: Rude variant of Tasks RCU enabled.
Jan 29 12:01:25.082836 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:01:25.082844 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:01:25.082857 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 12:01:25.082866 kernel: Using NULL legacy PIC
Jan 29 12:01:25.082875 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 29 12:01:25.082885 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:01:25.082893 kernel: Console: colour dummy device 80x25
Jan 29 12:01:25.082904 kernel: printk: console [tty1] enabled
Jan 29 12:01:25.082912 kernel: printk: console [ttyS0] enabled
Jan 29 12:01:25.082923 kernel: printk: bootconsole [earlyser0] disabled
Jan 29 12:01:25.082931 kernel: ACPI: Core revision 20230628
Jan 29 12:01:25.082942 kernel: Failed to register legacy timer interrupt
Jan 29 12:01:25.082953 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 12:01:25.082964 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 29 12:01:25.082972 kernel: Hyper-V: Using IPI hypercalls
Jan 29 12:01:25.082983 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 29 12:01:25.082991 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 29 12:01:25.083007 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 29 12:01:25.083017 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 29 12:01:25.083027 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 29 12:01:25.083038 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 29 12:01:25.083052 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908)
Jan 29 12:01:25.083060 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 29 12:01:25.083070 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 29 12:01:25.083078 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 12:01:25.083089 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 12:01:25.083097 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 12:01:25.083108 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 12:01:25.083116 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 29 12:01:25.083126 kernel: RETBleed: Vulnerable
Jan 29 12:01:25.083138 kernel: Speculative Store Bypass: Vulnerable
Jan 29 12:01:25.083147 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 12:01:25.083156 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 12:01:25.083164 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 29 12:01:25.083172 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 12:01:25.083181 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 12:01:25.083197 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 12:01:25.083205 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 29 12:01:25.083213 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 29 12:01:25.083224 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 29 12:01:25.083232 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 12:01:25.083245 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 29 12:01:25.083254 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 29 12:01:25.083264 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 29 12:01:25.083273 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 29 12:01:25.083283 kernel: Freeing SMP alternatives memory: 32K
Jan 29 12:01:25.083292 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:01:25.083300 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:01:25.083311 kernel: landlock: Up and running.
Jan 29 12:01:25.083319 kernel: SELinux: Initializing.
Jan 29 12:01:25.083328 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 12:01:25.083338 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 12:01:25.083346 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 29 12:01:25.083359 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:01:25.083368 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:01:25.083773 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:01:25.083794 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 29 12:01:25.083810 kernel: signal: max sigframe size: 3632
Jan 29 12:01:25.083825 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:01:25.083841 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 12:01:25.083857 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 12:01:25.083872 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:01:25.083892 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 12:01:25.083907 kernel: .... node #0, CPUs: #1
Jan 29 12:01:25.083923 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 29 12:01:25.083940 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 29 12:01:25.083954 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 12:01:25.083969 kernel: smpboot: Max logical packages: 1
Jan 29 12:01:25.083984 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
Jan 29 12:01:25.083999 kernel: devtmpfs: initialized
Jan 29 12:01:25.084017 kernel: x86/mm: Memory block size: 128MB
Jan 29 12:01:25.084032 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 29 12:01:25.084048 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:01:25.084063 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 12:01:25.084078 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:01:25.084093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:01:25.084108 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:01:25.084123 kernel: audit: type=2000 audit(1738152083.027:1): state=initialized audit_enabled=0 res=1
Jan 29 12:01:25.084138 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:01:25.084155 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 12:01:25.084170 kernel: cpuidle: using governor menu
Jan 29 12:01:25.084185 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:01:25.084199 kernel: dca service started, version 1.12.1
Jan 29 12:01:25.084215 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 29 12:01:25.084234 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 12:01:25.084249 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 12:01:25.084264 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 12:01:25.084279 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:01:25.084296 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:01:25.084312 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:01:25.084327 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:01:25.084342 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:01:25.084357 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:01:25.084372 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 12:01:25.084387 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 12:01:25.084402 kernel: ACPI: Interpreter enabled
Jan 29 12:01:25.084417 kernel: ACPI: PM: (supports S0 S5)
Jan 29 12:01:25.084435 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 12:01:25.084450 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 12:01:25.084465 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 29 12:01:25.084480 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 29 12:01:25.084495 kernel: iommu: Default domain type: Translated
Jan 29 12:01:25.084511 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 12:01:25.084526 kernel: efivars: Registered efivars operations
Jan 29 12:01:25.084540 kernel: PCI: Using ACPI for IRQ routing
Jan 29 12:01:25.084555 kernel: PCI: System does not support PCI
Jan 29 12:01:25.084572 kernel: vgaarb: loaded
Jan 29 12:01:25.084587 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 29 12:01:25.084602 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 12:01:25.084617 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 12:01:25.084633 kernel: pnp: PnP ACPI init
Jan 29 12:01:25.084647 kernel: pnp: PnP ACPI: found 3 devices
Jan 29 12:01:25.084662 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 12:01:25.088370 kernel: NET: Registered PF_INET protocol family
Jan 29 12:01:25.088406 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 12:01:25.088429 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 29 12:01:25.088443 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 12:01:25.088458 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 12:01:25.088474 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 29 12:01:25.088490 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 29 12:01:25.088505 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 12:01:25.088520 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 12:01:25.088535 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 12:01:25.088551 kernel: NET: Registered PF_XDP protocol family
Jan 29 12:01:25.088569 kernel: PCI: CLS 0 bytes, default 64
Jan 29 12:01:25.088585 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 12:01:25.088600 kernel: software IO TLB: mapped [mem 0x000000003ad8c000-0x000000003ed8c000] (64MB)
Jan 29 12:01:25.088615 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 12:01:25.088631 kernel: Initialise system trusted keyrings
Jan 29 12:01:25.088645 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 29 12:01:25.088661 kernel: Key type asymmetric registered
Jan 29 12:01:25.088676 kernel: Asymmetric key parser 'x509' registered
Jan 29 12:01:25.088710 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 12:01:25.088725 kernel: io scheduler mq-deadline registered
Jan 29 12:01:25.088737 kernel: io scheduler kyber registered
Jan 29 12:01:25.088748 kernel: io scheduler bfq registered
Jan 29 12:01:25.088760 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 12:01:25.088772 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 12:01:25.088784 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 12:01:25.088797 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 29 12:01:25.088810 kernel: i8042: PNP: No PS/2 controller found.
Jan 29 12:01:25.089021 kernel: rtc_cmos 00:02: registered as rtc0
Jan 29 12:01:25.089150 kernel: rtc_cmos 00:02: setting system clock to 2025-01-29T12:01:24 UTC (1738152084)
Jan 29 12:01:25.089263 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 29 12:01:25.089281 kernel: intel_pstate: CPU model not supported
Jan 29 12:01:25.089296 kernel: efifb: probing for efifb
Jan 29 12:01:25.089310 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 29 12:01:25.089324 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 29 12:01:25.089337 kernel: efifb: scrolling: redraw
Jan 29 12:01:25.089353 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 12:01:25.089368 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 12:01:25.089380 kernel: fb0: EFI VGA frame buffer device
Jan 29 12:01:25.089394 kernel: pstore: Using crash dump compression: deflate
Jan 29 12:01:25.089408 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 12:01:25.089423 kernel: NET: Registered PF_INET6 protocol family
Jan 29 12:01:25.089437 kernel: Segment Routing with IPv6
Jan 29 12:01:25.089450 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 12:01:25.089465 kernel: NET: Registered PF_PACKET protocol family
Jan 29 12:01:25.089479 kernel: Key type dns_resolver registered
Jan 29 12:01:25.089496 kernel: IPI shorthand broadcast: enabled
Jan 29 12:01:25.089511 kernel: sched_clock: Marking stable (822004500, 40891100)->(1053032100, -190136500)
Jan 29 12:01:25.089526 kernel: registered taskstats version 1
Jan 29 12:01:25.089539 kernel: Loading compiled-in X.509 certificates
Jan 29 12:01:25.089554 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 29 12:01:25.089567 kernel: Key type .fscrypt registered
Jan 29 12:01:25.089580 kernel: Key type fscrypt-provisioning registered
Jan 29 12:01:25.089594 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 12:01:25.089611 kernel: ima: Allocated hash algorithm: sha1
Jan 29 12:01:25.089625 kernel: ima: No architecture policies found
Jan 29 12:01:25.089639 kernel: clk: Disabling unused clocks
Jan 29 12:01:25.089653 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 12:01:25.089666 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 12:01:25.089690 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 12:01:25.089703 kernel: Run /init as init process
Jan 29 12:01:25.089717 kernel: with arguments:
Jan 29 12:01:25.089729 kernel: /init
Jan 29 12:01:25.089747 kernel: with environment:
Jan 29 12:01:25.089760 kernel: HOME=/
Jan 29 12:01:25.089774 kernel: TERM=linux
Jan 29 12:01:25.089787 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 12:01:25.089806 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:01:25.089825 systemd[1]: Detected virtualization microsoft.
Jan 29 12:01:25.089839 systemd[1]: Detected architecture x86-64.
Jan 29 12:01:25.089853 systemd[1]: Running in initrd.
Jan 29 12:01:25.089872 systemd[1]: No hostname configured, using default hostname.
Jan 29 12:01:25.089894 systemd[1]: Hostname set to .
Jan 29 12:01:25.089909 systemd[1]: Initializing machine ID from random generator.
Jan 29 12:01:25.089925 systemd[1]: Queued start job for default target initrd.target.
Jan 29 12:01:25.089943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:01:25.089957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:01:25.089974 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 12:01:25.089989 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:01:25.090007 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 12:01:25.090021 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 12:01:25.090036 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 12:01:25.090049 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 12:01:25.090064 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:01:25.090079 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:01:25.090093 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:01:25.090113 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:01:25.090129 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:01:25.090145 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:01:25.090157 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:01:25.090171 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:01:25.090184 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:01:25.090198 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:01:25.090210 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:01:25.090221 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:01:25.090232 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:01:25.090241 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:01:25.090250 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:01:25.090259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:01:25.090267 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:01:25.090282 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:01:25.090295 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:01:25.090309 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:01:25.090325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:01:25.090373 systemd-journald[176]: Collecting audit messages is disabled.
Jan 29 12:01:25.090405 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:01:25.090419 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:01:25.090438 systemd-journald[176]: Journal started
Jan 29 12:01:25.090485 systemd-journald[176]: Runtime Journal (/run/log/journal/7e005bff65ed42fdaab3d7f62c08dbf4) is 8.0M, max 158.8M, 150.8M free.
Jan 29 12:01:25.074049 systemd-modules-load[177]: Inserted module 'overlay'
Jan 29 12:01:25.098139 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:01:25.096399 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:01:25.105192 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:01:25.107093 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:01:25.119391 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:01:25.121827 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:01:25.137065 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:01:25.142973 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:01:25.150197 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:01:25.165921 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:01:25.182073 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:01:25.187126 kernel: Bridge firewalling registered
Jan 29 12:01:25.186373 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jan 29 12:01:25.187998 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:01:25.195145 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:01:25.208830 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:01:25.214542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:01:25.222368 dracut-cmdline[208]: dracut-dracut-053
Jan 29 12:01:25.225908 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:01:25.253175 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:01:25.264965 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:01:25.314145 systemd-resolved[253]: Positive Trust Anchors:
Jan 29 12:01:25.314167 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:01:25.322067 kernel: SCSI subsystem initialized
Jan 29 12:01:25.314239 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:01:25.339885 systemd-resolved[253]: Defaulting to hostname 'linux'.
Jan 29 12:01:25.349361 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:01:25.340957 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:01:25.344457 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:01:25.360703 kernel: iscsi: registered transport (tcp)
Jan 29 12:01:25.382982 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:01:25.383091 kernel: QLogic iSCSI HBA Driver
Jan 29 12:01:25.419773 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:01:25.424995 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:01:25.456541 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:01:25.456642 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:01:25.460889 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:01:25.501711 kernel: raid6: avx512x4 gen() 18421 MB/s
Jan 29 12:01:25.520693 kernel: raid6: avx512x2 gen() 18133 MB/s
Jan 29 12:01:25.539691 kernel: raid6: avx512x1 gen() 18213 MB/s
Jan 29 12:01:25.558697 kernel: raid6: avx2x4 gen() 18039 MB/s
Jan 29 12:01:25.576693 kernel: raid6: avx2x2 gen() 17893 MB/s
Jan 29 12:01:25.596474 kernel: raid6: avx2x1 gen() 13719 MB/s
Jan 29 12:01:25.596519 kernel: raid6: using algorithm avx512x4 gen() 18421 MB/s
Jan 29 12:01:25.617973 kernel: raid6: .... xor() 6528 MB/s, rmw enabled
Jan 29 12:01:25.618015 kernel: raid6: using avx512x2 recovery algorithm
Jan 29 12:01:25.639711 kernel: xor: automatically using best checksumming function avx
Jan 29 12:01:25.791709 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:01:25.801716 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:01:25.808961 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:01:25.828674 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 29 12:01:25.835315 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:01:25.849950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:01:25.867769 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 29 12:01:25.896532 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:01:25.904023 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:01:25.945226 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:01:25.957952 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:01:25.994624 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:01:26.000545 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:01:26.005300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:01:26.016699 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:01:26.026880 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:01:26.040708 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 12:01:26.063706 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 12:01:26.066702 kernel: AES CTR mode by8 optimization enabled
Jan 29 12:01:26.067167 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:01:26.082568 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:01:26.082843 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:01:26.084791 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:01:26.085465 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:01:26.085652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:01:26.088285 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:01:26.114822 kernel: hv_vmbus: Vmbus version:5.2
Jan 29 12:01:26.097206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:01:26.119892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:01:26.120022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:01:26.132886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:01:26.138574 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 29 12:01:26.145458 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 29 12:01:26.145509 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 29 12:01:26.151201 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 29 12:01:26.158706 kernel: PTP clock support registered
Jan 29 12:01:26.173302 kernel: hv_utils: Registering HyperV Utility Driver
Jan 29 12:01:26.173408 kernel: hv_vmbus: registering driver hv_utils
Jan 29 12:01:26.176752 kernel: hv_utils: Heartbeat IC version 3.0
Jan 29 12:01:26.176810 kernel: hv_utils: Shutdown IC version 3.2
Jan 29 12:01:26.181701 kernel: hv_utils: TimeSync IC version 4.0
Jan 29 12:01:26.663049 systemd-resolved[253]: Clock change detected. Flushing caches.
Jan 29 12:01:26.665275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:01:26.682234 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:01:26.691925 kernel: hv_vmbus: registering driver hv_netvsc
Jan 29 12:01:26.691960 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 12:01:26.698640 kernel: hv_vmbus: registering driver hv_storvsc
Jan 29 12:01:26.702808 kernel: scsi host0: storvsc_host_t
Jan 29 12:01:26.702892 kernel: scsi host1: storvsc_host_t
Jan 29 12:01:26.710176 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 29 12:01:26.710302 kernel: hv_vmbus: registering driver hid_hyperv
Jan 29 12:01:26.715371 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 29 12:01:26.719493 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 29 12:01:26.723488 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 29 12:01:26.734919 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:01:26.753621 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 29 12:01:26.755903 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 12:01:26.755930 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 29 12:01:26.764711 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 29 12:01:26.777581 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 29 12:01:26.777790 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 29 12:01:26.777967 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 29 12:01:26.778143 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 29 12:01:26.778331 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 12:01:26.778353 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 29 12:01:26.895268 kernel: hv_netvsc 000d3ab8-a354-000d-3ab8-a354000d3ab8 eth0: VF slot 1 added
Jan 29 12:01:26.903488 kernel: hv_vmbus: registering driver hv_pci
Jan 29 12:01:26.907492 kernel: hv_pci c1992161-4f22-4de8-a7f6-d10aab52a9ac: PCI VMBus probing: Using version 0x10004
Jan 29 12:01:26.949660 kernel: hv_pci c1992161-4f22-4de8-a7f6-d10aab52a9ac: PCI host bridge to bus 4f22:00
Jan 29 12:01:26.951007 kernel: pci_bus 4f22:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 29 12:01:26.951202 kernel: pci_bus 4f22:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 29 12:01:26.951347 kernel: pci 4f22:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 29 12:01:26.951575 kernel: pci 4f22:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 29 12:01:26.951753 kernel: pci 4f22:00:02.0: enabling Extended Tags
Jan 29 12:01:26.951920 kernel: pci 4f22:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4f22:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 29 12:01:26.952099 kernel: pci_bus 4f22:00: busn_res: [bus 00-ff] end is updated to 00
Jan 29 12:01:26.952243 kernel: pci 4f22:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 29 12:01:27.123710 kernel: mlx5_core 4f22:00:02.0: enabling device (0000 -> 0002)
Jan 29 12:01:27.377245 kernel: mlx5_core 4f22:00:02.0: firmware version: 14.30.5000
Jan 29 12:01:27.377858 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (455)
Jan 29 12:01:27.377893 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (442)
Jan 29 12:01:27.377914 kernel: hv_netvsc 000d3ab8-a354-000d-3ab8-a354000d3ab8 eth0: VF registering: eth1
Jan 29 12:01:27.378087 kernel: mlx5_core 4f22:00:02.0 eth1: joined to eth0
Jan 29 12:01:27.378266 kernel: mlx5_core 4f22:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 29 12:01:27.264873 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 29 12:01:27.354204 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 29 12:01:27.367357 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 29 12:01:27.371500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 29 12:01:27.393215 kernel: mlx5_core 4f22:00:02.0 enP20258s1: renamed from eth1
Jan 29 12:01:27.393979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 12:01:27.409949 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:01:27.423491 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 12:01:27.433240 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 12:01:28.441575 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 12:01:28.443074 disk-uuid[601]: The operation has completed successfully.
Jan 29 12:01:28.534250 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:01:28.534370 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:01:28.549616 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:01:28.553189 sh[714]: Success
Jan 29 12:01:28.582978 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 12:01:28.821635 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:01:28.834598 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:01:28.837955 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:01:28.858354 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 12:01:28.858447 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:01:28.861797 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:01:28.864667 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:01:28.866993 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:01:29.229663 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:01:29.235656 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:01:29.244651 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:01:29.249591 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:01:29.268578 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:01:29.274660 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:01:29.274733 kernel: BTRFS info (device sda6): using free space tree
Jan 29 12:01:29.294541 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 12:01:29.306515 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:01:29.311421 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:01:29.316005 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:01:29.324765 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:01:29.356942 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:01:29.364818 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:01:29.386780 systemd-networkd[898]: lo: Link UP
Jan 29 12:01:29.386791 systemd-networkd[898]: lo: Gained carrier
Jan 29 12:01:29.389002 systemd-networkd[898]: Enumeration completed
Jan 29 12:01:29.389321 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:01:29.393905 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:01:29.393910 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:01:29.395574 systemd[1]: Reached target network.target - Network.
Jan 29 12:01:29.448492 kernel: mlx5_core 4f22:00:02.0 enP20258s1: Link up
Jan 29 12:01:29.485512 kernel: hv_netvsc 000d3ab8-a354-000d-3ab8-a354000d3ab8 eth0: Data path switched to VF: enP20258s1
Jan 29 12:01:29.486489 systemd-networkd[898]: enP20258s1: Link UP
Jan 29 12:01:29.486681 systemd-networkd[898]: eth0: Link UP
Jan 29 12:01:29.486908 systemd-networkd[898]: eth0: Gained carrier
Jan 29 12:01:29.486927 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:01:29.493722 systemd-networkd[898]: enP20258s1: Gained carrier
Jan 29 12:01:29.520539 systemd-networkd[898]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 29 12:01:30.471275 ignition[849]: Ignition 2.19.0
Jan 29 12:01:30.471287 ignition[849]: Stage: fetch-offline
Jan 29 12:01:30.471332 ignition[849]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:01:30.471344 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 12:01:30.471449 ignition[849]: parsed url from cmdline: ""
Jan 29 12:01:30.471453 ignition[849]: no config URL provided
Jan 29 12:01:30.471460 ignition[849]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:01:30.471483 ignition[849]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:01:30.471490 ignition[849]: failed to fetch config: resource requires networking
Jan 29 12:01:30.471733 ignition[849]: Ignition finished successfully
Jan 29 12:01:30.485948 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:01:30.497801 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 12:01:30.513597 ignition[906]: Ignition 2.19.0
Jan 29 12:01:30.513608 ignition[906]: Stage: fetch
Jan 29 12:01:30.513847 ignition[906]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:01:30.513863 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 12:01:30.514008 ignition[906]: parsed url from cmdline: ""
Jan 29 12:01:30.514013 ignition[906]: no config URL provided
Jan 29 12:01:30.514020 ignition[906]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:01:30.514029 ignition[906]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:01:30.514050 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 29 12:01:30.596851 ignition[906]: GET result: OK
Jan 29 12:01:30.597000 ignition[906]: config has been read from IMDS userdata
Jan 29 12:01:30.597040 ignition[906]: parsing config with SHA512: 3072c82b64a63870254e1557834f28818aebb2fb965abaa0d214ae778eb5d51547be381e33fa1105df4bd9cfb37917ddb486e814a38cba02fcdbcc2ee6f03c12
Jan 29 12:01:30.603177 unknown[906]: fetched base config from "system"
Jan 29 12:01:30.603194 unknown[906]: fetched base config from "system"
Jan 29 12:01:30.603605 ignition[906]: fetch: fetch complete
Jan 29 12:01:30.603200 unknown[906]: fetched user config from "azure"
Jan 29 12:01:30.603610 ignition[906]: fetch: fetch passed
Jan 29 12:01:30.605600 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 12:01:30.603656 ignition[906]: Ignition finished successfully
Jan 29 12:01:30.620677 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:01:30.640308 ignition[912]: Ignition 2.19.0
Jan 29 12:01:30.640320 ignition[912]: Stage: kargs
Jan 29 12:01:30.640567 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:01:30.643986 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:01:30.640583 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 12:01:30.641922 ignition[912]: kargs: kargs passed
Jan 29 12:01:30.641973 ignition[912]: Ignition finished successfully
Jan 29 12:01:30.661782 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:01:30.678865 ignition[918]: Ignition 2.19.0
Jan 29 12:01:30.678877 ignition[918]: Stage: disks
Jan 29 12:01:30.681063 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:01:30.679105 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:01:30.685215 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:01:30.679119 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 12:01:30.689432 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:01:30.680000 ignition[918]: disks: disks passed
Jan 29 12:01:30.692265 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:01:30.680046 ignition[918]: Ignition finished successfully
Jan 29 12:01:30.696916 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:01:30.700244 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:01:30.722696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:01:30.780302 systemd-fsck[926]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 29 12:01:30.785834 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:01:30.796736 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:01:30.893491 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 12:01:30.894016 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:01:30.898377 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:01:30.941604 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:01:30.953458 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (937)
Jan 29 12:01:30.951558 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:01:30.956205 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 12:01:30.958945 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:01:30.958986 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:01:30.975461 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:01:30.986699 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:01:31.003782 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:01:31.003882 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:01:31.006149 kernel: BTRFS info (device sda6): using free space tree
Jan 29 12:01:31.012497 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 12:01:31.013957 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:01:31.098794 systemd-networkd[898]: enP20258s1: Gained IPv6LL
Jan 29 12:01:31.290649 systemd-networkd[898]: eth0: Gained IPv6LL
Jan 29 12:01:31.784142 initrd-setup-root[966]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:01:31.794237 coreos-metadata[939]: Jan 29 12:01:31.794 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 29 12:01:31.800178 coreos-metadata[939]: Jan 29 12:01:31.800 INFO Fetch successful
Jan 29 12:01:31.802637 coreos-metadata[939]: Jan 29 12:01:31.800 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 29 12:01:31.811907 coreos-metadata[939]: Jan 29 12:01:31.811 INFO Fetch successful
Jan 29 12:01:31.814327 coreos-metadata[939]: Jan 29 12:01:31.812 INFO wrote hostname ci-4081.3.0-a-56ab0c4267 to /sysroot/etc/hostname
Jan 29 12:01:31.815730 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 12:01:31.826263 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:01:31.844085 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:01:31.865279 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:01:32.519835 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:01:32.527568 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:01:32.533667 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:01:32.542005 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:01:32.544636 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:01:32.579485 ignition[1055]: INFO : Ignition 2.19.0
Jan 29 12:01:32.579485 ignition[1055]: INFO : Stage: mount
Jan 29 12:01:32.579485 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:01:32.579485 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 12:01:32.580977 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:01:32.584073 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:01:32.593344 ignition[1055]: INFO : mount: mount passed
Jan 29 12:01:32.593344 ignition[1055]: INFO : Ignition finished successfully
Jan 29 12:01:32.595549 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:01:32.602663 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:01:32.617490 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1067)
Jan 29 12:01:32.617535 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:01:32.620482 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:01:32.624951 kernel: BTRFS info (device sda6): using free space tree
Jan 29 12:01:32.629486 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 12:01:32.631335 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:01:32.656203 ignition[1083]: INFO : Ignition 2.19.0
Jan 29 12:01:32.656203 ignition[1083]: INFO : Stage: files
Jan 29 12:01:32.660275 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:01:32.660275 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 12:01:32.660275 ignition[1083]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 12:01:32.689125 ignition[1083]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 12:01:32.689125 ignition[1083]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:01:32.778707 ignition[1083]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:01:32.780539 unknown[1083]: wrote ssh authorized keys file for user: core
Jan 29 12:01:32.782826 ignition[1083]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 12:01:32.782826 ignition[1083]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:01:32.818885 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:01:32.823618 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 12:01:32.866483 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 12:01:33.067069 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:01:33.067069 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 12:01:33.075809 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:01:33.075809 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:01:33.083708 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:01:33.083708 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:01:33.091594 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:01:33.095503 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:01:33.099845 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:01:33.104091 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:01:33.108347 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:01:33.112700 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:01:33.118498 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:01:33.124116 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:01:33.129625 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 12:01:33.639698 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 12:01:34.003340 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:01:34.003340 ignition[1083]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 12:01:34.026910 ignition[1083]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:01:34.031770 ignition[1083]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:01:34.031770 ignition[1083]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 12:01:34.031770 ignition[1083]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 12:01:34.031770 ignition[1083]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 12:01:34.031770 ignition[1083]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:01:34.031770 ignition[1083]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:01:34.031770 ignition[1083]: INFO : files: files passed
Jan 29 12:01:34.031770 ignition[1083]: INFO : Ignition finished successfully
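
Ops (1)-(e) above are driven by the rendered Ignition config fetched earlier. A hypothetical fragment that would produce ops like op(3), op(9), and (b)-(d) might look like the following, expressed as a Python dict rather than the JSON Ignition actually consumes. Field names follow the Ignition v3 spec, but every value here is an assumption reconstructed from the log, not recovered config:

    # Hypothetical Ignition v3 config fragment; values are illustrative only.
    ignition_config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [{
                # op(3): fetched over HTTPS, written under /opt
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                },
            }],
            "links": [{
                # op(9): the systemd-sysext activation symlink
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [{
                # ops (b)-(d): unit written, then preset to enabled
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "...",  # unit text is not shown in this log
            }],
        },
    }
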
Jan 29 12:01:34.029041 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:01:34.055983 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:01:34.074641 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:01:34.077917 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:01:34.079861 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:01:34.102702 initrd-setup-root-after-ignition[1112]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:01:34.102702 initrd-setup-root-after-ignition[1112]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:01:34.106819 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:01:34.110402 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:01:34.115694 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:01:34.126759 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:01:34.152943 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:01:34.153066 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:01:34.159321 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:01:34.164160 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:01:34.170994 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:01:34.177735 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:01:34.190817 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:01:34.198618 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:01:34.209874 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:01:34.214855 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:01:34.217813 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 12:01:34.224258 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 12:01:34.224414 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:01:34.229692 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 12:01:34.236769 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 12:01:34.240870 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 12:01:34.243428 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:01:34.251222 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 12:01:34.251379 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 12:01:34.252151 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:01:34.252521 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 12:01:34.252964 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 12:01:34.253307 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 12:01:34.254045 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 12:01:34.254205 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:01:34.254807 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:01:34.255269 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:01:34.255602 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 12:01:34.271852 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:01:34.277034 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 12:01:34.284131 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:01:34.308699 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 12:01:34.308878 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:01:34.317398 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 12:01:34.317571 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 12:01:34.322034 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 12:01:34.322181 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 12:01:34.335095 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 12:01:34.341752 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 12:01:34.343911 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 12:01:34.344086 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:01:34.354578 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 12:01:34.354732 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:01:34.360734 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 12:01:34.360827 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 12:01:34.377704 ignition[1136]: INFO : Ignition 2.19.0
Jan 29 12:01:34.377704 ignition[1136]: INFO : Stage: umount
Jan 29 12:01:34.380558 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 12:01:34.380677 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 12:01:34.383900 ignition[1136]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:01:34.383900 ignition[1136]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 12:01:34.383900 ignition[1136]: INFO : umount: umount passed
Jan 29 12:01:34.383900 ignition[1136]: INFO : Ignition finished successfully
Jan 29 12:01:34.384028 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 12:01:34.384082 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 12:01:34.391219 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 12:01:34.391270 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 12:01:34.391574 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 12:01:34.391612 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 12:01:34.391878 systemd[1]: Stopped target network.target - Network.
Jan 29 12:01:34.392295 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 12:01:34.392335 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:01:34.392712 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 12:01:34.393054 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 12:01:34.403807 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:01:34.410859 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 12:01:34.415013 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 12:01:34.444898 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 12:01:34.444966 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:01:34.451074 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 12:01:34.451131 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:01:34.457836 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 12:01:34.457901 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 12:01:34.462191 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 12:01:34.462244 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 12:01:34.467064 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 12:01:34.471559 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 12:01:34.476991 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 12:01:34.486514 systemd-networkd[898]: eth0: DHCPv6 lease lost
Jan 29 12:01:34.489880 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 12:01:34.490033 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 12:01:34.496939 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 12:01:34.497051 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 12:01:34.502726 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 12:01:34.502799 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:01:34.518581 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 12:01:34.520744 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 12:01:34.520814 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:01:34.528189 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 12:01:34.528249 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:01:34.528359 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:01:34.528404 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:01:34.529194 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 12:01:34.529233 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:01:34.531347 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:01:34.560104 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 12:01:34.560277 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:01:34.565266 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 12:01:34.565309 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:01:34.570588 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 12:01:34.573056 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:01:34.577557 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 12:01:34.577613 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:01:34.589031 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 12:01:34.589101 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:01:34.597006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:01:34.597057 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:01:34.601644 kernel: hv_netvsc 000d3ab8-a354-000d-3ab8-a354000d3ab8 eth0: Data path switched from VF: enP20258s1
Jan 29 12:01:34.612659 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 12:01:34.617863 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 12:01:34.617944 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:01:34.623355 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 12:01:34.623424 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:01:34.626161 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 12:01:34.626215 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:01:34.632609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:01:34.632670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:01:34.651283 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 12:01:34.651429 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 12:01:34.652171 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 12:01:34.652255 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 12:01:35.360152 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 12:01:35.360301 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 12:01:35.360722 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 12:01:35.360922 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 12:01:35.360972 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 12:01:35.371761 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 12:01:35.393501 systemd[1]: Switching root.
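
Every entry above carries a microsecond timestamp, so phase durations (for example, how long a given Ignition stage took) can be extracted directly from a transcript like this one. A small sketch; the field layout is assumed from the lines above, and the year is assumed since journal short-format timestamps omit it:

    import re
    from datetime import datetime

    # e.g. "Jan 29 12:01:30.605600 systemd[1]: Finished ignition-fetch.service ..."
    ENTRY = re.compile(r"^(\w{3} +\d+ [\d:.]+) (\S+?)(?:\[\d+\])?: (.*)$")

    def parse(line, year=2025):  # year assumed; not present in the lines
        m = ENTRY.match(line)
        if not m:
            return None
        ts = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
        return ts, m.group(2), m.group(3)

    start = parse("Jan 29 12:01:30.620677 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...")
    end = parse("Jan 29 12:01:30.643986 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).")
    print(end[0] - start[0])  # 0:00:00.023309
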
Jan 29 12:01:35.469421 systemd-journald[176]: Journal stopped
4f22:00:02.0 eth1: joined to eth0 Jan 29 12:01:27.378266 kernel: mlx5_core 4f22:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 12:01:27.264873 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 29 12:01:27.354204 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 29 12:01:27.367357 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 29 12:01:27.371500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 29 12:01:27.393215 kernel: mlx5_core 4f22:00:02.0 enP20258s1: renamed from eth1 Jan 29 12:01:27.393979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 29 12:01:27.409949 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:01:27.423491 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:01:27.433240 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:01:28.441575 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:01:28.443074 disk-uuid[601]: The operation has completed successfully. Jan 29 12:01:28.534250 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:01:28.534370 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:01:28.549616 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:01:28.553189 sh[714]: Success Jan 29 12:01:28.582978 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 12:01:28.821635 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:01:28.834598 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:01:28.837955 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 12:01:28.858354 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:01:28.858447 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:28.861797 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:01:28.864667 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:01:28.866993 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:01:29.229663 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:01:29.235656 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:01:29.244651 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:01:29.249591 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:01:29.268578 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:29.274660 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:29.274733 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:01:29.294541 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:01:29.306515 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 29 12:01:29.311421 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:29.316005 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:01:29.324765 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:01:29.356942 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:01:29.364818 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:01:29.386780 systemd-networkd[898]: lo: Link UP Jan 29 12:01:29.386791 systemd-networkd[898]: lo: Gained carrier Jan 29 12:01:29.389002 systemd-networkd[898]: Enumeration completed Jan 29 12:01:29.389321 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:01:29.393905 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:29.393910 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:01:29.395574 systemd[1]: Reached target network.target - Network. Jan 29 12:01:29.448492 kernel: mlx5_core 4f22:00:02.0 enP20258s1: Link up Jan 29 12:01:29.485512 kernel: hv_netvsc 000d3ab8-a354-000d-3ab8-a354000d3ab8 eth0: Data path switched to VF: enP20258s1 Jan 29 12:01:29.486489 systemd-networkd[898]: enP20258s1: Link UP Jan 29 12:01:29.486681 systemd-networkd[898]: eth0: Link UP Jan 29 12:01:29.486908 systemd-networkd[898]: eth0: Gained carrier Jan 29 12:01:29.486927 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:29.493722 systemd-networkd[898]: enP20258s1: Gained carrier Jan 29 12:01:29.520539 systemd-networkd[898]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 12:01:30.471275 ignition[849]: Ignition 2.19.0 Jan 29 12:01:30.471287 ignition[849]: Stage: fetch-offline Jan 29 12:01:30.471332 ignition[849]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:30.471344 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:30.471449 ignition[849]: parsed url from cmdline: "" Jan 29 12:01:30.471453 ignition[849]: no config URL provided Jan 29 12:01:30.471460 ignition[849]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:01:30.471483 ignition[849]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:01:30.471490 ignition[849]: failed to fetch config: resource requires networking Jan 29 12:01:30.471733 ignition[849]: Ignition finished successfully Jan 29 12:01:30.485948 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:01:30.497801 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
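The networkd entries above show systemd-networkd matching eth0 against the stock /usr/lib/systemd/network/zz-default.network unit and then acquiring 10.200.8.17/24 over DHCP from the Azure wire server (168.63.129.16). For reference, a catch-all DHCP unit of that kind looks roughly like the sketch below; this is a minimal illustration using standard systemd.network keys, not the verbatim file shipped by Flatcar.

    [Match]
    Name=*

    [Network]
    DHCP=yes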
Jan 29 12:01:30.513597 ignition[906]: Ignition 2.19.0 Jan 29 12:01:30.513608 ignition[906]: Stage: fetch Jan 29 12:01:30.513847 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:30.513863 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:30.514008 ignition[906]: parsed url from cmdline: "" Jan 29 12:01:30.514013 ignition[906]: no config URL provided Jan 29 12:01:30.514020 ignition[906]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:01:30.514029 ignition[906]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:01:30.514050 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 29 12:01:30.596851 ignition[906]: GET result: OK Jan 29 12:01:30.597000 ignition[906]: config has been read from IMDS userdata Jan 29 12:01:30.597040 ignition[906]: parsing config with SHA512: 3072c82b64a63870254e1557834f28818aebb2fb965abaa0d214ae778eb5d51547be381e33fa1105df4bd9cfb37917ddb486e814a38cba02fcdbcc2ee6f03c12 Jan 29 12:01:30.603177 unknown[906]: fetched base config from "system" Jan 29 12:01:30.603194 unknown[906]: fetched base config from "system" Jan 29 12:01:30.603605 ignition[906]: fetch: fetch complete Jan 29 12:01:30.603200 unknown[906]: fetched user config from "azure" Jan 29 12:01:30.603610 ignition[906]: fetch: fetch passed Jan 29 12:01:30.605600 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:01:30.603656 ignition[906]: Ignition finished successfully Jan 29 12:01:30.620677 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:01:30.640308 ignition[912]: Ignition 2.19.0 Jan 29 12:01:30.640320 ignition[912]: Stage: kargs Jan 29 12:01:30.640567 ignition[912]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:30.643986 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:01:30.640583 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:30.641922 ignition[912]: kargs: kargs passed Jan 29 12:01:30.641973 ignition[912]: Ignition finished successfully Jan 29 12:01:30.661782 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:01:30.678865 ignition[918]: Ignition 2.19.0 Jan 29 12:01:30.678877 ignition[918]: Stage: disks Jan 29 12:01:30.681063 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:01:30.679105 ignition[918]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:30.685215 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:01:30.679119 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:30.689432 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:01:30.680000 ignition[918]: disks: disks passed Jan 29 12:01:30.692265 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:01:30.680046 ignition[918]: Ignition finished successfully Jan 29 12:01:30.696916 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:01:30.700244 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:01:30.722696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:01:30.780302 systemd-fsck[926]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 29 12:01:30.785834 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
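The fetch stage above reads the instance's user data from the Azure IMDS endpoint and parses it (checksummed with SHA512) before the kargs and disks stages run. For context, the same request can be reproduced with a short script; the URL and api-version are taken verbatim from the log, while the Metadata header and the base64 decoding reflect documented IMDS behavior (a sketch for illustration, not Ignition's actual Go implementation):

    import base64
    import urllib.request

    # Endpoint exactly as logged by Ignition's fetch stage.
    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    # IMDS rejects requests that lack this header.
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        user_data = base64.b64decode(resp.read())  # IMDS returns userData base64-encoded

    print(user_data.decode("utf-8", errors="replace"))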
Jan 29 12:01:30.796736 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:01:30.893491 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:01:30.894016 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:01:30.898377 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:01:30.941604 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:01:30.953458 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (937) Jan 29 12:01:30.951558 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:01:30.956205 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 12:01:30.958945 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:01:30.958986 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:01:30.975461 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:01:30.986699 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:01:31.003782 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:31.003882 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:31.006149 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:01:31.012497 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:01:31.013957 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:01:31.098794 systemd-networkd[898]: enP20258s1: Gained IPv6LL Jan 29 12:01:31.290649 systemd-networkd[898]: eth0: Gained IPv6LL Jan 29 12:01:31.784142 initrd-setup-root[966]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:01:31.794237 coreos-metadata[939]: Jan 29 12:01:31.794 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 29 12:01:31.800178 coreos-metadata[939]: Jan 29 12:01:31.800 INFO Fetch successful Jan 29 12:01:31.802637 coreos-metadata[939]: Jan 29 12:01:31.800 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 29 12:01:31.811907 coreos-metadata[939]: Jan 29 12:01:31.811 INFO Fetch successful Jan 29 12:01:31.814327 coreos-metadata[939]: Jan 29 12:01:31.812 INFO wrote hostname ci-4081.3.0-a-56ab0c4267 to /sysroot/etc/hostname Jan 29 12:01:31.815730 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 12:01:31.826263 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:01:31.844085 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:01:31.865279 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:01:32.519835 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:01:32.527568 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:01:32.533667 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:01:32.542005 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 29 12:01:32.544636 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:32.579485 ignition[1055]: INFO : Ignition 2.19.0 Jan 29 12:01:32.579485 ignition[1055]: INFO : Stage: mount Jan 29 12:01:32.579485 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:32.579485 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:32.593344 ignition[1055]: INFO : mount: mount passed Jan 29 12:01:32.593344 ignition[1055]: INFO : Ignition finished successfully Jan 29 12:01:32.580977 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:01:32.584073 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:01:32.595549 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:01:32.602663 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:01:32.617490 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1067) Jan 29 12:01:32.617535 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:32.620482 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:32.624951 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:01:32.629486 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:01:32.631335 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:01:32.656203 ignition[1083]: INFO : Ignition 2.19.0 Jan 29 12:01:32.656203 ignition[1083]: INFO : Stage: files Jan 29 12:01:32.660275 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:32.660275 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:32.660275 ignition[1083]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:01:32.689125 ignition[1083]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:01:32.689125 ignition[1083]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:01:32.778707 ignition[1083]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:01:32.782826 ignition[1083]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:01:32.782826 ignition[1083]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:01:32.780539 unknown[1083]: wrote ssh authorized keys file for user: core Jan 29 12:01:32.818885 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:01:32.823618 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 12:01:32.866483 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 12:01:33.067069 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:01:33.067069 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:01:33.075809 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:01:33.075809 ignition[1083]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:01:33.083708 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:01:33.083708 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:01:33.091594 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:01:33.095503 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:01:33.099845 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:01:33.104091 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:01:33.108347 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:01:33.112700 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 12:01:33.118498 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 12:01:33.124116 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 12:01:33.129625 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 12:01:33.639698 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 12:01:34.003340 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 12:01:34.003340 ignition[1083]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 12:01:34.026910 ignition[1083]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:01:34.031770 ignition[1083]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:01:34.031770 ignition[1083]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 12:01:34.031770 ignition[1083]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:01:34.031770 ignition[1083]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:01:34.031770 ignition[1083]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:01:34.031770 ignition[1083]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:01:34.031770 ignition[1083]: INFO : files: files passed Jan 29 12:01:34.031770 ignition[1083]: INFO : Ignition 
finished successfully Jan 29 12:01:34.029041 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:01:34.055983 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:01:34.074641 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:01:34.077917 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:01:34.079861 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:01:34.102702 initrd-setup-root-after-ignition[1112]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:34.102702 initrd-setup-root-after-ignition[1112]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:34.115694 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:34.106819 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:01:34.110402 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:01:34.126759 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:01:34.152943 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:01:34.153066 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:01:34.159321 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:01:34.164160 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:01:34.170994 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:01:34.177735 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:01:34.190817 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:01:34.198618 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:01:34.209874 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:01:34.214855 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:01:34.217813 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:01:34.224258 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:01:34.224414 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:01:34.229692 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:01:34.236769 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:01:34.240870 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:01:34.243428 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:01:34.251222 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:01:34.251379 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:01:34.252151 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:01:34.252521 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:01:34.252964 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:01:34.253307 systemd[1]: Stopped target swap.target - Swaps. 
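Taken together, the files-stage operations logged above (an SSH key for core, the helm tarball fetched into /opt, the YAML and shell files under /home/core, /etc/flatcar/update.conf, the kubernetes sysext image plus its /etc/extensions link, and prepare-helm.service with its preset enabled) imply a user config of roughly the following shape. This is a reconstruction for illustration only: the actual userdata is not present in the log, the smaller files are omitted, and the elided key and unit text are placeholders (Ignition v3 schema, matching the Ignition 2.19.0 engine seen above):

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (elided)"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "... (unit text elided)" }
        ]
      }
    }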
Jan 29 12:01:34.254045 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:01:34.254205 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:01:34.254807 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:01:34.255269 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:01:34.255602 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:01:34.271852 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:01:34.277034 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:01:34.284131 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:01:34.308699 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:01:34.308878 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:01:34.317398 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:01:34.317571 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:01:34.322034 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 12:01:34.322181 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 12:01:34.335095 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:01:34.341752 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:01:34.343911 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:01:34.344086 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:01:34.354578 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:01:34.354732 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:01:34.360734 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:01:34.360827 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:01:34.377704 ignition[1136]: INFO : Ignition 2.19.0 Jan 29 12:01:34.377704 ignition[1136]: INFO : Stage: umount Jan 29 12:01:34.383900 ignition[1136]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:34.383900 ignition[1136]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:34.383900 ignition[1136]: INFO : umount: umount passed Jan 29 12:01:34.383900 ignition[1136]: INFO : Ignition finished successfully Jan 29 12:01:34.380558 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:01:34.380677 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:01:34.384028 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:01:34.384082 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:01:34.391219 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:01:34.391270 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:01:34.391574 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:01:34.391612 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:01:34.391878 systemd[1]: Stopped target network.target - Network. Jan 29 12:01:34.392295 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:01:34.392335 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 12:01:34.392712 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:01:34.393054 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:01:34.403807 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:01:34.410859 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:01:34.415013 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:01:34.444898 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:01:34.444966 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:01:34.451074 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:01:34.451131 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:01:34.457836 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:01:34.457901 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:01:34.462191 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:01:34.462244 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:01:34.467064 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:01:34.471559 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:01:34.476991 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:01:34.486514 systemd-networkd[898]: eth0: DHCPv6 lease lost Jan 29 12:01:34.489880 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:01:34.490033 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:01:34.496939 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:01:34.497051 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:01:34.502726 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:01:34.502799 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:01:34.518581 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:01:34.520744 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:01:34.520814 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:01:34.528189 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:01:34.528249 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:01:34.528359 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:01:34.528404 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:01:34.529194 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:01:34.529233 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:01:34.531347 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:01:34.560104 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:01:34.560277 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:01:34.565266 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:01:34.565309 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:01:34.570588 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 29 12:01:34.573056 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:01:34.577557 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:01:34.577613 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:01:34.589031 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:01:34.589101 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:01:34.601644 kernel: hv_netvsc 000d3ab8-a354-000d-3ab8-a354000d3ab8 eth0: Data path switched from VF: enP20258s1 Jan 29 12:01:34.597006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:01:34.597057 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:34.612659 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:01:34.617863 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:01:34.617944 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:01:34.623355 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:01:34.623424 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:01:34.626161 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:01:34.626215 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:01:34.632609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:01:34.632670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:34.651283 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:01:34.651429 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:01:34.652171 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:01:34.652255 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:01:35.360152 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:01:35.360301 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:01:35.360722 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:01:35.360922 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:01:35.360972 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:01:35.371761 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:01:35.393501 systemd[1]: Switching root. Jan 29 12:01:35.469421 systemd-journald[176]: Journal stopped Jan 29 12:01:39.961410 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). 
Jan 29 12:01:39.961454 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:01:39.961483 kernel: SELinux: policy capability open_perms=1 Jan 29 12:01:39.961493 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:01:39.961500 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:01:39.961509 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:01:39.961528 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:01:39.961552 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:01:39.961567 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:01:39.961576 kernel: audit: type=1403 audit(1738152096.386:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:01:39.961585 systemd[1]: Successfully loaded SELinux policy in 126.062ms. Jan 29 12:01:39.961596 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.805ms. Jan 29 12:01:39.961618 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:01:39.961633 systemd[1]: Detected virtualization microsoft. Jan 29 12:01:39.961645 systemd[1]: Detected architecture x86-64. Jan 29 12:01:39.961655 systemd[1]: Detected first boot. Jan 29 12:01:39.961668 systemd[1]: Hostname set to <ci-4081.3.0-a-56ab0c4267>. Jan 29 12:01:39.961689 systemd[1]: Initializing machine ID from random generator. Jan 29 12:01:39.961711 zram_generator::config[1177]: No configuration found. Jan 29 12:01:39.961728 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:01:39.961738 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 12:01:39.961755 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 12:01:39.961777 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 12:01:39.961791 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:01:39.961802 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:01:39.961822 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:01:39.961836 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:01:39.961848 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:01:39.961870 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:01:39.961885 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:01:39.961896 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:01:39.961919 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:01:39.961937 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:01:39.961948 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:01:39.961967 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:01:39.961990 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 29 12:01:39.962007 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:01:39.962018 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:01:39.962037 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:01:39.962064 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 12:01:39.962080 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 12:01:39.962097 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 12:01:39.962119 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:01:39.962130 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:01:39.962148 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:01:39.962169 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:01:39.962180 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:01:39.962194 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:01:39.962215 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:01:39.962242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:01:39.962253 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:01:39.962270 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:01:39.962294 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:01:39.962312 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:01:39.962326 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:01:39.962343 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:01:39.962364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:39.962376 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:01:39.962396 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:01:39.962419 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:01:39.962449 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:01:39.962461 systemd[1]: Reached target machines.target - Containers. Jan 29 12:01:39.962493 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:01:39.962509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:01:39.962520 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:01:39.962542 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:01:39.962557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:01:39.962567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:01:39.962591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:01:39.962608 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 29 12:01:39.962619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:01:39.962643 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:01:39.962661 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 12:01:39.962672 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 12:01:39.962694 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 12:01:39.962709 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 12:01:39.962723 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:01:39.962737 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:01:39.962752 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:01:39.962770 kernel: loop: module loaded Jan 29 12:01:39.962784 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:01:39.962800 kernel: fuse: init (API version 7.39) Jan 29 12:01:39.962811 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:01:39.965914 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 12:01:39.965956 systemd[1]: Stopped verity-setup.service. Jan 29 12:01:39.965974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:39.965992 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:01:39.966009 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:01:39.966033 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:01:39.966048 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:01:39.966064 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:01:39.966079 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:01:39.966095 kernel: ACPI: bus type drm_connector registered Jan 29 12:01:39.966148 systemd-journald[1283]: Collecting audit messages is disabled. Jan 29 12:01:39.966184 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:01:39.966200 systemd-journald[1283]: Journal started Jan 29 12:01:39.966232 systemd-journald[1283]: Runtime Journal (/run/log/journal/f4dc36ee9ecc458fb8209c439dbc8b57) is 8.0M, max 158.8M, 150.8M free. Jan 29 12:01:39.215116 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:01:39.296657 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 12:01:39.297102 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 12:01:39.973060 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:01:39.973824 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:01:39.977750 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:01:39.977928 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:01:39.981329 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:01:39.981556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:01:39.985173 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 29 12:01:39.985402 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:01:39.988844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:01:39.989031 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:01:39.992548 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:01:39.992743 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:01:39.995680 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:01:39.995851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:01:39.998993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:01:40.001987 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:01:40.005144 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:01:40.025926 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:01:40.036578 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:01:40.050549 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:01:40.054175 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:01:40.054243 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:01:40.061675 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:01:40.080723 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:01:40.087950 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:01:40.090786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:01:40.099702 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:01:40.103851 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:01:40.108568 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:01:40.112705 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:01:40.119713 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:01:40.121719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:01:40.128692 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:01:40.136104 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:01:40.143975 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:01:40.148821 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:01:40.152106 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:01:40.158915 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:01:40.175231 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jan 29 12:01:40.178782 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:01:40.185019 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:01:40.197701 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:01:40.203690 udevadm[1320]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 12:01:40.205782 systemd-journald[1283]: Time spent on flushing to /var/log/journal/f4dc36ee9ecc458fb8209c439dbc8b57 is 36.262ms for 967 entries. Jan 29 12:01:40.205782 systemd-journald[1283]: System Journal (/var/log/journal/f4dc36ee9ecc458fb8209c439dbc8b57) is 8.0M, max 2.6G, 2.6G free. Jan 29 12:01:40.322961 systemd-journald[1283]: Received client request to flush runtime journal. Jan 29 12:01:40.323163 kernel: loop0: detected capacity change from 0 to 142488 Jan 29 12:01:40.237516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:01:40.325094 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:01:40.370943 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:01:40.371719 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:01:40.418954 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Jan 29 12:01:40.418983 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Jan 29 12:01:40.425906 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:01:40.434750 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:01:40.703847 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:01:40.712744 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:01:40.735988 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Jan 29 12:01:40.736019 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Jan 29 12:01:40.743612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:01:40.790672 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:01:40.829503 kernel: loop1: detected capacity change from 0 to 140768 Jan 29 12:01:41.187496 kernel: loop2: detected capacity change from 0 to 205544 Jan 29 12:01:41.229503 kernel: loop3: detected capacity change from 0 to 31056 Jan 29 12:01:41.602281 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:01:41.611784 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:01:41.624488 kernel: loop4: detected capacity change from 0 to 142488 Jan 29 12:01:41.641573 kernel: loop5: detected capacity change from 0 to 140768 Jan 29 12:01:41.653366 systemd-udevd[1342]: Using default interface naming scheme 'v255'. Jan 29 12:01:41.656494 kernel: loop6: detected capacity change from 0 to 205544 Jan 29 12:01:41.665500 kernel: loop7: detected capacity change from 0 to 31056 Jan 29 12:01:41.669010 (sd-merge)[1343]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 29 12:01:41.669742 (sd-merge)[1343]: Merged extensions into '/usr'. Jan 29 12:01:41.674326 systemd[1]: Reloading requested from client PID 1313 ('systemd-sysext') (unit systemd-sysext.service)... 
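The (sd-merge) lines above are systemd-sysext merging extension images into the running system: it picks up the four images named in the log (including the kubernetes extension whose /etc/extensions/kubernetes.raw link was written during the Ignition files stage) and overlays their /usr trees onto /usr, after which systemd reloads its unit definitions. Schematically, using only paths that appear earlier in this log:

    /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
    (this image, together with containerd-flatcar, docker-flatcar, and oem-azure, is merged into an overlay mounted over /usr)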
Jan 29 12:01:41.674344 systemd[1]: Reloading... Jan 29 12:01:41.755523 zram_generator::config[1378]: No configuration found. Jan 29 12:01:41.893840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:41.994693 systemd[1]: Reloading finished in 319 ms. Jan 29 12:01:42.031060 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:01:42.043225 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:01:42.061970 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 12:01:42.072755 systemd[1]: Starting ensure-sysext.service... Jan 29 12:01:42.083966 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:01:42.097027 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:01:42.129604 kernel: hv_vmbus: registering driver hyperv_fb Jan 29 12:01:42.136357 kernel: hv_vmbus: registering driver hv_balloon Jan 29 12:01:42.138936 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 29 12:01:42.138991 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 29 12:01:42.152124 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 29 12:01:42.152279 kernel: Console: switching to colour dummy device 80x25 Jan 29 12:01:42.156333 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 12:01:42.166660 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:01:42.181634 systemd[1]: Reloading requested from client PID 1450 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:01:42.181812 systemd[1]: Reloading... Jan 29 12:01:42.284213 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:01:42.297938 systemd-tmpfiles[1453]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:01:42.299804 systemd-tmpfiles[1453]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:01:42.301344 systemd-tmpfiles[1453]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:01:42.301802 systemd-tmpfiles[1453]: ACLs are not supported, ignoring. Jan 29 12:01:42.301880 systemd-tmpfiles[1453]: ACLs are not supported, ignoring. Jan 29 12:01:42.337966 systemd-tmpfiles[1453]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:01:42.337982 systemd-tmpfiles[1453]: Skipping /boot Jan 29 12:01:42.415219 systemd-tmpfiles[1453]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:01:42.415236 systemd-tmpfiles[1453]: Skipping /boot Jan 29 12:01:42.500642 zram_generator::config[1502]: No configuration found. Jan 29 12:01:42.561861 systemd-networkd[1452]: lo: Link UP Jan 29 12:01:42.564521 systemd-networkd[1452]: lo: Gained carrier Jan 29 12:01:42.596792 systemd-networkd[1452]: Enumeration completed Jan 29 12:01:42.597451 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:42.597456 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
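The docker.socket warning is systemd rewriting a legacy /var/run path on the fly at each reload; it can be silenced permanently with a drop-in (the path below is the conventional drop-in location, not something from this log):

  mkdir -p /etc/systemd/system/docker.socket.d
  cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-path.conf
  [Socket]
  # An empty assignment clears the inherited list before setting the new path
  ListenStream=
  ListenStream=/run/docker.sock
  EOF
  systemctl daemon-reload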
Jan 29 12:01:42.685513 kernel: mlx5_core 4f22:00:02.0 enP20258s1: Link up Jan 29 12:01:42.708892 kernel: hv_netvsc 000d3ab8-a354-000d-3ab8-a354000d3ab8 eth0: Data path switched to VF: enP20258s1 Jan 29 12:01:42.717513 systemd-networkd[1452]: enP20258s1: Link UP Jan 29 12:01:42.717811 systemd-networkd[1452]: eth0: Link UP Jan 29 12:01:42.718359 systemd-networkd[1452]: eth0: Gained carrier Jan 29 12:01:42.718455 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:42.726142 systemd-networkd[1452]: enP20258s1: Gained carrier Jan 29 12:01:42.747497 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 29 12:01:42.757594 systemd-networkd[1452]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 12:01:42.790424 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1440) Jan 29 12:01:42.883868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:42.965223 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 29 12:01:42.969249 systemd[1]: Reloading finished in 786 ms. Jan 29 12:01:42.987068 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:01:42.992276 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:01:43.000038 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:01:43.042280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:43.047918 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:01:43.056672 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:01:43.060249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:01:43.063036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:01:43.068719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:01:43.078894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:01:43.082612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:01:43.088560 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:01:43.098876 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:01:43.108885 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:01:43.119994 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:01:43.125162 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:01:43.127160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:43.127412 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
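The hv_netvsc line records Azure accelerated networking switching the data path to the Mellanox VF enP20258s1, while the synthetic eth0 keeps the IP configuration (10.200.8.17/24, leased from the WireServer). The pairing can be confirmed on a running node with standard tooling (interface names taken from the log):

  # eth0 carries the address and the DHCP lease
  networkctl status eth0
  # enP20258s1 is the SR-IOV VF doing the actual packet I/O
  ip -d link show enP20258s1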
Jan 29 12:01:43.130373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:01:43.130837 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:01:43.131581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:01:43.131821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:01:43.132785 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:01:43.133024 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:01:43.137902 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:01:43.138132 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:01:43.151385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:43.152985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:01:43.162670 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:01:43.183987 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:01:43.210320 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:01:43.220904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:01:43.224206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:01:43.225082 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:01:43.231585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:43.234339 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:01:43.238697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:01:43.238922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:43.248817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:01:43.249023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:01:43.253836 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:01:43.254026 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:01:43.259636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:01:43.259849 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:01:43.263557 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:01:43.263735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:01:43.266453 systemd-resolved[1614]: Positive Trust Anchors: Jan 29 12:01:43.266482 systemd-resolved[1614]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:01:43.266527 systemd-resolved[1614]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:01:43.272165 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:01:43.283567 systemd[1]: Finished ensure-sysext.service. Jan 29 12:01:43.292733 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:01:43.297557 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:01:43.300600 augenrules[1636]: No rules Jan 29 12:01:43.301740 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:01:43.305125 systemd-resolved[1614]: Using system hostname 'ci-4081.3.0-a-56ab0c4267'. Jan 29 12:01:43.313713 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:01:43.317795 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:01:43.317848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:01:43.319043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:43.322941 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:01:43.327498 systemd[1]: Reached target network.target - Network. Jan 29 12:01:43.332920 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:01:43.434103 lvm[1648]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:01:43.479818 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:01:43.483487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:01:43.490684 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:01:43.500435 lvm[1653]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:01:43.529672 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:01:43.717911 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:01:43.721385 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:01:43.824870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:44.602719 systemd-networkd[1452]: eth0: Gained IPv6LL Jan 29 12:01:44.606174 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:01:44.609986 systemd[1]: Reached target network-online.target - Network is Online. 
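The "Positive Trust Anchors" entry is the root-zone DS record systemd-resolved compiles in for DNSSEC validation; the negative anchors exempt private and special-use zones from it. The built-in anchor can be overridden or extended with a .positive file; a sketch that simply repeats the root DS record from the log:

  cat <<'EOF' >/etc/dnssec-trust-anchors.d/root.positive
  . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
  EOF
  # Check per-link DNSSEC state afterwards
  resolvectl status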
Jan 29 12:01:44.730620 systemd-networkd[1452]: enP20258s1: Gained IPv6LL Jan 29 12:01:46.703660 ldconfig[1308]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:01:46.715278 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:01:46.724791 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:01:46.756847 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:01:46.759894 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:01:46.762732 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:01:46.765538 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:01:46.768654 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:01:46.771317 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:01:46.774138 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:01:46.777073 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:01:46.777131 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:01:46.779208 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:01:46.782129 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:01:46.786342 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:01:46.814377 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:01:46.818092 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:01:46.821154 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:01:46.824020 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:01:46.826773 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:01:46.826832 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:01:46.849602 systemd[1]: Starting chronyd.service - NTP client/server... Jan 29 12:01:46.853628 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:01:46.862627 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:01:46.867512 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:01:46.879083 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:01:46.882794 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:01:46.885258 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:01:46.885320 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 29 12:01:46.887047 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 29 12:01:46.890535 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
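sshd.socket and docker.socket above are socket-activated: systemd owns the listening socket and starts the matching service on the first connection. The wiring can be listed on a booted system:

  # Sockets systemd is listening on, with the units they activate
  systemctl list-sockets
  # What a single socket unit triggers
  systemctl show -p Triggers docker.socket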
Jan 29 12:01:46.892551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:46.905746 jq[1669]: false Jan 29 12:01:46.907642 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:01:46.920547 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:01:46.926552 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:01:46.941409 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:01:46.948670 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:01:46.956839 (chronyd)[1665]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 29 12:01:46.960427 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:01:46.963728 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:01:46.964397 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:01:46.967678 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:01:46.978675 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:01:46.989106 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:01:46.990554 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:01:46.991019 KVP[1673]: KVP starting; pid is:1673 Jan 29 12:01:46.993987 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:01:46.994234 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:01:47.014080 chronyd[1703]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 29 12:01:47.022132 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:01:47.022395 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:01:47.033967 jq[1690]: true Jan 29 12:01:47.035518 extend-filesystems[1670]: Found loop4 Jan 29 12:01:47.038635 extend-filesystems[1670]: Found loop5 Jan 29 12:01:47.038635 extend-filesystems[1670]: Found loop6 Jan 29 12:01:47.038635 extend-filesystems[1670]: Found loop7 Jan 29 12:01:47.038635 extend-filesystems[1670]: Found sda Jan 29 12:01:47.038635 extend-filesystems[1670]: Found sda1 Jan 29 12:01:47.038635 extend-filesystems[1670]: Found sda2 Jan 29 12:01:47.038635 extend-filesystems[1670]: Found sda3 Jan 29 12:01:47.038635 extend-filesystems[1670]: Found usr Jan 29 12:01:47.038635 extend-filesystems[1670]: Found sda4 Jan 29 12:01:47.038635 extend-filesystems[1670]: Found sda6 Jan 29 12:01:47.038635 extend-filesystems[1670]: Found sda7 Jan 29 12:01:47.038635 extend-filesystems[1670]: Found sda9 Jan 29 12:01:47.088558 extend-filesystems[1670]: Checking size of /dev/sda9 Jan 29 12:01:47.043811 chronyd[1703]: Timezone right/UTC failed leap second check, ignoring Jan 29 12:01:47.045849 systemd[1]: Started chronyd.service - NTP client/server. Jan 29 12:01:47.095813 jq[1710]: true Jan 29 12:01:47.044042 chronyd[1703]: Loaded seccomp filter (level 2) Jan 29 12:01:47.071205 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
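chronyd starts here with its seccomp filter loaded and the right/UTC timezone data rejected for leap-second use, which is normal on systems shipping plain UTC zoneinfo. Once it is up, synchronization state is queried over its control socket:

  chronyc tracking
  chronyc sources -v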
Jan 29 12:01:47.100064 (ntainerd)[1705]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:01:47.110652 update_engine[1685]: I20250129 12:01:47.110546 1685 main.cc:92] Flatcar Update Engine starting Jan 29 12:01:47.116391 dbus-daemon[1668]: [system] SELinux support is enabled Jan 29 12:01:47.117134 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:01:47.124789 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:01:47.124840 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:01:47.131297 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:01:47.131326 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:01:47.145393 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:01:47.150826 update_engine[1685]: I20250129 12:01:47.150755 1685 update_check_scheduler.cc:74] Next update check in 7m40s Jan 29 12:01:47.158840 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:01:47.161885 extend-filesystems[1670]: Old size kept for /dev/sda9 Jan 29 12:01:47.162784 extend-filesystems[1670]: Found sr0 Jan 29 12:01:47.166432 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:01:47.168272 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:01:47.175933 KVP[1673]: KVP LIC Version: 3.1 Jan 29 12:01:47.177373 kernel: hv_utils: KVP IC version 4.0 Jan 29 12:01:47.218208 systemd-logind[1684]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:01:47.219707 systemd-logind[1684]: New seat seat0. Jan 29 12:01:47.224684 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:01:47.259562 tar[1694]: linux-amd64/helm Jan 29 12:01:47.307227 bash[1738]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:01:47.310190 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:01:47.320564 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
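update_engine schedules its next check in 7m40s and locksmithd starts with the "reboot" strategy; on Flatcar both read the same configuration file. A hedged sketch (file path and keys as documented for Flatcar; the values are illustrative, not read from this machine):

  cat <<'EOF' >/etc/flatcar/update.conf
  GROUP=stable
  REBOOT_STRATEGY=reboot
  EOF
  systemctl restart update-engine locksmithd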
Jan 29 12:01:47.329498 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1751) Jan 29 12:01:47.378586 coreos-metadata[1667]: Jan 29 12:01:47.378 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 29 12:01:47.390367 coreos-metadata[1667]: Jan 29 12:01:47.390 INFO Fetch successful Jan 29 12:01:47.390367 coreos-metadata[1667]: Jan 29 12:01:47.390 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 29 12:01:47.396584 coreos-metadata[1667]: Jan 29 12:01:47.396 INFO Fetch successful Jan 29 12:01:47.397192 coreos-metadata[1667]: Jan 29 12:01:47.397 INFO Fetching http://168.63.129.16/machine/89587d44-52e1-4309-b664-5017085168df/72e1326e%2D8c19%2D4af8%2Db685%2D4f77be7cee77.%5Fci%2D4081.3.0%2Da%2D56ab0c4267?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 29 12:01:47.409297 coreos-metadata[1667]: Jan 29 12:01:47.409 INFO Fetch successful Jan 29 12:01:47.415910 coreos-metadata[1667]: Jan 29 12:01:47.415 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 29 12:01:47.431597 coreos-metadata[1667]: Jan 29 12:01:47.430 INFO Fetch successful Jan 29 12:01:47.485727 sshd_keygen[1713]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:01:47.521203 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:01:47.524748 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:01:47.529896 locksmithd[1725]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:01:47.570886 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:01:47.587613 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:01:47.601438 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 29 12:01:47.618193 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:01:47.619240 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:01:47.631841 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:01:47.667913 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:01:47.675779 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:01:47.688942 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:01:47.693041 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:01:47.723710 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 29 12:01:48.061795 tar[1694]: linux-amd64/LICENSE Jan 29 12:01:48.061795 tar[1694]: linux-amd64/README.md Jan 29 12:01:48.077852 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:01:48.268924 containerd[1705]: time="2025-01-29T12:01:48.268742600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:01:48.305694 containerd[1705]: time="2025-01-29T12:01:48.305452600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.307403200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
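coreos-metadata talks to the two Azure endpoints visible above: the WireServer at 168.63.129.16 (goal state and shared config) and the Instance Metadata Service at 169.254.169.254. The IMDS fetch is easy to reproduce by hand; the URL and API version below are exactly those from the log, and the Metadata header is mandatory:

  curl -s -H "Metadata: true" \
    "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"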
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.307448700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.307483300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.307680300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.307706300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.307783700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.307800800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.308025500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.308047500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.308069800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:01:48.308505 containerd[1705]: time="2025-01-29T12:01:48.308086400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:48.308961 containerd[1705]: time="2025-01-29T12:01:48.308214200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:48.309734 containerd[1705]: time="2025-01-29T12:01:48.309136700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:48.309734 containerd[1705]: time="2025-01-29T12:01:48.309324300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:01:48.309734 containerd[1705]: time="2025-01-29T12:01:48.309346900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:01:48.309734 containerd[1705]: time="2025-01-29T12:01:48.309491300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 12:01:48.309734 containerd[1705]: time="2025-01-29T12:01:48.309575700Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:01:48.322555 containerd[1705]: time="2025-01-29T12:01:48.322033200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:01:48.322555 containerd[1705]: time="2025-01-29T12:01:48.322107200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:01:48.322555 containerd[1705]: time="2025-01-29T12:01:48.322132000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:01:48.322555 containerd[1705]: time="2025-01-29T12:01:48.322155200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:01:48.322555 containerd[1705]: time="2025-01-29T12:01:48.322180300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:01:48.322555 containerd[1705]: time="2025-01-29T12:01:48.322379200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:01:48.322826 containerd[1705]: time="2025-01-29T12:01:48.322706600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:01:48.322916 containerd[1705]: time="2025-01-29T12:01:48.322856100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:01:48.322969 containerd[1705]: time="2025-01-29T12:01:48.322884000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:01:48.322969 containerd[1705]: time="2025-01-29T12:01:48.322933100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:01:48.322969 containerd[1705]: time="2025-01-29T12:01:48.322953300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:01:48.323084 containerd[1705]: time="2025-01-29T12:01:48.322974200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:01:48.323084 containerd[1705]: time="2025-01-29T12:01:48.322991400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:01:48.323084 containerd[1705]: time="2025-01-29T12:01:48.323012200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:01:48.323084 containerd[1705]: time="2025-01-29T12:01:48.323035000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:01:48.323084 containerd[1705]: time="2025-01-29T12:01:48.323054500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:01:48.323084 containerd[1705]: time="2025-01-29T12:01:48.323074800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:01:48.323274 containerd[1705]: time="2025-01-29T12:01:48.323092400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 29 12:01:48.323274 containerd[1705]: time="2025-01-29T12:01:48.323121800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323274 containerd[1705]: time="2025-01-29T12:01:48.323142700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323274 containerd[1705]: time="2025-01-29T12:01:48.323161700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323274 containerd[1705]: time="2025-01-29T12:01:48.323182000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323274 containerd[1705]: time="2025-01-29T12:01:48.323217200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323274 containerd[1705]: time="2025-01-29T12:01:48.323255200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323275200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323293900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323313100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323335200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323353400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323374100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323393700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323415500Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323443300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323462300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323522 containerd[1705]: time="2025-01-29T12:01:48.323494300Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:01:48.323879 containerd[1705]: time="2025-01-29T12:01:48.323551400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:01:48.323879 containerd[1705]: time="2025-01-29T12:01:48.323578000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:01:48.323879 containerd[1705]: time="2025-01-29T12:01:48.323594000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:01:48.323879 containerd[1705]: time="2025-01-29T12:01:48.323611600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:01:48.323879 containerd[1705]: time="2025-01-29T12:01:48.323626400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.323879 containerd[1705]: time="2025-01-29T12:01:48.323643900Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:01:48.323879 containerd[1705]: time="2025-01-29T12:01:48.323658200Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:01:48.323879 containerd[1705]: time="2025-01-29T12:01:48.323673300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:01:48.324414 containerd[1705]: time="2025-01-29T12:01:48.324087900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:01:48.324414 containerd[1705]: time="2025-01-29T12:01:48.324176200Z" level=info msg="Connect containerd service" Jan 29 12:01:48.324414 containerd[1705]: time="2025-01-29T12:01:48.324231100Z" level=info msg="using legacy CRI server" Jan 29 12:01:48.324414 containerd[1705]: time="2025-01-29T12:01:48.324241900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:01:48.324414 containerd[1705]: time="2025-01-29T12:01:48.324411100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:01:48.325968 containerd[1705]: time="2025-01-29T12:01:48.325413700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:01:48.325968 containerd[1705]: time="2025-01-29T12:01:48.325503800Z" level=info msg="Start subscribing containerd event" Jan 29 12:01:48.325968 containerd[1705]: time="2025-01-29T12:01:48.325564300Z" level=info msg="Start recovering state" Jan 29 12:01:48.325968 containerd[1705]: time="2025-01-29T12:01:48.325638900Z" level=info msg="Start event monitor" Jan 29 12:01:48.325968 containerd[1705]: time="2025-01-29T12:01:48.325661100Z" level=info msg="Start snapshots syncer" Jan 29 12:01:48.325968 containerd[1705]: time="2025-01-29T12:01:48.325673000Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:01:48.325968 containerd[1705]: time="2025-01-29T12:01:48.325682200Z" level=info msg="Start streaming server" Jan 29 12:01:48.326278 containerd[1705]: time="2025-01-29T12:01:48.326250600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:01:48.326383 containerd[1705]: time="2025-01-29T12:01:48.326352800Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:01:48.327308 containerd[1705]: time="2025-01-29T12:01:48.326492700Z" level=info msg="containerd successfully booted in 0.059592s" Jan 29 12:01:48.327240 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:01:48.524334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:48.528092 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:01:48.532807 systemd[1]: Startup finished in 834ms (firmware) + 25.513s (loader) + 962ms (kernel) + 11.086s (initrd) + 12.271s (userspace) = 50.667s. Jan 29 12:01:48.533259 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:01:48.916609 login[1810]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 29 12:01:48.941756 login[1809]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:01:48.956508 systemd-logind[1684]: New session 2 of user core. Jan 29 12:01:48.959065 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:01:48.965897 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:01:48.986578 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
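The earlier "no network config found in /etc/cni/net.d" error is expected on a node that has not yet joined a cluster; the CRI plugin keeps retrying until a CNI daemon or the administrator installs a conflist there. A minimal hypothetical example of the file shape it waits for (name, bridge, and subnet are placeholders, not from this log):

  cat <<'EOF' >/etc/cni/net.d/10-example.conflist
  {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF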
Jan 29 12:01:48.996966 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:01:49.003562 (systemd)[1841]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:01:49.198936 kubelet[1829]: E0129 12:01:49.198801 1829 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:01:49.201429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:01:49.201639 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:01:49.309409 systemd[1841]: Queued start job for default target default.target. Jan 29 12:01:49.315618 systemd[1841]: Created slice app.slice - User Application Slice. Jan 29 12:01:49.315657 systemd[1841]: Reached target paths.target - Paths. Jan 29 12:01:49.315677 systemd[1841]: Reached target timers.target - Timers. Jan 29 12:01:49.318621 systemd[1841]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:01:49.337825 systemd[1841]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:01:49.337987 systemd[1841]: Reached target sockets.target - Sockets. Jan 29 12:01:49.338008 systemd[1841]: Reached target basic.target - Basic System. Jan 29 12:01:49.338057 systemd[1841]: Reached target default.target - Main User Target. Jan 29 12:01:49.338096 systemd[1841]: Startup finished in 325ms. Jan 29 12:01:49.338209 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:01:49.349675 systemd[1]: Started session-2.scope - Session 2 of User core. 
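The kubelet exit above is likewise a pre-bootstrap condition rather than a packaging bug: the unit points at /var/lib/kubelet/config.yaml, which kubeadm normally writes during cluster join. A minimal hypothetical KubeletConfiguration that would satisfy the file check (values illustrative only):

  cat <<'EOF' >/var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # Matches SystemdCgroup=true in the containerd runc options logged above
  cgroupDriver: systemd
  EOF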
Jan 29 12:01:49.641377 waagent[1812]: 2025-01-29T12:01:49.641175Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 29 12:01:49.644098 waagent[1812]: 2025-01-29T12:01:49.644017Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 29 12:01:49.646179 waagent[1812]: 2025-01-29T12:01:49.646107Z INFO Daemon Daemon Python: 3.11.9 Jan 29 12:01:49.648215 waagent[1812]: 2025-01-29T12:01:49.648149Z INFO Daemon Daemon Run daemon Jan 29 12:01:49.650240 waagent[1812]: 2025-01-29T12:01:49.650190Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 29 12:01:49.654519 waagent[1812]: 2025-01-29T12:01:49.654208Z INFO Daemon Daemon Using waagent for provisioning Jan 29 12:01:49.656712 waagent[1812]: 2025-01-29T12:01:49.656663Z INFO Daemon Daemon Activate resource disk Jan 29 12:01:49.659125 waagent[1812]: 2025-01-29T12:01:49.659073Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 29 12:01:49.665876 waagent[1812]: 2025-01-29T12:01:49.665816Z INFO Daemon Daemon Found device: None Jan 29 12:01:49.668322 waagent[1812]: 2025-01-29T12:01:49.668266Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 29 12:01:49.669859 waagent[1812]: 2025-01-29T12:01:49.669809Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 29 12:01:49.677929 waagent[1812]: 2025-01-29T12:01:49.677865Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 12:01:49.680798 waagent[1812]: 2025-01-29T12:01:49.680743Z INFO Daemon Daemon Running default provisioning handler Jan 29 12:01:49.690680 waagent[1812]: 2025-01-29T12:01:49.690608Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 29 12:01:49.697013 waagent[1812]: 2025-01-29T12:01:49.696956Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 29 12:01:49.701340 waagent[1812]: 2025-01-29T12:01:49.701287Z INFO Daemon Daemon cloud-init is enabled: False Jan 29 12:01:49.701959 waagent[1812]: 2025-01-29T12:01:49.701909Z INFO Daemon Daemon Copying ovf-env.xml Jan 29 12:01:49.797839 waagent[1812]: 2025-01-29T12:01:49.794728Z INFO Daemon Daemon Successfully mounted dvd Jan 29 12:01:49.819510 waagent[1812]: 2025-01-29T12:01:49.809572Z INFO Daemon Daemon Detect protocol endpoint Jan 29 12:01:49.819510 waagent[1812]: 2025-01-29T12:01:49.809885Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 12:01:49.819510 waagent[1812]: 2025-01-29T12:01:49.810947Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 29 12:01:49.819510 waagent[1812]: 2025-01-29T12:01:49.811334Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 29 12:01:49.819510 waagent[1812]: 2025-01-29T12:01:49.812323Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 29 12:01:49.819510 waagent[1812]: 2025-01-29T12:01:49.813030Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 29 12:01:49.809754 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
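The daemon's endpoint detection boils down to a route check against 168.63.129.16, Azure's fixed WireServer address, followed by the same version probe coreos-metadata issued earlier. Both are easy to reproduce from a shell:

  ip route get 168.63.129.16
  curl -s "http://168.63.129.16/?comp=versions"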
Jan 29 12:01:49.837824 waagent[1812]: 2025-01-29T12:01:49.837759Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 29 12:01:49.844878 waagent[1812]: 2025-01-29T12:01:49.838338Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 29 12:01:49.844878 waagent[1812]: 2025-01-29T12:01:49.839059Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 29 12:01:49.919155 login[1810]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:01:49.924812 systemd-logind[1684]: New session 1 of user core. Jan 29 12:01:49.931062 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:01:50.089133 waagent[1812]: 2025-01-29T12:01:50.089012Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 29 12:01:50.097040 waagent[1812]: 2025-01-29T12:01:50.089370Z INFO Daemon Daemon Forcing an update of the goal state. Jan 29 12:01:50.097040 waagent[1812]: 2025-01-29T12:01:50.093808Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 12:01:50.108646 waagent[1812]: 2025-01-29T12:01:50.108592Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 29 12:01:50.122262 waagent[1812]: 2025-01-29T12:01:50.109409Z INFO Daemon Jan 29 12:01:50.122262 waagent[1812]: 2025-01-29T12:01:50.110238Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 56d3eec5-4932-453c-9178-19236d5207c4 eTag: 10237873263056152029 source: Fabric] Jan 29 12:01:50.122262 waagent[1812]: 2025-01-29T12:01:50.111242Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 29 12:01:50.122262 waagent[1812]: 2025-01-29T12:01:50.112206Z INFO Daemon Jan 29 12:01:50.122262 waagent[1812]: 2025-01-29T12:01:50.112837Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 29 12:01:50.125969 waagent[1812]: 2025-01-29T12:01:50.125921Z INFO Daemon Daemon Downloading artifacts profile blob Jan 29 12:01:50.209126 waagent[1812]: 2025-01-29T12:01:50.208965Z INFO Daemon Downloaded certificate {'thumbprint': '8800A0A5EA6674EE6BB1A29275DA1F8A3CC04B7F', 'hasPrivateKey': False} Jan 29 12:01:50.225930 waagent[1812]: 2025-01-29T12:01:50.209642Z INFO Daemon Downloaded certificate {'thumbprint': '0527D424005CB084807EB7909CF44E26E1684BE8', 'hasPrivateKey': True} Jan 29 12:01:50.225930 waagent[1812]: 2025-01-29T12:01:50.210145Z INFO Daemon Fetch goal state completed Jan 29 12:01:50.225930 waagent[1812]: 2025-01-29T12:01:50.218013Z INFO Daemon Daemon Starting provisioning Jan 29 12:01:50.225930 waagent[1812]: 2025-01-29T12:01:50.218574Z INFO Daemon Daemon Handle ovf-env.xml. Jan 29 12:01:50.225930 waagent[1812]: 2025-01-29T12:01:50.219227Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-56ab0c4267] Jan 29 12:01:50.234600 waagent[1812]: 2025-01-29T12:01:50.234532Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-56ab0c4267] Jan 29 12:01:50.237688 waagent[1812]: 2025-01-29T12:01:50.235061Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 29 12:01:50.237688 waagent[1812]: 2025-01-29T12:01:50.235802Z INFO Daemon Daemon Primary interface is [eth0] Jan 29 12:01:50.259830 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:50.259842 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 12:01:50.259898 systemd-networkd[1452]: eth0: DHCP lease lost Jan 29 12:01:50.261255 waagent[1812]: 2025-01-29T12:01:50.261144Z INFO Daemon Daemon Create user account if not exists Jan 29 12:01:50.275306 waagent[1812]: 2025-01-29T12:01:50.261578Z INFO Daemon Daemon User core already exists, skip useradd Jan 29 12:01:50.275306 waagent[1812]: 2025-01-29T12:01:50.262410Z INFO Daemon Daemon Configure sudoer Jan 29 12:01:50.275306 waagent[1812]: 2025-01-29T12:01:50.263421Z INFO Daemon Daemon Configure sshd Jan 29 12:01:50.275306 waagent[1812]: 2025-01-29T12:01:50.264445Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 29 12:01:50.275306 waagent[1812]: 2025-01-29T12:01:50.264963Z INFO Daemon Daemon Deploy ssh public key. Jan 29 12:01:50.277562 systemd-networkd[1452]: eth0: DHCPv6 lease lost Jan 29 12:01:50.318557 systemd-networkd[1452]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 12:01:51.380012 waagent[1812]: 2025-01-29T12:01:51.379938Z INFO Daemon Daemon Provisioning complete Jan 29 12:01:51.392812 waagent[1812]: 2025-01-29T12:01:51.392748Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 29 12:01:51.398981 waagent[1812]: 2025-01-29T12:01:51.393058Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 29 12:01:51.398981 waagent[1812]: 2025-01-29T12:01:51.393938Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 29 12:01:51.520925 waagent[1896]: 2025-01-29T12:01:51.520809Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 29 12:01:51.521344 waagent[1896]: 2025-01-29T12:01:51.520986Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 29 12:01:51.521344 waagent[1896]: 2025-01-29T12:01:51.521068Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 29 12:01:51.570302 waagent[1896]: 2025-01-29T12:01:51.570188Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 29 12:01:51.570743 waagent[1896]: 2025-01-29T12:01:51.570633Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 12:01:51.570846 waagent[1896]: 2025-01-29T12:01:51.570796Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 12:01:51.579585 waagent[1896]: 2025-01-29T12:01:51.579502Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 12:01:51.585291 waagent[1896]: 2025-01-29T12:01:51.585229Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 29 12:01:51.585766 waagent[1896]: 2025-01-29T12:01:51.585710Z INFO ExtHandler Jan 29 12:01:51.585851 waagent[1896]: 2025-01-29T12:01:51.585804Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e31c55e7-de15-49e0-9a21-08e7171a8139 eTag: 10237873263056152029 source: Fabric] Jan 29 12:01:51.586157 waagent[1896]: 2025-01-29T12:01:51.586111Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
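The agent logs only the intent of the sshd snippet it installs, not its contents. A sketch of an equivalent drop-in using standard OpenSSH options (the file name is hypothetical and the agent's actual snippet may differ):

  cat <<'EOF' >/etc/ssh/sshd_config.d/40-waagent.conf
  PasswordAuthentication no
  ChallengeResponseAuthentication no
  # "SSH client probing to keep connections alive", per the log message
  ClientAliveInterval 180
  EOF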
Jan 29 12:01:51.586741 waagent[1896]: 2025-01-29T12:01:51.586685Z INFO ExtHandler Jan 29 12:01:51.586806 waagent[1896]: 2025-01-29T12:01:51.586770Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 29 12:01:51.590261 waagent[1896]: 2025-01-29T12:01:51.590211Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 29 12:01:51.669179 waagent[1896]: 2025-01-29T12:01:51.669019Z INFO ExtHandler Downloaded certificate {'thumbprint': '8800A0A5EA6674EE6BB1A29275DA1F8A3CC04B7F', 'hasPrivateKey': False} Jan 29 12:01:51.669627 waagent[1896]: 2025-01-29T12:01:51.669559Z INFO ExtHandler Downloaded certificate {'thumbprint': '0527D424005CB084807EB7909CF44E26E1684BE8', 'hasPrivateKey': True} Jan 29 12:01:51.670086 waagent[1896]: 2025-01-29T12:01:51.670028Z INFO ExtHandler Fetch goal state completed Jan 29 12:01:51.686061 waagent[1896]: 2025-01-29T12:01:51.685984Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1896 Jan 29 12:01:51.686227 waagent[1896]: 2025-01-29T12:01:51.686176Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 29 12:01:51.687846 waagent[1896]: 2025-01-29T12:01:51.687785Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 29 12:01:51.688220 waagent[1896]: 2025-01-29T12:01:51.688171Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 29 12:01:51.720082 waagent[1896]: 2025-01-29T12:01:51.720026Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 29 12:01:51.720355 waagent[1896]: 2025-01-29T12:01:51.720299Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 29 12:01:51.727407 waagent[1896]: 2025-01-29T12:01:51.727358Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 29 12:01:51.734386 systemd[1]: Reloading requested from client PID 1911 ('systemctl') (unit waagent.service)... Jan 29 12:01:51.734403 systemd[1]: Reloading... Jan 29 12:01:51.834560 zram_generator::config[1951]: No configuration found. Jan 29 12:01:51.946581 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:52.027046 systemd[1]: Reloading finished in 292 ms. Jan 29 12:01:52.054525 waagent[1896]: 2025-01-29T12:01:52.052428Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 29 12:01:52.061778 systemd[1]: Reloading requested from client PID 2001 ('systemctl') (unit waagent.service)... Jan 29 12:01:52.061795 systemd[1]: Reloading... Jan 29 12:01:52.140525 zram_generator::config[2032]: No configuration found. Jan 29 12:01:52.273728 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:52.355602 systemd[1]: Reloading finished in 293 ms. 
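The AutoUpdate message reflects a key in the agent's own configuration; the documented waagent.conf switch can be checked directly (the value is inferred from the log line, the path is the stock location):

  grep -E '^AutoUpdate' /etc/waagent.conf
  # Expected on this host: AutoUpdate.Enabled=n
  #   -> the agent keeps running the packaged 2.9.1.1 instead of self-updating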
Jan 29 12:01:52.383493 waagent[1896]: 2025-01-29T12:01:52.381893Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 29 12:01:52.383493 waagent[1896]: 2025-01-29T12:01:52.382134Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 29 12:01:53.393049 waagent[1896]: 2025-01-29T12:01:53.392935Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 29 12:01:53.393986 waagent[1896]: 2025-01-29T12:01:53.393911Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 29 12:01:53.396914 waagent[1896]: 2025-01-29T12:01:53.396847Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 29 12:01:53.396989 waagent[1896]: 2025-01-29T12:01:53.396923Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 12:01:53.397091 waagent[1896]: 2025-01-29T12:01:53.397031Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 12:01:53.397545 waagent[1896]: 2025-01-29T12:01:53.397481Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 29 12:01:53.397677 waagent[1896]: 2025-01-29T12:01:53.397625Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 12:01:53.397914 waagent[1896]: 2025-01-29T12:01:53.397862Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 29 12:01:53.398270 waagent[1896]: 2025-01-29T12:01:53.398203Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 29 12:01:53.398530 waagent[1896]: 2025-01-29T12:01:53.398460Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 12:01:53.398736 waagent[1896]: 2025-01-29T12:01:53.398680Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 29 12:01:53.398956 waagent[1896]: 2025-01-29T12:01:53.398910Z INFO EnvHandler ExtHandler Configure routes Jan 29 12:01:53.399089 waagent[1896]: 2025-01-29T12:01:53.399044Z INFO EnvHandler ExtHandler Gateway:None Jan 29 12:01:53.399158 waagent[1896]: 2025-01-29T12:01:53.399113Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 29 12:01:53.399158 waagent[1896]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 29 12:01:53.399158 waagent[1896]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 29 12:01:53.399158 waagent[1896]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 29 12:01:53.399158 waagent[1896]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 29 12:01:53.399158 waagent[1896]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 29 12:01:53.399158 waagent[1896]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 29 12:01:53.399788 waagent[1896]: 2025-01-29T12:01:53.399544Z INFO EnvHandler ExtHandler Routes:None Jan 29 12:01:53.400344 waagent[1896]: 2025-01-29T12:01:53.400183Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 29 12:01:53.400344 waagent[1896]: 2025-01-29T12:01:53.400299Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 29 12:01:53.400547 waagent[1896]: 2025-01-29T12:01:53.400493Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
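
The routing table dump above is raw /proc/net/route, so destinations, gateways and masks are little-endian hex: 0108C80A is 10.200.8.1 (the default gateway), 10813FA8 is 168.63.129.16 (the WireServer host route), and FEA9FEA9 is 169.254.169.254. A short decoder for that format:

    # Sketch: decode the hex fields in the /proc/net/route dump logged above.
    # Addresses are little-endian IPv4, so 0108C80A unpacks to 10.200.8.1.
    import socket
    import struct

    def ntoa(hexfield: str) -> str:
        return socket.inet_ntoa(struct.pack("<L", int(hexfield, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the Iface/Destination/Gateway/... header line
        for line in f:
            fields = line.split()
            iface, dest, gw, mask = fields[0], fields[1], fields[2], fields[7]
            print(f"{iface}: {ntoa(dest)}/{ntoa(mask)} via {ntoa(gw)}")
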
Jan 29 12:01:53.406930 waagent[1896]: 2025-01-29T12:01:53.406879Z INFO ExtHandler ExtHandler Jan 29 12:01:53.407045 waagent[1896]: 2025-01-29T12:01:53.407007Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 975ae930-7109-4a29-ad97-d7698e9a443b correlation 9bae1a17-f93a-4fa3-bbe7-2e85b3cbaef4 created: 2025-01-29T12:00:46.580124Z] Jan 29 12:01:53.407423 waagent[1896]: 2025-01-29T12:01:53.407376Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 29 12:01:53.408001 waagent[1896]: 2025-01-29T12:01:53.407955Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 29 12:01:53.443889 waagent[1896]: 2025-01-29T12:01:53.443708Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: BAD5D6DD-256A-4F2C-AD37-4E140E0E218F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 29 12:01:53.522546 waagent[1896]: 2025-01-29T12:01:53.522443Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 29 12:01:53.522546 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:01:53.522546 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:01:53.522546 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:01:53.522546 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:01:53.522546 waagent[1896]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:01:53.522546 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:01:53.522546 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 29 12:01:53.522546 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 29 12:01:53.522546 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 29 12:01:53.525963 waagent[1896]: 2025-01-29T12:01:53.525898Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 29 12:01:53.525963 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:01:53.525963 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:01:53.525963 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:01:53.525963 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:01:53.525963 waagent[1896]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:01:53.525963 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:01:53.525963 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 29 12:01:53.525963 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 29 12:01:53.525963 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 29 12:01:53.526362 waagent[1896]: 2025-01-29T12:01:53.526218Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 29 12:01:53.538809 waagent[1896]: 2025-01-29T12:01:53.538739Z INFO MonitorHandler ExtHandler Network interfaces: Jan 29 12:01:53.538809 waagent[1896]: Executing ['ip', '-a', '-o', 'link']: Jan 29 12:01:53.538809 waagent[1896]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 29 12:01:53.538809 waagent[1896]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 
1000\ link/ether 00:0d:3a:b8:a3:54 brd ff:ff:ff:ff:ff:ff Jan 29 12:01:53.538809 waagent[1896]: 3: enP20258s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b8:a3:54 brd ff:ff:ff:ff:ff:ff\ altname enP20258p0s2 Jan 29 12:01:53.538809 waagent[1896]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 29 12:01:53.538809 waagent[1896]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 29 12:01:53.538809 waagent[1896]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 29 12:01:53.538809 waagent[1896]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 29 12:01:53.538809 waagent[1896]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 29 12:01:53.538809 waagent[1896]: 2: eth0 inet6 fe80::20d:3aff:feb8:a354/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 29 12:01:53.538809 waagent[1896]: 3: enP20258s1 inet6 fe80::20d:3aff:feb8:a354/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 29 12:01:59.453334 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:01:59.458735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:59.581867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:59.594946 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:00.122311 kubelet[2132]: E0129 12:02:00.122249 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:00.126018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:00.126216 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:10.376872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:02:10.388735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:10.495237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:10.506834 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:10.543728 kubelet[2147]: E0129 12:02:10.543655 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:10.546324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:10.546552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:10.836454 chronyd[1703]: Selected source PHC0 Jan 29 12:02:20.797128 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 12:02:20.802750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
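
The kubelet restart loop above keeps failing for one reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is written by kubeadm init or join, so these exits are expected until bootstrap runs, and systemd keeps rescheduling the unit roughly every ten seconds in the meantime. The failing precondition, reduced to a sketch:

    # Sketch: the precondition the failing kubelet keeps hitting. kubeadm
    # init/join writes this file; until then the kubelet exits and systemd
    # restarts it on its scheduled-restart timer (~10 s apart in the log).
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if not CONFIG.exists():
        raise SystemExit(f"open {CONFIG}: no such file or directory")  # cf. run.go:72 above
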
Jan 29 12:02:20.903056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:20.908024 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:21.499587 kubelet[2161]: E0129 12:02:21.459754 2161 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:21.462111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:21.462317 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:24.297734 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:02:24.306779 systemd[1]: Started sshd@0-10.200.8.17:22-10.200.16.10:59402.service - OpenSSH per-connection server daemon (10.200.16.10:59402). Jan 29 12:02:25.034022 sshd[2168]: Accepted publickey for core from 10.200.16.10 port 59402 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:25.036071 sshd[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:25.040144 systemd-logind[1684]: New session 3 of user core. Jan 29 12:02:25.050682 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:02:25.608579 systemd[1]: Started sshd@1-10.200.8.17:22-10.200.16.10:59418.service - OpenSSH per-connection server daemon (10.200.16.10:59418). Jan 29 12:02:26.257303 sshd[2173]: Accepted publickey for core from 10.200.16.10 port 59418 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:26.259072 sshd[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:26.264766 systemd-logind[1684]: New session 4 of user core. Jan 29 12:02:26.270671 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:02:26.719560 sshd[2173]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:26.723913 systemd[1]: sshd@1-10.200.8.17:22-10.200.16.10:59418.service: Deactivated successfully. Jan 29 12:02:26.725908 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:02:26.726650 systemd-logind[1684]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:02:26.727723 systemd-logind[1684]: Removed session 4. Jan 29 12:02:26.834536 systemd[1]: Started sshd@2-10.200.8.17:22-10.200.16.10:42308.service - OpenSSH per-connection server daemon (10.200.16.10:42308). Jan 29 12:02:27.485505 sshd[2180]: Accepted publickey for core from 10.200.16.10 port 42308 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:27.487352 sshd[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:27.492297 systemd-logind[1684]: New session 5 of user core. Jan 29 12:02:27.506660 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:02:27.943436 sshd[2180]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:27.948049 systemd[1]: sshd@2-10.200.8.17:22-10.200.16.10:42308.service: Deactivated successfully. Jan 29 12:02:27.950203 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:02:27.951136 systemd-logind[1684]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:02:27.952279 systemd-logind[1684]: Removed session 5. 
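
Each accepted login above is tagged with the client key's fingerprint, RSA SHA256:M2tl2mAl.... OpenSSH derives that string by hashing the raw base64 key blob with SHA-256 and printing unpadded base64. A sketch that reproduces it from an authorized_keys-style line (the key text in the comment is a hypothetical example):

    # Sketch: recompute the "SHA256:..." fingerprint sshd logs for each
    # accepted key. OpenSSH hashes the decoded key blob with SHA-256 and
    # emits unpadded base64.
    import base64
    import hashlib

    def openssh_fingerprint(authorized_key_line: str) -> str:
        blob = base64.b64decode(authorized_key_line.split()[1])  # the base64 field
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # openssh_fingerprint("ssh-rsa AAAAB3Nza... core")  # -> "SHA256:M2tl2mAlrX1T..."
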
Jan 29 12:02:28.058607 systemd[1]: Started sshd@3-10.200.8.17:22-10.200.16.10:42316.service - OpenSSH per-connection server daemon (10.200.16.10:42316). Jan 29 12:02:28.709851 sshd[2187]: Accepted publickey for core from 10.200.16.10 port 42316 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:28.711759 sshd[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:28.716983 systemd-logind[1684]: New session 6 of user core. Jan 29 12:02:28.723679 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:02:29.176090 sshd[2187]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:29.179589 systemd[1]: sshd@3-10.200.8.17:22-10.200.16.10:42316.service: Deactivated successfully. Jan 29 12:02:29.181798 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:02:29.183410 systemd-logind[1684]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:02:29.184425 systemd-logind[1684]: Removed session 6. Jan 29 12:02:29.293491 systemd[1]: Started sshd@4-10.200.8.17:22-10.200.16.10:42324.service - OpenSSH per-connection server daemon (10.200.16.10:42324). Jan 29 12:02:29.943116 sshd[2194]: Accepted publickey for core from 10.200.16.10 port 42324 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:29.944966 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:29.950713 systemd-logind[1684]: New session 7 of user core. Jan 29 12:02:29.960633 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:02:30.276788 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 29 12:02:30.419631 sudo[2197]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:02:30.420112 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:30.451071 sudo[2197]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:30.560268 sshd[2194]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:30.563620 systemd[1]: sshd@4-10.200.8.17:22-10.200.16.10:42324.service: Deactivated successfully. Jan 29 12:02:30.565865 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:02:30.567288 systemd-logind[1684]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:02:30.568540 systemd-logind[1684]: Removed session 7. Jan 29 12:02:30.678813 systemd[1]: Started sshd@5-10.200.8.17:22-10.200.16.10:42338.service - OpenSSH per-connection server daemon (10.200.16.10:42338). Jan 29 12:02:31.323789 sshd[2202]: Accepted publickey for core from 10.200.16.10 port 42338 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:31.325430 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:31.329651 systemd-logind[1684]: New session 8 of user core. Jan 29 12:02:31.336664 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:02:31.613300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 12:02:31.618742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 12:02:31.683609 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:02:31.684050 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:31.690411 sudo[2209]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:31.698684 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:02:31.699104 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:31.720674 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:02:31.729275 auditctl[2214]: No rules Jan 29 12:02:31.730527 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:02:31.731835 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:02:31.733226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:31.741067 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:02:31.742601 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:31.775981 augenrules[2241]: No rules Jan 29 12:02:31.777634 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:02:31.779323 sudo[2208]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:31.885091 sshd[2202]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:31.889119 systemd[1]: sshd@5-10.200.8.17:22-10.200.16.10:42338.service: Deactivated successfully. Jan 29 12:02:31.891257 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:02:31.892841 systemd-logind[1684]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:02:31.893855 systemd-logind[1684]: Removed session 8. Jan 29 12:02:32.006119 systemd[1]: Started sshd@6-10.200.8.17:22-10.200.16.10:42340.service - OpenSSH per-connection server daemon (10.200.16.10:42340). Jan 29 12:02:32.214392 update_engine[1685]: I20250129 12:02:32.214174 1685 update_attempter.cc:509] Updating boot flags... Jan 29 12:02:32.284780 kubelet[2220]: E0129 12:02:32.284634 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:32.287131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:32.287454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:32.315927 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2264) Jan 29 12:02:32.427628 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2268) Jan 29 12:02:32.651088 sshd[2249]: Accepted publickey for core from 10.200.16.10 port 42340 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:32.652958 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:32.658733 systemd-logind[1684]: New session 9 of user core. Jan 29 12:02:32.668617 systemd[1]: Started session-9.scope - Session 9 of User core. 
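
Earlier in this session the two sudo commands removed the shipped audit rule files and restarted audit-rules, after which both auditctl and augenrules report "No rules", i.e. an empty loaded ruleset. The same flush-and-verify cycle, sketched with the real auditctl flags (-D deletes all loaded rules, -l lists them; requires root):

    # Sketch: the flush-and-verify cycle behind the "No rules" messages.
    # auditctl -D drops every loaded rule; auditctl -l lists the (now empty) set.
    import subprocess

    subprocess.run(["auditctl", "-D"], check=True)
    out = subprocess.run(["auditctl", "-l"], check=True,
                         capture_output=True, text=True).stdout
    print(out.strip() or "No rules")
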
Jan 29 12:02:33.009050 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:02:33.009730 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:34.320819 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:02:34.323299 (dockerd)[2335]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:02:36.034705 dockerd[2335]: time="2025-01-29T12:02:36.034635688Z" level=info msg="Starting up" Jan 29 12:02:36.445646 dockerd[2335]: time="2025-01-29T12:02:36.445578035Z" level=info msg="Loading containers: start." Jan 29 12:02:36.614536 kernel: Initializing XFRM netlink socket Jan 29 12:02:36.744968 systemd-networkd[1452]: docker0: Link UP Jan 29 12:02:36.774948 dockerd[2335]: time="2025-01-29T12:02:36.774895946Z" level=info msg="Loading containers: done." Jan 29 12:02:36.852958 dockerd[2335]: time="2025-01-29T12:02:36.852890286Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:02:36.853495 dockerd[2335]: time="2025-01-29T12:02:36.853267396Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:02:36.853790 dockerd[2335]: time="2025-01-29T12:02:36.853752209Z" level=info msg="Daemon has completed initialization" Jan 29 12:02:36.919636 dockerd[2335]: time="2025-01-29T12:02:36.919491228Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:02:36.920103 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:02:38.024136 containerd[1705]: time="2025-01-29T12:02:38.024084013Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 12:02:38.726546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1092487807.mount: Deactivated successfully. 
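
"API listen on /run/docker.sock" above means the daemon now serves its HTTP API over that unix socket. A stdlib-only sketch of a raw version query against it; ordinarily the docker CLI or SDK does this framing for you, and the crude body extraction here is illustrative only:

    # Sketch: talk to the Docker API on the unix socket the daemon announced.
    # Plain HTTP/1.1 over AF_UNIX with only the standard library.
    import json
    import socket

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /version HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
        raw = b""
        while chunk := s.recv(4096):
            raw += chunk

    body = raw.split(b"\r\n\r\n", 1)[1]
    # The body may be chunked; pull out the JSON object crudely for the demo.
    print(json.loads(body[body.find(b"{"): body.rfind(b"}") + 1])["Version"])
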
Jan 29 12:02:40.493841 containerd[1705]: time="2025-01-29T12:02:40.493779297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:40.500224 containerd[1705]: time="2025-01-29T12:02:40.500153463Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976729" Jan 29 12:02:40.506024 containerd[1705]: time="2025-01-29T12:02:40.505950315Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:40.509973 containerd[1705]: time="2025-01-29T12:02:40.509902118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:40.511153 containerd[1705]: time="2025-01-29T12:02:40.510937745Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.486810131s" Jan 29 12:02:40.511153 containerd[1705]: time="2025-01-29T12:02:40.510985946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 12:02:40.512746 containerd[1705]: time="2025-01-29T12:02:40.512714592Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 12:02:42.110833 containerd[1705]: time="2025-01-29T12:02:42.110769681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:42.113798 containerd[1705]: time="2025-01-29T12:02:42.113730859Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701151" Jan 29 12:02:42.117818 containerd[1705]: time="2025-01-29T12:02:42.117761764Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:42.123921 containerd[1705]: time="2025-01-29T12:02:42.123881924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:42.125053 containerd[1705]: time="2025-01-29T12:02:42.124879250Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.612127858s" Jan 29 12:02:42.125053 containerd[1705]: time="2025-01-29T12:02:42.124922151Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 12:02:42.125888 
containerd[1705]: time="2025-01-29T12:02:42.125828175Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 12:02:42.363423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 12:02:42.369042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:43.035640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:43.047826 (kubelet)[2538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:43.128461 kubelet[2538]: E0129 12:02:43.128374 2538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:43.130794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:43.131019 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:44.168938 containerd[1705]: time="2025-01-29T12:02:44.168875643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:44.171795 containerd[1705]: time="2025-01-29T12:02:44.171729011Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652061" Jan 29 12:02:44.177053 containerd[1705]: time="2025-01-29T12:02:44.176988337Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:44.182599 containerd[1705]: time="2025-01-29T12:02:44.182549570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:44.183775 containerd[1705]: time="2025-01-29T12:02:44.183616995Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 2.057729519s" Jan 29 12:02:44.183775 containerd[1705]: time="2025-01-29T12:02:44.183662997Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 12:02:44.184434 containerd[1705]: time="2025-01-29T12:02:44.184401814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 12:02:45.371287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount427066443.mount: Deactivated successfully. 
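
Each containerd "Pulled image ... size ... in ...s" line pairs a byte count with a wall-clock duration, so effective pull throughput falls out directly; the kube-apiserver pull above, 27973521 bytes in 2.486810131 s, works out to roughly 11 MB/s. The arithmetic, over a line in this log's format:

    # Sketch: derive pull throughput from the containerd "Pulled image" lines.
    # The regex matches the size/duration fields as printed in this log.
    import re

    line = 'Pulled image "registry.k8s.io/kube-apiserver:v1.31.5" ... size "27973521" in 2.486810131s'
    m = re.search(r'size "(\d+)" in ([\d.]+)s', line)
    size, secs = int(m.group(1)), float(m.group(2))
    print(f"{size / secs / 1e6:.1f} MB/s")  # ~11.2 MB/s
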
Jan 29 12:02:45.872004 containerd[1705]: time="2025-01-29T12:02:45.871938288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:45.876808 containerd[1705]: time="2025-01-29T12:02:45.876727303Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136" Jan 29 12:02:45.879419 containerd[1705]: time="2025-01-29T12:02:45.879354765Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:45.883284 containerd[1705]: time="2025-01-29T12:02:45.883216358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:45.884288 containerd[1705]: time="2025-01-29T12:02:45.883826572Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.699381657s" Jan 29 12:02:45.884288 containerd[1705]: time="2025-01-29T12:02:45.883867573Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 12:02:45.884461 containerd[1705]: time="2025-01-29T12:02:45.884393286Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:02:46.614721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714961978.mount: Deactivated successfully. 
Jan 29 12:02:47.855121 containerd[1705]: time="2025-01-29T12:02:47.855061334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:47.859096 containerd[1705]: time="2025-01-29T12:02:47.859035229Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 29 12:02:47.865021 containerd[1705]: time="2025-01-29T12:02:47.864760866Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:47.871056 containerd[1705]: time="2025-01-29T12:02:47.870989415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:47.872229 containerd[1705]: time="2025-01-29T12:02:47.872053640Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.987627254s" Jan 29 12:02:47.872229 containerd[1705]: time="2025-01-29T12:02:47.872095341Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:02:47.873168 containerd[1705]: time="2025-01-29T12:02:47.872816458Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 12:02:48.491152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539908228.mount: Deactivated successfully. 
Jan 29 12:02:48.519098 containerd[1705]: time="2025-01-29T12:02:48.519032519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:48.522053 containerd[1705]: time="2025-01-29T12:02:48.521975589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 29 12:02:48.526451 containerd[1705]: time="2025-01-29T12:02:48.526382895Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:48.532594 containerd[1705]: time="2025-01-29T12:02:48.532519342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:48.533424 containerd[1705]: time="2025-01-29T12:02:48.533254259Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 660.3993ms" Jan 29 12:02:48.533424 containerd[1705]: time="2025-01-29T12:02:48.533299160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 12:02:48.534236 containerd[1705]: time="2025-01-29T12:02:48.534198782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 12:02:49.201090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715827615.mount: Deactivated successfully. Jan 29 12:02:51.530585 containerd[1705]: time="2025-01-29T12:02:51.530524502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:51.534230 containerd[1705]: time="2025-01-29T12:02:51.534152384Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Jan 29 12:02:51.538007 containerd[1705]: time="2025-01-29T12:02:51.537940468Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:51.547840 containerd[1705]: time="2025-01-29T12:02:51.547752988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:51.549129 containerd[1705]: time="2025-01-29T12:02:51.548947115Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.014706733s" Jan 29 12:02:51.549129 containerd[1705]: time="2025-01-29T12:02:51.548995316Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 12:02:53.363211 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Jan 29 12:02:53.374596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:53.504901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:53.519035 (kubelet)[2686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:54.050633 kubelet[2686]: E0129 12:02:54.050550 2686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:54.054536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:54.054874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:54.730673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:54.739881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:54.773799 systemd[1]: Reloading requested from client PID 2702 ('systemctl') (unit session-9.scope)... Jan 29 12:02:54.773817 systemd[1]: Reloading... Jan 29 12:02:54.884737 zram_generator::config[2741]: No configuration found. Jan 29 12:02:55.030695 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:02:55.124868 systemd[1]: Reloading finished in 350 ms. Jan 29 12:02:55.172790 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:02:55.172928 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:02:55.173226 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:55.177963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:55.451837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:55.458498 (kubelet)[2809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:02:55.495504 kubelet[2809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:02:55.495504 kubelet[2809]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:02:55.495504 kubelet[2809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
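
The three deprecation warnings above all point the same way: --container-runtime-endpoint and --volume-plugin-dir belong in the KubeletConfiguration file, while the sandbox image now comes from the CRI runtime rather than --pod-infra-container-image. A sketch of the equivalent config-file fields, assuming the containerd socket and the Flexvolume path this kubelet logs a little later:

    # Sketch: fold the deprecated kubelet flags into the config file the
    # warnings point at. containerRuntimeEndpoint and volumePluginDir are
    # real kubelet.config.k8s.io/v1beta1 fields; the sandbox image instead
    # moves to the CRI (containerd) config, as the second warning notes.
    config_yaml = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    """
    # A real node would write this to /var/lib/kubelet/config.yaml and drop
    # the matching flags from the unit's args; shown here as a string only.
    print(config_yaml)
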
Jan 29 12:02:56.095680 kubelet[2809]: I0129 12:02:56.095251 2809 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:02:56.432431 kubelet[2809]: I0129 12:02:56.432381 2809 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 12:02:56.432431 kubelet[2809]: I0129 12:02:56.432415 2809 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:02:56.432779 kubelet[2809]: I0129 12:02:56.432757 2809 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 12:02:56.454420 kubelet[2809]: I0129 12:02:56.454251 2809 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:02:56.454717 kubelet[2809]: E0129 12:02:56.454625 2809 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:02:56.463408 kubelet[2809]: E0129 12:02:56.463357 2809 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:02:56.463408 kubelet[2809]: I0129 12:02:56.463399 2809 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:02:56.468206 kubelet[2809]: I0129 12:02:56.468179 2809 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:02:56.468346 kubelet[2809]: I0129 12:02:56.468307 2809 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 12:02:56.468536 kubelet[2809]: I0129 12:02:56.468490 2809 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:02:56.468731 kubelet[2809]: I0129 12:02:56.468527 2809 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-56ab0c4267","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:02:56.468885 kubelet[2809]: I0129 12:02:56.468745 2809 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:02:56.468885 kubelet[2809]: I0129 12:02:56.468757 2809 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 12:02:56.468967 kubelet[2809]: I0129 12:02:56.468917 2809 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:02:56.471550 kubelet[2809]: I0129 12:02:56.471268 2809 kubelet.go:408] "Attempting to sync node with API server" Jan 29 12:02:56.471550 kubelet[2809]: I0129 12:02:56.471299 2809 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:02:56.471550 kubelet[2809]: I0129 12:02:56.471340 2809 kubelet.go:314] "Adding apiserver pod source" Jan 29 12:02:56.471550 kubelet[2809]: I0129 12:02:56.471360 2809 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:02:56.478490 kubelet[2809]: W0129 12:02:56.477627 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-56ab0c4267&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Jan 29 12:02:56.478490 kubelet[2809]: E0129 12:02:56.477701 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-56ab0c4267&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:02:56.478490 kubelet[2809]: W0129 12:02:56.478169 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Jan 29 12:02:56.478490 kubelet[2809]: E0129 12:02:56.478226 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:02:56.478490 kubelet[2809]: I0129 12:02:56.478339 2809 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:02:56.480316 kubelet[2809]: I0129 12:02:56.480283 2809 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:02:56.481255 kubelet[2809]: W0129 12:02:56.481221 2809 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:02:56.482863 kubelet[2809]: I0129 12:02:56.482711 2809 server.go:1269] "Started kubelet" Jan 29 12:02:56.485012 kubelet[2809]: I0129 12:02:56.484832 2809 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:02:56.486177 kubelet[2809]: I0129 12:02:56.485957 2809 server.go:460] "Adding debug handlers to kubelet server" Jan 29 12:02:56.488728 kubelet[2809]: I0129 12:02:56.488512 2809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:02:56.489517 kubelet[2809]: I0129 12:02:56.488981 2809 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:02:56.489517 kubelet[2809]: I0129 12:02:56.489232 2809 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:02:56.493487 kubelet[2809]: E0129 12:02:56.489413 2809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-56ab0c4267.181f283204d28d5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-56ab0c4267,UID:ci-4081.3.0-a-56ab0c4267,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-56ab0c4267,},FirstTimestamp:2025-01-29 12:02:56.482684251 +0000 UTC m=+1.019988553,LastTimestamp:2025-01-29 12:02:56.482684251 +0000 UTC m=+1.019988553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-56ab0c4267,}" Jan 29 12:02:56.493487 kubelet[2809]: I0129 12:02:56.493115 2809 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:02:56.495588 kubelet[2809]: I0129 12:02:56.495567 2809 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 12:02:56.495942 kubelet[2809]: E0129 12:02:56.495813 2809 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-56ab0c4267\" not found" Jan 29 12:02:56.498099 kubelet[2809]: I0129 12:02:56.498078 2809 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:02:56.498322 kubelet[2809]: I0129 12:02:56.498299 2809 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:02:56.499430 kubelet[2809]: I0129 12:02:56.499412 2809 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 12:02:56.499568 kubelet[2809]: I0129 12:02:56.499553 2809 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:02:56.500276 kubelet[2809]: E0129 12:02:56.500249 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-56ab0c4267?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="200ms" Jan 29 12:02:56.500833 kubelet[2809]: I0129 12:02:56.500814 2809 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:02:56.508535 kubelet[2809]: E0129 12:02:56.507448 2809 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:02:56.514317 kubelet[2809]: I0129 12:02:56.514281 2809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:02:56.516513 kubelet[2809]: I0129 12:02:56.516489 2809 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:02:56.516513 kubelet[2809]: I0129 12:02:56.516516 2809 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:02:56.519491 kubelet[2809]: I0129 12:02:56.516538 2809 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 12:02:56.519491 kubelet[2809]: E0129 12:02:56.516586 2809 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:02:56.525213 kubelet[2809]: W0129 12:02:56.524582 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Jan 29 12:02:56.525213 kubelet[2809]: E0129 12:02:56.524666 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:02:56.525527 kubelet[2809]: W0129 12:02:56.525436 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Jan 29 12:02:56.525527 kubelet[2809]: E0129 12:02:56.525506 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:02:56.595977 kubelet[2809]: E0129 12:02:56.595917 2809 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-56ab0c4267\" not found" Jan 29 12:02:56.617463 kubelet[2809]: E0129 12:02:56.617394 2809 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:02:56.682305 kubelet[2809]: I0129 12:02:56.682263 2809 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:02:56.682305 kubelet[2809]: I0129 12:02:56.682292 2809 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:02:56.682573 kubelet[2809]: I0129 12:02:56.682319 2809 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:02:56.687493 kubelet[2809]: I0129 12:02:56.687455 2809 policy_none.go:49] "None policy: Start" Jan 29 12:02:56.688565 kubelet[2809]: I0129 12:02:56.688208 2809 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:02:56.688565 kubelet[2809]: I0129 12:02:56.688240 2809 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:02:56.697235 kubelet[2809]: E0129 12:02:56.697167 2809 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-56ab0c4267\" not found" Jan 29 12:02:56.700423 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 29 12:02:56.701391 kubelet[2809]: E0129 12:02:56.701344 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-56ab0c4267?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="400ms" Jan 29 12:02:56.711584 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:02:56.714694 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 12:02:56.721439 kubelet[2809]: I0129 12:02:56.721210 2809 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:02:56.721557 kubelet[2809]: I0129 12:02:56.721453 2809 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:02:56.721557 kubelet[2809]: I0129 12:02:56.721481 2809 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:02:56.722398 kubelet[2809]: I0129 12:02:56.721943 2809 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:02:56.723772 kubelet[2809]: E0129 12:02:56.723751 2809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-56ab0c4267\" not found" Jan 29 12:02:56.823463 kubelet[2809]: I0129 12:02:56.823310 2809 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:56.824968 kubelet[2809]: E0129 12:02:56.824936 2809 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:56.830001 systemd[1]: Created slice kubepods-burstable-pod6c79a95019aa2fae05415503796ddc76.slice - libcontainer container kubepods-burstable-pod6c79a95019aa2fae05415503796ddc76.slice. Jan 29 12:02:56.844647 systemd[1]: Created slice kubepods-burstable-podf492606adad2368e10abd50f61f583c9.slice - libcontainer container kubepods-burstable-podf492606adad2368e10abd50f61f583c9.slice. Jan 29 12:02:56.849061 systemd[1]: Created slice kubepods-burstable-poddb4d678216ab5db2d5a20ee59ee21062.slice - libcontainer container kubepods-burstable-poddb4d678216ab5db2d5a20ee59ee21062.slice. 
Jan 29 12:02:56.903191 kubelet[2809]: I0129 12:02:56.903063 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f492606adad2368e10abd50f61f583c9-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-56ab0c4267\" (UID: \"f492606adad2368e10abd50f61f583c9\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:56.903191 kubelet[2809]: I0129 12:02:56.903113 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f492606adad2368e10abd50f61f583c9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-56ab0c4267\" (UID: \"f492606adad2368e10abd50f61f583c9\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:56.903191 kubelet[2809]: I0129 12:02:56.903152 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db4d678216ab5db2d5a20ee59ee21062-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" (UID: \"db4d678216ab5db2d5a20ee59ee21062\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:56.903191 kubelet[2809]: I0129 12:02:56.903181 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db4d678216ab5db2d5a20ee59ee21062-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" (UID: \"db4d678216ab5db2d5a20ee59ee21062\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:56.903191 kubelet[2809]: I0129 12:02:56.903208 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c79a95019aa2fae05415503796ddc76-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-56ab0c4267\" (UID: \"6c79a95019aa2fae05415503796ddc76\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:56.903573 kubelet[2809]: I0129 12:02:56.903234 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f492606adad2368e10abd50f61f583c9-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-56ab0c4267\" (UID: \"f492606adad2368e10abd50f61f583c9\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:56.903573 kubelet[2809]: I0129 12:02:56.903259 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db4d678216ab5db2d5a20ee59ee21062-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" (UID: \"db4d678216ab5db2d5a20ee59ee21062\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:56.903573 kubelet[2809]: I0129 12:02:56.903284 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db4d678216ab5db2d5a20ee59ee21062-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" (UID: \"db4d678216ab5db2d5a20ee59ee21062\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:56.903573 kubelet[2809]: I0129 12:02:56.903306 2809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db4d678216ab5db2d5a20ee59ee21062-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" (UID: \"db4d678216ab5db2d5a20ee59ee21062\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:57.027827 kubelet[2809]: I0129 12:02:57.027700 2809 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:57.028210 kubelet[2809]: E0129 12:02:57.028160 2809 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:57.102386 kubelet[2809]: E0129 12:02:57.102329 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-56ab0c4267?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="800ms" Jan 29 12:02:57.142734 containerd[1705]: time="2025-01-29T12:02:57.142667137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-56ab0c4267,Uid:6c79a95019aa2fae05415503796ddc76,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:57.148358 containerd[1705]: time="2025-01-29T12:02:57.148313063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-56ab0c4267,Uid:f492606adad2368e10abd50f61f583c9,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:57.151957 containerd[1705]: time="2025-01-29T12:02:57.151923044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-56ab0c4267,Uid:db4d678216ab5db2d5a20ee59ee21062,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:57.368643 kubelet[2809]: W0129 12:02:57.368588 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Jan 29 12:02:57.368807 kubelet[2809]: E0129 12:02:57.368655 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:02:57.427007 kubelet[2809]: W0129 12:02:57.426914 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-56ab0c4267&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Jan 29 12:02:57.427007 kubelet[2809]: E0129 12:02:57.426991 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-56ab0c4267&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:02:57.430023 kubelet[2809]: I0129 12:02:57.429971 2809 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:57.430349 kubelet[2809]: E0129 12:02:57.430318 2809 kubelet_node_status.go:95] "Unable 
to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:57.818493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352890365.mount: Deactivated successfully. Jan 29 12:02:57.861550 containerd[1705]: time="2025-01-29T12:02:57.861459741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:02:57.864238 containerd[1705]: time="2025-01-29T12:02:57.864166101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 12:02:57.869859 containerd[1705]: time="2025-01-29T12:02:57.869822128Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:02:57.876038 containerd[1705]: time="2025-01-29T12:02:57.876003866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:02:57.880872 containerd[1705]: time="2025-01-29T12:02:57.880819174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:02:57.884840 containerd[1705]: time="2025-01-29T12:02:57.884801164Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:02:57.895387 containerd[1705]: time="2025-01-29T12:02:57.895270298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:02:57.903774 kubelet[2809]: E0129 12:02:57.903729 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-56ab0c4267?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="1.6s" Jan 29 12:02:57.916428 containerd[1705]: time="2025-01-29T12:02:57.916347170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:02:57.917534 containerd[1705]: time="2025-01-29T12:02:57.917233990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 765.224944ms" Jan 29 12:02:57.918197 containerd[1705]: time="2025-01-29T12:02:57.918086909Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 775.32567ms" Jan 29 12:02:57.951058 containerd[1705]: time="2025-01-29T12:02:57.950996647Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 802.592681ms" Jan 29 12:02:58.042017 kubelet[2809]: W0129 12:02:58.041878 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Jan 29 12:02:58.042017 kubelet[2809]: E0129 12:02:58.041971 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:02:58.092669 kubelet[2809]: W0129 12:02:58.091542 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Jan 29 12:02:58.092669 kubelet[2809]: E0129 12:02:58.091621 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:02:58.233736 kubelet[2809]: I0129 12:02:58.233300 2809 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:58.234020 kubelet[2809]: E0129 12:02:58.233979 2809 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:58.573147 kubelet[2809]: E0129 12:02:58.573104 2809 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:02:58.835931 containerd[1705]: time="2025-01-29T12:02:58.835631717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:58.835931 containerd[1705]: time="2025-01-29T12:02:58.835780821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:58.835931 containerd[1705]: time="2025-01-29T12:02:58.835860423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:58.840133 containerd[1705]: time="2025-01-29T12:02:58.837262456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:58.843867 containerd[1705]: time="2025-01-29T12:02:58.843364703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:58.843867 containerd[1705]: time="2025-01-29T12:02:58.843432705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:58.843867 containerd[1705]: time="2025-01-29T12:02:58.843499907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:58.843867 containerd[1705]: time="2025-01-29T12:02:58.843740112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:58.861570 containerd[1705]: time="2025-01-29T12:02:58.860167508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:58.861570 containerd[1705]: time="2025-01-29T12:02:58.861181333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:58.861570 containerd[1705]: time="2025-01-29T12:02:58.861228034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:58.861570 containerd[1705]: time="2025-01-29T12:02:58.861361937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:58.889663 systemd[1]: Started cri-containerd-117b3649724278f8479d435b65d0a4d131971d121aa9f521e7fb923893174808.scope - libcontainer container 117b3649724278f8479d435b65d0a4d131971d121aa9f521e7fb923893174808. Jan 29 12:02:58.910643 systemd[1]: Started cri-containerd-63717c9b3239297a024cd422fd0c3200e72d9734fdce3ea62f1e4fdcf8b76123.scope - libcontainer container 63717c9b3239297a024cd422fd0c3200e72d9734fdce3ea62f1e4fdcf8b76123. Jan 29 12:02:58.915361 systemd[1]: Started cri-containerd-b2548bd1840ccd12cbd0a17371f0ac6c9e474d20250081214b519b80909fa509.scope - libcontainer container b2548bd1840ccd12cbd0a17371f0ac6c9e474d20250081214b519b80909fa509. 
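The three RunPodSandbox requests are handled by the runc v2 shim (the repeated "loading plugin" groups above, one set per sandbox), and each pause container lands in its own cri-containerd-*.scope unit. For comparison, a short sketch of the pause-image pull recorded earlier in this log, using containerd's Go client; the socket path and the k8s.io namespace that CRI uses are the usual defaults, assumed here:

```go
// Pull the sandbox image the way the "Pulled image registry.k8s.io/pause:3.8"
// entries above describe; socket path and namespace are assumed defaults.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", img.Name())
}
```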
Jan 29 12:02:58.972495 containerd[1705]: time="2025-01-29T12:02:58.972246864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-56ab0c4267,Uid:db4d678216ab5db2d5a20ee59ee21062,Namespace:kube-system,Attempt:0,} returns sandbox id \"63717c9b3239297a024cd422fd0c3200e72d9734fdce3ea62f1e4fdcf8b76123\"" Jan 29 12:02:58.981565 containerd[1705]: time="2025-01-29T12:02:58.981360520Z" level=info msg="CreateContainer within sandbox \"63717c9b3239297a024cd422fd0c3200e72d9734fdce3ea62f1e4fdcf8b76123\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:02:58.992497 containerd[1705]: time="2025-01-29T12:02:58.991084986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-56ab0c4267,Uid:f492606adad2368e10abd50f61f583c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"117b3649724278f8479d435b65d0a4d131971d121aa9f521e7fb923893174808\"" Jan 29 12:02:58.994940 containerd[1705]: time="2025-01-29T12:02:58.994764749Z" level=info msg="CreateContainer within sandbox \"117b3649724278f8479d435b65d0a4d131971d121aa9f521e7fb923893174808\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:02:59.011863 containerd[1705]: time="2025-01-29T12:02:59.011819541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-56ab0c4267,Uid:6c79a95019aa2fae05415503796ddc76,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2548bd1840ccd12cbd0a17371f0ac6c9e474d20250081214b519b80909fa509\"" Jan 29 12:02:59.014416 containerd[1705]: time="2025-01-29T12:02:59.014306983Z" level=info msg="CreateContainer within sandbox \"b2548bd1840ccd12cbd0a17371f0ac6c9e474d20250081214b519b80909fa509\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:02:59.069832 containerd[1705]: time="2025-01-29T12:02:59.069772632Z" level=info msg="CreateContainer within sandbox \"63717c9b3239297a024cd422fd0c3200e72d9734fdce3ea62f1e4fdcf8b76123\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b940192ede9454ae9aa75bfec10bb2ef87963622f396455447f7bf8c632177f2\"" Jan 29 12:02:59.070616 containerd[1705]: time="2025-01-29T12:02:59.070576946Z" level=info msg="StartContainer for \"b940192ede9454ae9aa75bfec10bb2ef87963622f396455447f7bf8c632177f2\"" Jan 29 12:02:59.091393 containerd[1705]: time="2025-01-29T12:02:59.091282300Z" level=info msg="CreateContainer within sandbox \"117b3649724278f8479d435b65d0a4d131971d121aa9f521e7fb923893174808\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3b2cf114636f3a586979131ffd4f12391cc5dc30a0710a1140e3635f36bcfc35\"" Jan 29 12:02:59.093305 containerd[1705]: time="2025-01-29T12:02:59.092668224Z" level=info msg="StartContainer for \"3b2cf114636f3a586979131ffd4f12391cc5dc30a0710a1140e3635f36bcfc35\"" Jan 29 12:02:59.098709 containerd[1705]: time="2025-01-29T12:02:59.098285720Z" level=info msg="CreateContainer within sandbox \"b2548bd1840ccd12cbd0a17371f0ac6c9e474d20250081214b519b80909fa509\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b0b6bedbd4b386ebcd6bf2020dbaaf58b2cc88956845af546034c4b50d4998cd\"" Jan 29 12:02:59.099577 containerd[1705]: time="2025-01-29T12:02:59.099399839Z" level=info msg="StartContainer for \"b0b6bedbd4b386ebcd6bf2020dbaaf58b2cc88956845af546034c4b50d4998cd\"" Jan 29 12:02:59.101103 systemd[1]: Started cri-containerd-b940192ede9454ae9aa75bfec10bb2ef87963622f396455447f7bf8c632177f2.scope - libcontainer container 
b940192ede9454ae9aa75bfec10bb2ef87963622f396455447f7bf8c632177f2. Jan 29 12:02:59.153696 systemd[1]: Started cri-containerd-b0b6bedbd4b386ebcd6bf2020dbaaf58b2cc88956845af546034c4b50d4998cd.scope - libcontainer container b0b6bedbd4b386ebcd6bf2020dbaaf58b2cc88956845af546034c4b50d4998cd. Jan 29 12:02:59.162627 systemd[1]: Started cri-containerd-3b2cf114636f3a586979131ffd4f12391cc5dc30a0710a1140e3635f36bcfc35.scope - libcontainer container 3b2cf114636f3a586979131ffd4f12391cc5dc30a0710a1140e3635f36bcfc35. Jan 29 12:02:59.191225 containerd[1705]: time="2025-01-29T12:02:59.190662900Z" level=info msg="StartContainer for \"b940192ede9454ae9aa75bfec10bb2ef87963622f396455447f7bf8c632177f2\" returns successfully" Jan 29 12:02:59.236266 containerd[1705]: time="2025-01-29T12:02:59.236222079Z" level=info msg="StartContainer for \"b0b6bedbd4b386ebcd6bf2020dbaaf58b2cc88956845af546034c4b50d4998cd\" returns successfully" Jan 29 12:02:59.260655 containerd[1705]: time="2025-01-29T12:02:59.260070787Z" level=info msg="StartContainer for \"3b2cf114636f3a586979131ffd4f12391cc5dc30a0710a1140e3635f36bcfc35\" returns successfully" Jan 29 12:02:59.838567 kubelet[2809]: I0129 12:02:59.838061 2809 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:02:59.861587 systemd[1]: run-containerd-runc-k8s.io-63717c9b3239297a024cd422fd0c3200e72d9734fdce3ea62f1e4fdcf8b76123-runc.Ip0OW0.mount: Deactivated successfully. Jan 29 12:03:01.546780 kubelet[2809]: E0129 12:03:01.546712 2809 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-56ab0c4267\" not found" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:01.664494 kubelet[2809]: E0129 12:03:01.663122 2809 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-56ab0c4267.181f283204d28d5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-56ab0c4267,UID:ci-4081.3.0-a-56ab0c4267,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-56ab0c4267,},FirstTimestamp:2025-01-29 12:02:56.482684251 +0000 UTC m=+1.019988553,LastTimestamp:2025-01-29 12:02:56.482684251 +0000 UTC m=+1.019988553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-56ab0c4267,}" Jan 29 12:03:02.708532 kubelet[2809]: I0129 12:03:02.707098 2809 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:02.708532 kubelet[2809]: E0129 12:03:02.707148 2809 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.0-a-56ab0c4267\": node \"ci-4081.3.0-a-56ab0c4267\" not found" Jan 29 12:03:02.708532 kubelet[2809]: E0129 12:03:02.707353 2809 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-56ab0c4267.181f2832064c3479 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-56ab0c4267,UID:ci-4081.3.0-a-56ab0c4267,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-56ab0c4267,},FirstTimestamp:2025-01-29 
12:02:56.507434105 +0000 UTC m=+1.044738407,LastTimestamp:2025-01-29 12:02:56.507434105 +0000 UTC m=+1.044738407,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-56ab0c4267,}" Jan 29 12:03:02.732836 kubelet[2809]: W0129 12:03:02.732798 2809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:03:03.706072 kubelet[2809]: I0129 12:03:03.705988 2809 apiserver.go:52] "Watching apiserver" Jan 29 12:03:03.800348 kubelet[2809]: I0129 12:03:03.800281 2809 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 12:03:04.089031 systemd[1]: Reloading requested from client PID 3078 ('systemctl') (unit session-9.scope)... Jan 29 12:03:04.089055 systemd[1]: Reloading... Jan 29 12:03:04.239572 zram_generator::config[3130]: No configuration found. Jan 29 12:03:04.356840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:04.454251 systemd[1]: Reloading finished in 364 ms. Jan 29 12:03:04.500406 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:04.523130 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:03:04.523393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:04.528915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:04.644414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:04.655937 (kubelet)[3185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:03:04.705820 kubelet[3185]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:03:04.705820 kubelet[3185]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:03:04.705820 kubelet[3185]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:03:04.706951 kubelet[3185]: I0129 12:03:04.706396 3185 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:03:04.714967 kubelet[3185]: I0129 12:03:04.714925 3185 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 12:03:04.715168 kubelet[3185]: I0129 12:03:04.715155 3185 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:03:04.715596 kubelet[3185]: I0129 12:03:04.715574 3185 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 12:03:04.717972 kubelet[3185]: I0129 12:03:04.717947 3185 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
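The restarted kubelet (PID 3185) repeats its deprecation warnings: --container-runtime-endpoint and --volume-plugin-dir have KubeletConfiguration equivalents, while --pod-infra-container-image is simply going away once the image garbage collector reads the sandbox image from CRI, as the warning itself says. A hedged sketch of the corresponding config fields; the endpoint and plugin-directory values are inferred from elsewhere in this log, not read from the node's actual config file:

```go
// Config-file equivalents of the deprecated kubelet flags warned about above.
package main

import (
	"fmt"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		// Replaces --container-runtime-endpoint (containerd is the CRI here).
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// Replaces --volume-plugin-dir; this path matches the FlexVolume
		// probe entries near the end of this log.
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
		// Mirrors the memory.available threshold in the nodeConfig dump below.
		EvictionHard: map[string]string{"memory.available": "100Mi"},
	}
	fmt.Printf("%+v\n", cfg)
}
```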
Jan 29 12:03:04.723499 kubelet[3185]: I0129 12:03:04.723253 3185 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:03:04.726673 kubelet[3185]: E0129 12:03:04.726636 3185 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:03:04.726817 kubelet[3185]: I0129 12:03:04.726804 3185 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:03:04.730275 kubelet[3185]: I0129 12:03:04.730257 3185 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:03:04.731489 kubelet[3185]: I0129 12:03:04.730484 3185 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 12:03:04.731489 kubelet[3185]: I0129 12:03:04.730625 3185 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:03:04.731489 kubelet[3185]: I0129 12:03:04.730653 3185 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-56ab0c4267","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:03:04.731489 kubelet[3185]: I0129 12:03:04.730830 3185 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:03:04.731785 kubelet[3185]: I0129 12:03:04.730841 3185 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 12:03:04.731785 kubelet[3185]: I0129 12:03:04.730878 3185 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:03:04.731785 kubelet[3185]: I0129 12:03:04.730985 3185 kubelet.go:408] "Attempting to sync node with API server" Jan 29 12:03:04.731785 kubelet[3185]: I0129 12:03:04.730998 3185 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:03:04.731785 kubelet[3185]: I0129 
12:03:04.731024 3185 kubelet.go:314] "Adding apiserver pod source" Jan 29 12:03:04.731785 kubelet[3185]: I0129 12:03:04.731042 3185 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:03:04.733014 kubelet[3185]: I0129 12:03:04.732988 3185 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:03:04.734295 kubelet[3185]: I0129 12:03:04.734272 3185 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:03:04.734828 kubelet[3185]: I0129 12:03:04.734806 3185 server.go:1269] "Started kubelet" Jan 29 12:03:04.739457 kubelet[3185]: I0129 12:03:04.739437 3185 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:03:04.748613 kubelet[3185]: I0129 12:03:04.747309 3185 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:03:04.749234 kubelet[3185]: I0129 12:03:04.749205 3185 server.go:460] "Adding debug handlers to kubelet server" Jan 29 12:03:04.750071 kubelet[3185]: I0129 12:03:04.749801 3185 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:03:04.750263 kubelet[3185]: I0129 12:03:04.750222 3185 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:03:04.752482 kubelet[3185]: I0129 12:03:04.750779 3185 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:03:04.753311 kubelet[3185]: I0129 12:03:04.753287 3185 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 12:03:04.757506 kubelet[3185]: E0129 12:03:04.753597 3185 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-56ab0c4267\" not found" Jan 29 12:03:04.760087 kubelet[3185]: I0129 12:03:04.758238 3185 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 12:03:04.760087 kubelet[3185]: I0129 12:03:04.758414 3185 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:03:04.764491 kubelet[3185]: I0129 12:03:04.762615 3185 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:03:04.764491 kubelet[3185]: I0129 12:03:04.764222 3185 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:03:04.764491 kubelet[3185]: I0129 12:03:04.764258 3185 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:03:04.764491 kubelet[3185]: I0129 12:03:04.764289 3185 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 12:03:04.764491 kubelet[3185]: E0129 12:03:04.764335 3185 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:03:04.778181 kubelet[3185]: I0129 12:03:04.778125 3185 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:03:04.778606 kubelet[3185]: I0129 12:03:04.778573 3185 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:03:04.783840 kubelet[3185]: E0129 12:03:04.783806 3185 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:03:04.785393 kubelet[3185]: I0129 12:03:04.784576 3185 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:03:04.841339 kubelet[3185]: I0129 12:03:04.841297 3185 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:03:04.841339 kubelet[3185]: I0129 12:03:04.841329 3185 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:03:04.841339 kubelet[3185]: I0129 12:03:04.841354 3185 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:03:04.841752 kubelet[3185]: I0129 12:03:04.841730 3185 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:03:04.841815 kubelet[3185]: I0129 12:03:04.841753 3185 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:03:04.841815 kubelet[3185]: I0129 12:03:04.841783 3185 policy_none.go:49] "None policy: Start" Jan 29 12:03:04.842801 kubelet[3185]: I0129 12:03:04.842776 3185 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:03:04.842801 kubelet[3185]: I0129 12:03:04.842807 3185 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:03:04.843047 kubelet[3185]: I0129 12:03:04.843030 3185 state_mem.go:75] "Updated machine memory state" Jan 29 12:03:04.847179 kubelet[3185]: I0129 12:03:04.847157 3185 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:03:04.847577 kubelet[3185]: I0129 12:03:04.847561 3185 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:03:04.847790 kubelet[3185]: I0129 12:03:04.847750 3185 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:03:04.848491 kubelet[3185]: I0129 12:03:04.848079 3185 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:03:04.955518 kubelet[3185]: I0129 12:03:04.953551 3185 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:04.955901 kubelet[3185]: W0129 12:03:04.953700 3185 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:03:04.956233 kubelet[3185]: W0129 12:03:04.953719 3185 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:03:04.958828 kubelet[3185]: W0129 12:03:04.958597 3185 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:03:04.958828 kubelet[3185]: E0129 12:03:04.958658 3185 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:04.958828 kubelet[3185]: I0129 12:03:04.958809 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f492606adad2368e10abd50f61f583c9-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-56ab0c4267\" (UID: \"f492606adad2368e10abd50f61f583c9\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.000922 kubelet[3185]: I0129 12:03:05.000850 3185 kubelet_node_status.go:111] "Node was previously registered" 
node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.001446 kubelet[3185]: I0129 12:03:05.001252 3185 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.059944 kubelet[3185]: I0129 12:03:05.059878 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db4d678216ab5db2d5a20ee59ee21062-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" (UID: \"db4d678216ab5db2d5a20ee59ee21062\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.060140 kubelet[3185]: I0129 12:03:05.059961 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db4d678216ab5db2d5a20ee59ee21062-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" (UID: \"db4d678216ab5db2d5a20ee59ee21062\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.060140 kubelet[3185]: I0129 12:03:05.060049 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f492606adad2368e10abd50f61f583c9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-56ab0c4267\" (UID: \"f492606adad2368e10abd50f61f583c9\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.060140 kubelet[3185]: I0129 12:03:05.060073 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f492606adad2368e10abd50f61f583c9-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-56ab0c4267\" (UID: \"f492606adad2368e10abd50f61f583c9\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.060140 kubelet[3185]: I0129 12:03:05.060094 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db4d678216ab5db2d5a20ee59ee21062-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" (UID: \"db4d678216ab5db2d5a20ee59ee21062\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.060140 kubelet[3185]: I0129 12:03:05.060117 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db4d678216ab5db2d5a20ee59ee21062-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" (UID: \"db4d678216ab5db2d5a20ee59ee21062\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.060344 kubelet[3185]: I0129 12:03:05.060146 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db4d678216ab5db2d5a20ee59ee21062-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-56ab0c4267\" (UID: \"db4d678216ab5db2d5a20ee59ee21062\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.060344 kubelet[3185]: I0129 12:03:05.060168 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c79a95019aa2fae05415503796ddc76-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-56ab0c4267\" (UID: 
\"6c79a95019aa2fae05415503796ddc76\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.739442 kubelet[3185]: I0129 12:03:05.739323 3185 apiserver.go:52] "Watching apiserver" Jan 29 12:03:05.758905 kubelet[3185]: I0129 12:03:05.758852 3185 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 12:03:05.852153 kubelet[3185]: W0129 12:03:05.852044 3185 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:03:05.852360 kubelet[3185]: I0129 12:03:05.852251 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-56ab0c4267" podStartSLOduration=1.852227718 podStartE2EDuration="1.852227718s" podCreationTimestamp="2025-01-29 12:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:05.85187201 +0000 UTC m=+1.190060193" watchObservedRunningTime="2025-01-29 12:03:05.852227718 +0000 UTC m=+1.190415801" Jan 29 12:03:05.852594 kubelet[3185]: E0129 12:03:05.852514 3185 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-56ab0c4267\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:05.912801 kubelet[3185]: I0129 12:03:05.912730 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-56ab0c4267" podStartSLOduration=3.912702146 podStartE2EDuration="3.912702146s" podCreationTimestamp="2025-01-29 12:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:05.897596889 +0000 UTC m=+1.235784972" watchObservedRunningTime="2025-01-29 12:03:05.912702146 +0000 UTC m=+1.250890329" Jan 29 12:03:05.992245 kubelet[3185]: I0129 12:03:05.991965 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-56ab0c4267" podStartSLOduration=1.991941916 podStartE2EDuration="1.991941916s" podCreationTimestamp="2025-01-29 12:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:05.913098655 +0000 UTC m=+1.251286738" watchObservedRunningTime="2025-01-29 12:03:05.991941916 +0000 UTC m=+1.330129999" Jan 29 12:03:08.806423 kubelet[3185]: I0129 12:03:08.806379 3185 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:03:08.809109 containerd[1705]: time="2025-01-29T12:03:08.809060280Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:03:08.811657 kubelet[3185]: I0129 12:03:08.809453 3185 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:03:09.564930 systemd[1]: Created slice kubepods-besteffort-pod5c7c8a15_45ab_4ee6_bf60_b5f365a6e8a1.slice - libcontainer container kubepods-besteffort-pod5c7c8a15_45ab_4ee6_bf60_b5f365a6e8a1.slice. 
Jan 29 12:03:09.586305 kubelet[3185]: I0129 12:03:09.586156 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1-xtables-lock\") pod \"kube-proxy-6xfnw\" (UID: \"5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1\") " pod="kube-system/kube-proxy-6xfnw" Jan 29 12:03:09.586305 kubelet[3185]: I0129 12:03:09.586192 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1-lib-modules\") pod \"kube-proxy-6xfnw\" (UID: \"5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1\") " pod="kube-system/kube-proxy-6xfnw" Jan 29 12:03:09.586305 kubelet[3185]: I0129 12:03:09.586211 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkfb6\" (UniqueName: \"kubernetes.io/projected/5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1-kube-api-access-fkfb6\") pod \"kube-proxy-6xfnw\" (UID: \"5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1\") " pod="kube-system/kube-proxy-6xfnw" Jan 29 12:03:09.586305 kubelet[3185]: I0129 12:03:09.586234 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1-kube-proxy\") pod \"kube-proxy-6xfnw\" (UID: \"5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1\") " pod="kube-system/kube-proxy-6xfnw" Jan 29 12:03:09.715900 kubelet[3185]: E0129 12:03:09.715854 3185 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 12:03:09.715900 kubelet[3185]: E0129 12:03:09.715903 3185 projected.go:194] Error preparing data for projected volume kube-api-access-fkfb6 for pod kube-system/kube-proxy-6xfnw: configmap "kube-root-ca.crt" not found Jan 29 12:03:09.716146 kubelet[3185]: E0129 12:03:09.716003 3185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1-kube-api-access-fkfb6 podName:5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1 nodeName:}" failed. No retries permitted until 2025-01-29 12:03:10.215966848 +0000 UTC m=+5.554155031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fkfb6" (UniqueName: "kubernetes.io/projected/5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1-kube-api-access-fkfb6") pod "kube-proxy-6xfnw" (UID: "5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1") : configmap "kube-root-ca.crt" not found Jan 29 12:03:09.963535 systemd[1]: Created slice kubepods-besteffort-podee570442_17e5_40a4_8c9e_a923e36b07fc.slice - libcontainer container kubepods-besteffort-podee570442_17e5_40a4_8c9e_a923e36b07fc.slice. 
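The kube-api-access-fkfb6 mount fails because a projected service-account volume bundles the token with the kube-root-ca.crt ConfigMap, and the root-CA publisher has not created that ConfigMap yet; the operation backs off for 500ms as logged, and the pod starts successfully further down. An illustrative reconstruction of such a projected volume source; the volume and ConfigMap names match the log, but the expiration and paths are the kubelet's usual defaults, assumed here:

```go
// Shape of the projected volume behind the MountVolume.SetUp failure above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // kubelet's default token lifetime, an assumption

	vol := corev1.Volume{
		Name: "kube-api-access-fkfb6",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					// This is the source that fails until the root-CA
					// publisher creates kube-root-ca.crt.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "kube-root-ca.crt",
						},
						Items: []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```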
Jan 29 12:03:09.988926 kubelet[3185]: I0129 12:03:09.988871 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2kts\" (UniqueName: \"kubernetes.io/projected/ee570442-17e5-40a4-8c9e-a923e36b07fc-kube-api-access-n2kts\") pod \"tigera-operator-76c4976dd7-hwlbn\" (UID: \"ee570442-17e5-40a4-8c9e-a923e36b07fc\") " pod="tigera-operator/tigera-operator-76c4976dd7-hwlbn" Jan 29 12:03:09.988926 kubelet[3185]: I0129 12:03:09.988929 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ee570442-17e5-40a4-8c9e-a923e36b07fc-var-lib-calico\") pod \"tigera-operator-76c4976dd7-hwlbn\" (UID: \"ee570442-17e5-40a4-8c9e-a923e36b07fc\") " pod="tigera-operator/tigera-operator-76c4976dd7-hwlbn" Jan 29 12:03:10.268377 containerd[1705]: time="2025-01-29T12:03:10.268219194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-hwlbn,Uid:ee570442-17e5-40a4-8c9e-a923e36b07fc,Namespace:tigera-operator,Attempt:0,}" Jan 29 12:03:10.319503 containerd[1705]: time="2025-01-29T12:03:10.318736951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:10.319503 containerd[1705]: time="2025-01-29T12:03:10.318794452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:10.319503 containerd[1705]: time="2025-01-29T12:03:10.318809752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:10.319503 containerd[1705]: time="2025-01-29T12:03:10.318903255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:10.346679 systemd[1]: Started cri-containerd-5ee4fc713b342ab28c01fd6aeef1ad875376506c05aa75ee6a4bf5ad2e44e7d9.scope - libcontainer container 5ee4fc713b342ab28c01fd6aeef1ad875376506c05aa75ee6a4bf5ad2e44e7d9. Jan 29 12:03:10.386155 containerd[1705]: time="2025-01-29T12:03:10.385762486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-hwlbn,Uid:ee570442-17e5-40a4-8c9e-a923e36b07fc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5ee4fc713b342ab28c01fd6aeef1ad875376506c05aa75ee6a4bf5ad2e44e7d9\"" Jan 29 12:03:10.388534 containerd[1705]: time="2025-01-29T12:03:10.388419646Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 12:03:10.474945 containerd[1705]: time="2025-01-29T12:03:10.474894527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xfnw,Uid:5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:10.523264 containerd[1705]: time="2025-01-29T12:03:10.522369014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:10.523264 containerd[1705]: time="2025-01-29T12:03:10.523029129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:10.523264 containerd[1705]: time="2025-01-29T12:03:10.523045629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:10.524701 containerd[1705]: time="2025-01-29T12:03:10.523137231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:10.543654 systemd[1]: Started cri-containerd-9ffa6eca9f242a942085dedd583801480dae5013431e1aa7ce68f48268f8c3b8.scope - libcontainer container 9ffa6eca9f242a942085dedd583801480dae5013431e1aa7ce68f48268f8c3b8. Jan 29 12:03:10.566116 containerd[1705]: time="2025-01-29T12:03:10.566065114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xfnw,Uid:5c7c8a15-45ab-4ee6-bf60-b5f365a6e8a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ffa6eca9f242a942085dedd583801480dae5013431e1aa7ce68f48268f8c3b8\"" Jan 29 12:03:10.569667 containerd[1705]: time="2025-01-29T12:03:10.569537894Z" level=info msg="CreateContainer within sandbox \"9ffa6eca9f242a942085dedd583801480dae5013431e1aa7ce68f48268f8c3b8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:03:10.614458 containerd[1705]: time="2025-01-29T12:03:10.614405921Z" level=info msg="CreateContainer within sandbox \"9ffa6eca9f242a942085dedd583801480dae5013431e1aa7ce68f48268f8c3b8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02ce1c8befb02d60bd0666eba6ee010a22b39d08b9963d15edf9ecbee29bc26a\"" Jan 29 12:03:10.616702 containerd[1705]: time="2025-01-29T12:03:10.615020735Z" level=info msg="StartContainer for \"02ce1c8befb02d60bd0666eba6ee010a22b39d08b9963d15edf9ecbee29bc26a\"" Jan 29 12:03:10.643687 systemd[1]: Started cri-containerd-02ce1c8befb02d60bd0666eba6ee010a22b39d08b9963d15edf9ecbee29bc26a.scope - libcontainer container 02ce1c8befb02d60bd0666eba6ee010a22b39d08b9963d15edf9ecbee29bc26a. Jan 29 12:03:10.676305 containerd[1705]: time="2025-01-29T12:03:10.675509820Z" level=info msg="StartContainer for \"02ce1c8befb02d60bd0666eba6ee010a22b39d08b9963d15edf9ecbee29bc26a\" returns successfully" Jan 29 12:03:11.367488 kubelet[3185]: I0129 12:03:11.367347 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6xfnw" podStartSLOduration=2.367322362 podStartE2EDuration="2.367322362s" podCreationTimestamp="2025-01-29 12:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:10.844013079 +0000 UTC m=+6.182201162" watchObservedRunningTime="2025-01-29 12:03:11.367322362 +0000 UTC m=+6.705510445" Jan 29 12:03:12.128185 sudo[2319]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:12.184855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2285069308.mount: Deactivated successfully. Jan 29 12:03:12.234904 sshd[2249]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:12.239554 systemd[1]: sshd@6-10.200.8.17:22-10.200.16.10:42340.service: Deactivated successfully. Jan 29 12:03:12.241749 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:03:12.242183 systemd[1]: session-9.scope: Consumed 4.350s CPU time, 153.7M memory peak, 0B memory swap peak. Jan 29 12:03:12.243427 systemd-logind[1684]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:03:12.245366 systemd-logind[1684]: Removed session 9. 
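kube-proxy's sandbox and container come up through the same CRI sequence as the control-plane pods (RunPodSandbox, CreateContainer, StartContainer). A rough sketch of enumerating those sandboxes over the runtime endpoint; the socket address is a placeholder for whatever --container-runtime-endpoint points at, and none of the IDs are taken from this log:

```go
// Enumerate pod sandboxes over CRI, mirroring what the kubelet manages above.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(context.Background(),
		&runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, sb := range resp.Items {
		// e.g. kube-proxy-6xfnw, tigera-operator-76c4976dd7-hwlbn, ...
		fmt.Println(sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}
```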
Jan 29 12:03:12.801359 containerd[1705]: time="2025-01-29T12:03:12.801283299Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:12.804817 containerd[1705]: time="2025-01-29T12:03:12.804756978Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 12:03:12.809588 containerd[1705]: time="2025-01-29T12:03:12.809501787Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:12.815314 containerd[1705]: time="2025-01-29T12:03:12.815275419Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:12.816560 containerd[1705]: time="2025-01-29T12:03:12.815982635Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.427511388s" Jan 29 12:03:12.816560 containerd[1705]: time="2025-01-29T12:03:12.816025636Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 12:03:12.818548 containerd[1705]: time="2025-01-29T12:03:12.818515293Z" level=info msg="CreateContainer within sandbox \"5ee4fc713b342ab28c01fd6aeef1ad875376506c05aa75ee6a4bf5ad2e44e7d9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 12:03:12.848790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155208073.mount: Deactivated successfully. Jan 29 12:03:12.854982 containerd[1705]: time="2025-01-29T12:03:12.854834125Z" level=info msg="CreateContainer within sandbox \"5ee4fc713b342ab28c01fd6aeef1ad875376506c05aa75ee6a4bf5ad2e44e7d9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f3915921d1a559067ffcbcbd84f6f0b0d3e3926bbc8a1469e652bbbe4a8e259e\"" Jan 29 12:03:12.856566 containerd[1705]: time="2025-01-29T12:03:12.856521764Z" level=info msg="StartContainer for \"f3915921d1a559067ffcbcbd84f6f0b0d3e3926bbc8a1469e652bbbe4a8e259e\"" Jan 29 12:03:12.897671 systemd[1]: Started cri-containerd-f3915921d1a559067ffcbcbd84f6f0b0d3e3926bbc8a1469e652bbbe4a8e259e.scope - libcontainer container f3915921d1a559067ffcbcbd84f6f0b0d3e3926bbc8a1469e652bbbe4a8e259e. 
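containerd records the repo tag, repo digest, and size for the operator image, along with the pull duration (2.427511388s for the roughly 21 MB image above). A brief sketch inspecting that stored image with the containerd client, under the same socket and namespace assumptions as the earlier pull example:

```go
// Look up the operator image whose pull is recorded above.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.GetImage(ctx, "quay.io/tigera/operator:v1.36.2")
	if err != nil {
		panic(err)
	}
	// Target() is the manifest descriptor: its digest should match the
	// repo digest in the "Pulled image" entry above.
	fmt.Println(img.Name(), img.Target().Digest)
}
```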
Jan 29 12:03:12.935351 containerd[1705]: time="2025-01-29T12:03:12.935291468Z" level=info msg="StartContainer for \"f3915921d1a559067ffcbcbd84f6f0b0d3e3926bbc8a1469e652bbbe4a8e259e\" returns successfully" Jan 29 12:03:13.878395 kubelet[3185]: I0129 12:03:13.878312 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-hwlbn" podStartSLOduration=2.449025833 podStartE2EDuration="4.878285561s" podCreationTimestamp="2025-01-29 12:03:09 +0000 UTC" firstStartedPulling="2025-01-29 12:03:10.387678629 +0000 UTC m=+5.725866712" lastFinishedPulling="2025-01-29 12:03:12.816938357 +0000 UTC m=+8.155126440" observedRunningTime="2025-01-29 12:03:13.87780995 +0000 UTC m=+9.215998033" watchObservedRunningTime="2025-01-29 12:03:13.878285561 +0000 UTC m=+9.216473744" Jan 29 12:03:16.816120 systemd[1]: Created slice kubepods-besteffort-pod366a2472_9f5d_4e7d_b2cf_7c7e6836ce08.slice - libcontainer container kubepods-besteffort-pod366a2472_9f5d_4e7d_b2cf_7c7e6836ce08.slice. Jan 29 12:03:16.832265 kubelet[3185]: I0129 12:03:16.832208 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/366a2472-9f5d-4e7d-b2cf-7c7e6836ce08-typha-certs\") pod \"calico-typha-78d65d5f6f-fdj7l\" (UID: \"366a2472-9f5d-4e7d-b2cf-7c7e6836ce08\") " pod="calico-system/calico-typha-78d65d5f6f-fdj7l" Jan 29 12:03:16.832265 kubelet[3185]: I0129 12:03:16.832248 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4xcr\" (UniqueName: \"kubernetes.io/projected/366a2472-9f5d-4e7d-b2cf-7c7e6836ce08-kube-api-access-n4xcr\") pod \"calico-typha-78d65d5f6f-fdj7l\" (UID: \"366a2472-9f5d-4e7d-b2cf-7c7e6836ce08\") " pod="calico-system/calico-typha-78d65d5f6f-fdj7l" Jan 29 12:03:16.832770 kubelet[3185]: I0129 12:03:16.832282 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/366a2472-9f5d-4e7d-b2cf-7c7e6836ce08-tigera-ca-bundle\") pod \"calico-typha-78d65d5f6f-fdj7l\" (UID: \"366a2472-9f5d-4e7d-b2cf-7c7e6836ce08\") " pod="calico-system/calico-typha-78d65d5f6f-fdj7l" Jan 29 12:03:17.126072 containerd[1705]: time="2025-01-29T12:03:17.125997159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78d65d5f6f-fdj7l,Uid:366a2472-9f5d-4e7d-b2cf-7c7e6836ce08,Namespace:calico-system,Attempt:0,}" Jan 29 12:03:17.181537 containerd[1705]: time="2025-01-29T12:03:17.181138235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:17.181537 containerd[1705]: time="2025-01-29T12:03:17.181242438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:17.181821 containerd[1705]: time="2025-01-29T12:03:17.181268439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:17.181821 containerd[1705]: time="2025-01-29T12:03:17.181441543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Jan 29 12:03:17.210681 systemd[1]: Started cri-containerd-97828d69257b3bee83fa2ca639c51f0f331ff248110946ce17d79e41304857de.scope - libcontainer container 97828d69257b3bee83fa2ca639c51f0f331ff248110946ce17d79e41304857de.
Jan 29 12:03:17.257325 containerd[1705]: time="2025-01-29T12:03:17.257147033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78d65d5f6f-fdj7l,Uid:366a2472-9f5d-4e7d-b2cf-7c7e6836ce08,Namespace:calico-system,Attempt:0,} returns sandbox id \"97828d69257b3bee83fa2ca639c51f0f331ff248110946ce17d79e41304857de\""
Jan 29 12:03:17.262489 containerd[1705]: time="2025-01-29T12:03:17.262209359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 29 12:03:17.358501 systemd[1]: Created slice kubepods-besteffort-podb581a4d2_8b43_4f79_be9f_43a4129be3e2.slice - libcontainer container kubepods-besteffort-podb581a4d2_8b43_4f79_be9f_43a4129be3e2.slice.
Jan 29 12:03:17.435058 kubelet[3185]: I0129 12:03:17.434903 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b581a4d2-8b43-4f79-be9f-43a4129be3e2-cni-net-dir\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435058 kubelet[3185]: I0129 12:03:17.434965 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b581a4d2-8b43-4f79-be9f-43a4129be3e2-cni-log-dir\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435058 kubelet[3185]: I0129 12:03:17.434988 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b581a4d2-8b43-4f79-be9f-43a4129be3e2-lib-modules\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435058 kubelet[3185]: I0129 12:03:17.435007 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b581a4d2-8b43-4f79-be9f-43a4129be3e2-cni-bin-dir\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435058 kubelet[3185]: I0129 12:03:17.435029 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b581a4d2-8b43-4f79-be9f-43a4129be3e2-node-certs\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435381 kubelet[3185]: I0129 12:03:17.435052 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b581a4d2-8b43-4f79-be9f-43a4129be3e2-flexvol-driver-host\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435381 kubelet[3185]: I0129 12:03:17.435076 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b581a4d2-8b43-4f79-be9f-43a4129be3e2-var-run-calico\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435381 kubelet[3185]: I0129 12:03:17.435101 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b581a4d2-8b43-4f79-be9f-43a4129be3e2-policysync\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435381 kubelet[3185]: I0129 12:03:17.435125 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b581a4d2-8b43-4f79-be9f-43a4129be3e2-tigera-ca-bundle\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435381 kubelet[3185]: I0129 12:03:17.435144 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b581a4d2-8b43-4f79-be9f-43a4129be3e2-var-lib-calico\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435556 kubelet[3185]: I0129 12:03:17.435163 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbkgb\" (UniqueName: \"kubernetes.io/projected/b581a4d2-8b43-4f79-be9f-43a4129be3e2-kube-api-access-pbkgb\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.435556 kubelet[3185]: I0129 12:03:17.435185 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b581a4d2-8b43-4f79-be9f-43a4129be3e2-xtables-lock\") pod \"calico-node-zwkrj\" (UID: \"b581a4d2-8b43-4f79-be9f-43a4129be3e2\") " pod="calico-system/calico-node-zwkrj"
Jan 29 12:03:17.540715 kubelet[3185]: E0129 12:03:17.540589 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:03:17.540715 kubelet[3185]: W0129 12:03:17.540619 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:03:17.540715 kubelet[3185]: E0129 12:03:17.540650 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three-line FlexVolume probe failure repeats dozens of times through 12:03:19.98 as kubelet rescans the plugin directory; the repeats are elided below ...]
Jan 29 12:03:17.648041 kubelet[3185]: E0129 12:03:17.647655 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9q52" podUID="1aa7e374-f593-4448-a0b1-28887de63262"
Jan 29 12:03:17.665545 containerd[1705]: time="2025-01-29T12:03:17.665501525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zwkrj,Uid:b581a4d2-8b43-4f79-be9f-43a4129be3e2,Namespace:calico-system,Attempt:0,}"
Jan 29 12:03:17.714861 containerd[1705]: time="2025-01-29T12:03:17.714237541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:03:17.714861 containerd[1705]: time="2025-01-29T12:03:17.714324843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:03:17.714861 containerd[1705]: time="2025-01-29T12:03:17.714355444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:03:17.714861 containerd[1705]: time="2025-01-29T12:03:17.714505548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:03:17.733719 systemd[1]: Started cri-containerd-15ed577a012950c2dbbd0be563f4060ee948f487c62a43656a1998d9031d88c7.scope - libcontainer container 15ed577a012950c2dbbd0be563f4060ee948f487c62a43656a1998d9031d88c7.
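The elided driver-call failures are kubelet's FlexVolume prober finding the nodeagent~uds plugin directory but no uds executable inside it, so the init call produces empty output that fails JSON unmarshalling. On Calico installs this binary is normally provided by the pod2daemon-flexvol image that is pulled a little later in this log, so the spam is transient. For reference, a FlexVolume driver answers init with a one-line JSON status; a minimal Go stand-in that would satisfy the probe could look like this (an illustrative sketch of the plugin protocol only; the real Calico uds driver does considerably more):

```go
// Minimal FlexVolume driver stub: answers the kubelet's `<driver> init`
// probe with the JSON status the FlexVolume protocol expects.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb we don't implement: report "Not supported" per the protocol.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

Dropped in as /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, a responder like this would stop the probe errors, which is essentially what the flexvol-driver-host mount prepared above is for.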
Jan 29 12:03:17.746944 kubelet[3185]: I0129 12:03:17.746882 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1aa7e374-f593-4448-a0b1-28887de63262-kubelet-dir\") pod \"csi-node-driver-w9q52\" (UID: \"1aa7e374-f593-4448-a0b1-28887de63262\") " pod="calico-system/csi-node-driver-w9q52"
Jan 29 12:03:17.747447 kubelet[3185]: I0129 12:03:17.747418 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1aa7e374-f593-4448-a0b1-28887de63262-registration-dir\") pod \"csi-node-driver-w9q52\" (UID: \"1aa7e374-f593-4448-a0b1-28887de63262\") " pod="calico-system/csi-node-driver-w9q52"
Jan 29 12:03:17.748946 kubelet[3185]: I0129 12:03:17.748839 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5lj7\" (UniqueName: \"kubernetes.io/projected/1aa7e374-f593-4448-a0b1-28887de63262-kube-api-access-f5lj7\") pod \"csi-node-driver-w9q52\" (UID: \"1aa7e374-f593-4448-a0b1-28887de63262\") " pod="calico-system/csi-node-driver-w9q52"
Jan 29 12:03:17.749316 kubelet[3185]: I0129 12:03:17.749295 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1aa7e374-f593-4448-a0b1-28887de63262-varrun\") pod \"csi-node-driver-w9q52\" (UID: \"1aa7e374-f593-4448-a0b1-28887de63262\") " pod="calico-system/csi-node-driver-w9q52"
Jan 29 12:03:17.750960 kubelet[3185]: I0129 12:03:17.750857 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1aa7e374-f593-4448-a0b1-28887de63262-socket-dir\") pod \"csi-node-driver-w9q52\" (UID: \"1aa7e374-f593-4448-a0b1-28887de63262\") " pod="calico-system/csi-node-driver-w9q52"
Jan 29 12:03:17.776365 containerd[1705]: time="2025-01-29T12:03:17.776234289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zwkrj,Uid:b581a4d2-8b43-4f79-be9f-43a4129be3e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"15ed577a012950c2dbbd0be563f4060ee948f487c62a43656a1998d9031d88c7\""
Jan 29 12:03:18.502143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount845136123.mount: Deactivated successfully.
Jan 29 12:03:18.767831 kubelet[3185]: E0129 12:03:18.767462 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9q52" podUID="1aa7e374-f593-4448-a0b1-28887de63262"
Jan 29 12:03:19.513851 containerd[1705]: time="2025-01-29T12:03:19.513791157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:03:19.516880 containerd[1705]: time="2025-01-29T12:03:19.516808632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 29 12:03:19.521991 containerd[1705]: time="2025-01-29T12:03:19.521958061Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:03:19.528311 containerd[1705]: time="2025-01-29T12:03:19.528240318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:03:19.529104 containerd[1705]: time="2025-01-29T12:03:19.528923035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.266665974s"
Jan 29 12:03:19.529104 containerd[1705]: time="2025-01-29T12:03:19.528960736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
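containerd's own accounting here (2.266665974s for the registry pull) is slightly tighter than the kubelet's firstStartedPulling→lastFinishedPulling window (~2.43s), which also covers queueing before the pull starts. When auditing a journal dump like this one, a throwaway extractor is handy; a hedged Go sketch that pulls image names and durations out of the "Pulled image ... in <duration>" lines (the regex is tuned to this log's escaped-quote format, nothing more):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches containerd's completion line as it appears in this journal, e.g.:
//   msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" ... in 2.266665974s"
// Note the literal backslash-escaped quotes around the image name.
var pulled = regexp.MustCompile(`Pulled image \\"([^"\\]+)\\".* in ([0-9.]+m?s)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := pulled.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-55s %s\n", m[1], m[2])
		}
	}
}
```

Fed this log on stdin it would print one line per completed pull, e.g. the typha pull above with its 2.266665974s duration.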
Jan 29 12:03:19.530361 containerd[1705]: time="2025-01-29T12:03:19.530329570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 12:03:19.547509 containerd[1705]: time="2025-01-29T12:03:19.547435097Z" level=info msg="CreateContainer within sandbox \"97828d69257b3bee83fa2ca639c51f0f331ff248110946ce17d79e41304857de\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 12:03:19.593489 containerd[1705]: time="2025-01-29T12:03:19.593313642Z" level=info msg="CreateContainer within sandbox \"97828d69257b3bee83fa2ca639c51f0f331ff248110946ce17d79e41304857de\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8aab4413067c5543bcf757754eb75af11feb67d2612e415c2ca7018759d69d62\""
Jan 29 12:03:19.594902 containerd[1705]: time="2025-01-29T12:03:19.594856580Z" level=info msg="StartContainer for \"8aab4413067c5543bcf757754eb75af11feb67d2612e415c2ca7018759d69d62\""
Jan 29 12:03:19.634661 systemd[1]: Started cri-containerd-8aab4413067c5543bcf757754eb75af11feb67d2612e415c2ca7018759d69d62.scope - libcontainer container 8aab4413067c5543bcf757754eb75af11feb67d2612e415c2ca7018759d69d62.
Jan 29 12:03:19.686780 containerd[1705]: time="2025-01-29T12:03:19.686528869Z" level=info msg="StartContainer for \"8aab4413067c5543bcf757754eb75af11feb67d2612e415c2ca7018759d69d62\" returns successfully"
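These entries trace the standard CRI lifecycle the kubelet drives for every pod: RunPodSandbox, then CreateContainer within the returned sandbox, then StartContainer with the returned container id. A skeletal Go client issuing the same sequence against the containerd CRI socket, using the real k8s.io/cri-api types; an illustrative sketch only, with mounts, env, and security settings elided down to the fields this log surfaces:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI endpoint (the path assumes containerd's default socket).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox — metadata mirrors the calico-typha entry above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-typha-78d65d5f6f-fdj7l",
			Uid:       "366a2472-9f5d-4e7d-b2cf-7c7e6836ce08",
			Namespace: "calico-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.29.1"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer — on success the runtime logs "returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```

The systemd "Started cri-containerd-<id>.scope" line in between is containerd placing the new container's shim into its own cgroup scope, which is why each container id shows up twice in this log.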
Jan 29 12:03:19.982110 kubelet[3185]: E0129 12:03:19.981795 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:03:19.982110 kubelet[3185]: W0129 12:03:19.981840 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:03:19.982110 kubelet[3185]: E0129 12:03:19.981855 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:19.982421 kubelet[3185]: E0129 12:03:19.982399 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.982479 kubelet[3185]: W0129 12:03:19.982421 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.982479 kubelet[3185]: E0129 12:03:19.982436 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.983239 kubelet[3185]: E0129 12:03:19.983194 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.983239 kubelet[3185]: W0129 12:03:19.983217 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.985486 kubelet[3185]: E0129 12:03:19.983497 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.985486 kubelet[3185]: E0129 12:03:19.984714 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.985486 kubelet[3185]: W0129 12:03:19.984726 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.985486 kubelet[3185]: E0129 12:03:19.984745 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.985486 kubelet[3185]: E0129 12:03:19.985223 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.985486 kubelet[3185]: W0129 12:03:19.985236 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.985486 kubelet[3185]: E0129 12:03:19.985249 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.986817 kubelet[3185]: E0129 12:03:19.986797 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.986817 kubelet[3185]: W0129 12:03:19.986815 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.987008 kubelet[3185]: E0129 12:03:19.986961 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:19.987364 kubelet[3185]: E0129 12:03:19.987317 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.987364 kubelet[3185]: W0129 12:03:19.987335 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.987572 kubelet[3185]: E0129 12:03:19.987444 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.987752 kubelet[3185]: E0129 12:03:19.987737 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.987826 kubelet[3185]: W0129 12:03:19.987753 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.987826 kubelet[3185]: E0129 12:03:19.987778 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.987983 kubelet[3185]: E0129 12:03:19.987970 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.988034 kubelet[3185]: W0129 12:03:19.987984 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.988034 kubelet[3185]: E0129 12:03:19.988010 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.988227 kubelet[3185]: E0129 12:03:19.988212 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.988320 kubelet[3185]: W0129 12:03:19.988228 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.988320 kubelet[3185]: E0129 12:03:19.988245 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.988952 kubelet[3185]: E0129 12:03:19.988520 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.988952 kubelet[3185]: W0129 12:03:19.988534 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.988952 kubelet[3185]: E0129 12:03:19.988552 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:19.988952 kubelet[3185]: E0129 12:03:19.988832 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.988952 kubelet[3185]: W0129 12:03:19.988842 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.988952 kubelet[3185]: E0129 12:03:19.988860 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.990268 kubelet[3185]: E0129 12:03:19.989051 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.990268 kubelet[3185]: W0129 12:03:19.989063 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.990268 kubelet[3185]: E0129 12:03:19.989076 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.990268 kubelet[3185]: E0129 12:03:19.989298 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.990268 kubelet[3185]: W0129 12:03:19.989307 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.990268 kubelet[3185]: E0129 12:03:19.989318 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.990268 kubelet[3185]: E0129 12:03:19.989685 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.990268 kubelet[3185]: W0129 12:03:19.989695 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.990268 kubelet[3185]: E0129 12:03:19.989709 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:19.991875 kubelet[3185]: E0129 12:03:19.991036 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:19.991875 kubelet[3185]: W0129 12:03:19.991048 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:19.991875 kubelet[3185]: E0129 12:03:19.991061 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:20.765490 kubelet[3185]: E0129 12:03:20.765328 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9q52" podUID="1aa7e374-f593-4448-a0b1-28887de63262" Jan 29 12:03:20.876022 containerd[1705]: time="2025-01-29T12:03:20.875944456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:20.879143 containerd[1705]: time="2025-01-29T12:03:20.879069234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 12:03:20.881607 kubelet[3185]: I0129 12:03:20.881575 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:03:20.884017 containerd[1705]: time="2025-01-29T12:03:20.883949155Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:20.888663 containerd[1705]: time="2025-01-29T12:03:20.888608572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:20.889492 containerd[1705]: time="2025-01-29T12:03:20.889242188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.358867816s" Jan 29 12:03:20.889492 containerd[1705]: time="2025-01-29T12:03:20.889284289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 12:03:20.892128 containerd[1705]: time="2025-01-29T12:03:20.892012257Z" level=info msg="CreateContainer within sandbox \"15ed577a012950c2dbbd0be563f4060ee948f487c62a43656a1998d9031d88c7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:03:20.933461 containerd[1705]: time="2025-01-29T12:03:20.933411990Z" level=info msg="CreateContainer within sandbox \"15ed577a012950c2dbbd0be563f4060ee948f487c62a43656a1998d9031d88c7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"87975f9d8ba899ea4278970c25ff57a4917d2f6221683009cb73a4d538dd0b30\"" Jan 29 12:03:20.935535 containerd[1705]: time="2025-01-29T12:03:20.933909302Z" level=info msg="StartContainer for \"87975f9d8ba899ea4278970c25ff57a4917d2f6221683009cb73a4d538dd0b30\"" Jan 29 12:03:20.973677 systemd[1]: Started cri-containerd-87975f9d8ba899ea4278970c25ff57a4917d2f6221683009cb73a4d538dd0b30.scope - libcontainer container 87975f9d8ba899ea4278970c25ff57a4917d2f6221683009cb73a4d538dd0b30. 
Jan 29 12:03:20.977493 kubelet[3185]: E0129 12:03:20.977298 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:20.977493 kubelet[3185]: W0129 12:03:20.977326 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:20.977493 kubelet[3185]: E0129 12:03:20.977356 3185 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:03:21.030007 containerd[1705]: time="2025-01-29T12:03:21.029830297Z" level=info msg="StartContainer for \"87975f9d8ba899ea4278970c25ff57a4917d2f6221683009cb73a4d538dd0b30\" returns successfully" Jan 29 12:03:21.043841 systemd[1]: cri-containerd-87975f9d8ba899ea4278970c25ff57a4917d2f6221683009cb73a4d538dd0b30.scope: Deactivated successfully. Jan 29 12:03:21.073091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87975f9d8ba899ea4278970c25ff57a4917d2f6221683009cb73a4d538dd0b30-rootfs.mount: Deactivated successfully.
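The kubelet bursts above all record one and the same FlexVolume probe failure: on every plugin re-probe, kubelet execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the single argument init and expects a JSON status object on stdout. The nodeagent~uds/uds binary is not installed on this node, so the call produces empty output and unmarshalling "" fails with "unexpected end of JSON input". A minimal sketch of a driver stub that would satisfy that handshake (illustrative only; the real nodeagent~uds driver ships with Istio's node agent, and the struct below is an assumption modeled on the FlexVolume calling convention):

```go
// Hypothetical FlexVolume driver stub. Kubelet's driver-call.go execs this
// binary as "<driver> init" and json-unmarshals stdout; an absent binary
// yields empty output, which is exactly the repeated error in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus is an assumed mirror of the fields kubelet reads back.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and declare that no attach/detach support is needed.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb the stub does not implement must still emit valid JSON.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

With such a stub present at the probed path, the init handshake parses and the "Error dynamically probing plugins" spam stops; the errors are otherwise harmless, which is why the flexvol-driver container start above proceeds regardless.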
Jan 29 12:03:21.912013 kubelet[3185]: I0129 12:03:21.911087 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78d65d5f6f-fdj7l" podStartSLOduration=3.642419968 podStartE2EDuration="5.911062292s" podCreationTimestamp="2025-01-29 12:03:16 +0000 UTC" firstStartedPulling="2025-01-29 12:03:17.261510441 +0000 UTC m=+12.599698524" lastFinishedPulling="2025-01-29 12:03:19.530152665 +0000 UTC m=+14.868340848" observedRunningTime="2025-01-29 12:03:19.896086599 +0000 UTC m=+15.234274782" watchObservedRunningTime="2025-01-29 12:03:21.911062292 +0000 UTC m=+17.249250475" Jan 29 12:03:22.364167 containerd[1705]: time="2025-01-29T12:03:22.364087699Z" level=info msg="shim disconnected" id=87975f9d8ba899ea4278970c25ff57a4917d2f6221683009cb73a4d538dd0b30 namespace=k8s.io Jan 29 12:03:22.364167 containerd[1705]: time="2025-01-29T12:03:22.364159201Z" level=warning msg="cleaning up after shim disconnected" id=87975f9d8ba899ea4278970c25ff57a4917d2f6221683009cb73a4d538dd0b30 namespace=k8s.io Jan 29 12:03:22.364167 containerd[1705]: time="2025-01-29T12:03:22.364171701Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:22.376780 containerd[1705]: time="2025-01-29T12:03:22.376714914Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:03:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:03:22.765866 kubelet[3185]: E0129 12:03:22.765033 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9q52" podUID="1aa7e374-f593-4448-a0b1-28887de63262" Jan 29 12:03:22.890309 containerd[1705]: time="2025-01-29T12:03:22.890225931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:03:24.767197 kubelet[3185]: E0129 12:03:24.765533 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9q52" podUID="1aa7e374-f593-4448-a0b1-28887de63262" Jan 29 12:03:26.765977 kubelet[3185]: E0129 12:03:26.765915 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9q52" podUID="1aa7e374-f593-4448-a0b1-28887de63262" Jan 29 12:03:26.899539 containerd[1705]: time="2025-01-29T12:03:26.899491415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:26.904876 containerd[1705]: time="2025-01-29T12:03:26.904810442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 12:03:26.908423 containerd[1705]: time="2025-01-29T12:03:26.908353026Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:26.913223 containerd[1705]: time="2025-01-29T12:03:26.913176841Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:26.913953 containerd[1705]: time="2025-01-29T12:03:26.913913659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.023183215s" Jan 29 12:03:26.914056 containerd[1705]: time="2025-01-29T12:03:26.913970060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 12:03:26.917259 containerd[1705]: time="2025-01-29T12:03:26.917076034Z" level=info msg="CreateContainer within sandbox \"15ed577a012950c2dbbd0be563f4060ee948f487c62a43656a1998d9031d88c7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:03:26.959313 containerd[1705]: time="2025-01-29T12:03:26.959212239Z" level=info msg="CreateContainer within sandbox \"15ed577a012950c2dbbd0be563f4060ee948f487c62a43656a1998d9031d88c7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"54674b18fbf8e94cdd859b1cf85d80410a4c2fb91e435334864f5b3e12437d3e\"" Jan 29 12:03:26.960730 containerd[1705]: time="2025-01-29T12:03:26.959970357Z" level=info msg="StartContainer for \"54674b18fbf8e94cdd859b1cf85d80410a4c2fb91e435334864f5b3e12437d3e\"" Jan 29 12:03:27.010649 systemd[1]: Started cri-containerd-54674b18fbf8e94cdd859b1cf85d80410a4c2fb91e435334864f5b3e12437d3e.scope - libcontainer container 54674b18fbf8e94cdd859b1cf85d80410a4c2fb91e435334864f5b3e12437d3e. Jan 29 12:03:27.043496 containerd[1705]: time="2025-01-29T12:03:27.043195042Z" level=info msg="StartContainer for \"54674b18fbf8e94cdd859b1cf85d80410a4c2fb91e435334864f5b3e12437d3e\" returns successfully" Jan 29 12:03:28.468643 containerd[1705]: time="2025-01-29T12:03:28.468579141Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:03:28.470715 systemd[1]: cri-containerd-54674b18fbf8e94cdd859b1cf85d80410a4c2fb91e435334864f5b3e12437d3e.scope: Deactivated successfully. Jan 29 12:03:28.495781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54674b18fbf8e94cdd859b1cf85d80410a4c2fb91e435334864f5b3e12437d3e-rootfs.mount: Deactivated successfully. 
Jan 29 12:03:28.541560 kubelet[3185]: I0129 12:03:28.541201 3185 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 12:03:29.007174 kubelet[3185]: I0129 12:03:28.841188 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dk4\" (UniqueName: \"kubernetes.io/projected/adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd-kube-api-access-r7dk4\") pod \"calico-apiserver-575bfcd495-kmblb\" (UID: \"adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd\") " pod="calico-apiserver/calico-apiserver-575bfcd495-kmblb" Jan 29 12:03:29.007174 kubelet[3185]: I0129 12:03:28.841254 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcs7t\" (UniqueName: \"kubernetes.io/projected/af2b2dc1-1a49-441f-a299-db8f0e304159-kube-api-access-mcs7t\") pod \"coredns-6f6b679f8f-vv8px\" (UID: \"af2b2dc1-1a49-441f-a299-db8f0e304159\") " pod="kube-system/coredns-6f6b679f8f-vv8px" Jan 29 12:03:29.007174 kubelet[3185]: I0129 12:03:28.841285 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd-calico-apiserver-certs\") pod \"calico-apiserver-575bfcd495-kmblb\" (UID: \"adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd\") " pod="calico-apiserver/calico-apiserver-575bfcd495-kmblb" Jan 29 12:03:29.007174 kubelet[3185]: I0129 12:03:28.841310 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc6fb3df-feb3-4fa2-8362-572d6f010fd6-tigera-ca-bundle\") pod \"calico-kube-controllers-9dd89bbdf-wbdmw\" (UID: \"bc6fb3df-feb3-4fa2-8362-572d6f010fd6\") " pod="calico-system/calico-kube-controllers-9dd89bbdf-wbdmw" Jan 29 12:03:29.007174 kubelet[3185]: I0129 12:03:28.841382 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbtk8\" (UniqueName: \"kubernetes.io/projected/94cd193a-be29-45dd-83ae-68e9d8fc5a60-kube-api-access-kbtk8\") pod \"coredns-6f6b679f8f-hpkrn\" (UID: \"94cd193a-be29-45dd-83ae-68e9d8fc5a60\") " pod="kube-system/coredns-6f6b679f8f-hpkrn" Jan 29 12:03:28.673641 systemd[1]: Created slice kubepods-burstable-podaf2b2dc1_1a49_441f_a299_db8f0e304159.slice - libcontainer container kubepods-burstable-podaf2b2dc1_1a49_441f_a299_db8f0e304159.slice. 
Jan 29 12:03:29.007668 kubelet[3185]: I0129 12:03:28.841477 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94cd193a-be29-45dd-83ae-68e9d8fc5a60-config-volume\") pod \"coredns-6f6b679f8f-hpkrn\" (UID: \"94cd193a-be29-45dd-83ae-68e9d8fc5a60\") " pod="kube-system/coredns-6f6b679f8f-hpkrn" Jan 29 12:03:29.007668 kubelet[3185]: I0129 12:03:28.841521 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2b2dc1-1a49-441f-a299-db8f0e304159-config-volume\") pod \"coredns-6f6b679f8f-vv8px\" (UID: \"af2b2dc1-1a49-441f-a299-db8f0e304159\") " pod="kube-system/coredns-6f6b679f8f-vv8px" Jan 29 12:03:29.007668 kubelet[3185]: I0129 12:03:28.841563 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6hnh\" (UniqueName: \"kubernetes.io/projected/eb717d4e-afc9-4dca-98ec-64897dda3bda-kube-api-access-v6hnh\") pod \"calico-apiserver-575bfcd495-6cwr2\" (UID: \"eb717d4e-afc9-4dca-98ec-64897dda3bda\") " pod="calico-apiserver/calico-apiserver-575bfcd495-6cwr2" Jan 29 12:03:29.007668 kubelet[3185]: I0129 12:03:28.841599 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnq68\" (UniqueName: \"kubernetes.io/projected/bc6fb3df-feb3-4fa2-8362-572d6f010fd6-kube-api-access-nnq68\") pod \"calico-kube-controllers-9dd89bbdf-wbdmw\" (UID: \"bc6fb3df-feb3-4fa2-8362-572d6f010fd6\") " pod="calico-system/calico-kube-controllers-9dd89bbdf-wbdmw" Jan 29 12:03:29.007668 kubelet[3185]: I0129 12:03:28.841623 3185 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eb717d4e-afc9-4dca-98ec-64897dda3bda-calico-apiserver-certs\") pod \"calico-apiserver-575bfcd495-6cwr2\" (UID: \"eb717d4e-afc9-4dca-98ec-64897dda3bda\") " pod="calico-apiserver/calico-apiserver-575bfcd495-6cwr2" Jan 29 12:03:28.719326 systemd[1]: Created slice kubepods-burstable-pod94cd193a_be29_45dd_83ae_68e9d8fc5a60.slice - libcontainer container kubepods-burstable-pod94cd193a_be29_45dd_83ae_68e9d8fc5a60.slice. Jan 29 12:03:28.725998 systemd[1]: Created slice kubepods-besteffort-podadbd1f21_4a0c_4bf4_85d7_11a4bbf7f3bd.slice - libcontainer container kubepods-besteffort-podadbd1f21_4a0c_4bf4_85d7_11a4bbf7f3bd.slice. Jan 29 12:03:28.731121 systemd[1]: Created slice kubepods-besteffort-podbc6fb3df_feb3_4fa2_8362_572d6f010fd6.slice - libcontainer container kubepods-besteffort-podbc6fb3df_feb3_4fa2_8362_572d6f010fd6.slice. Jan 29 12:03:28.736838 systemd[1]: Created slice kubepods-besteffort-podeb717d4e_afc9_4dca_98ec_64897dda3bda.slice - libcontainer container kubepods-besteffort-podeb717d4e_afc9_4dca_98ec_64897dda3bda.slice. Jan 29 12:03:28.772412 systemd[1]: Created slice kubepods-besteffort-pod1aa7e374_f593_4448_a0b1_28887de63262.slice - libcontainer container kubepods-besteffort-pod1aa7e374_f593_4448_a0b1_28887de63262.slice. 
Jan 29 12:03:29.017916 containerd[1705]: time="2025-01-29T12:03:29.016483110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9q52,Uid:1aa7e374-f593-4448-a0b1-28887de63262,Namespace:calico-system,Attempt:0,}" Jan 29 12:03:29.313854 containerd[1705]: time="2025-01-29T12:03:29.313718900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vv8px,Uid:af2b2dc1-1a49-441f-a299-db8f0e304159,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:29.318922 containerd[1705]: time="2025-01-29T12:03:29.318537615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575bfcd495-6cwr2,Uid:eb717d4e-afc9-4dca-98ec-64897dda3bda,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:03:29.318922 containerd[1705]: time="2025-01-29T12:03:29.318625217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hpkrn,Uid:94cd193a-be29-45dd-83ae-68e9d8fc5a60,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:29.318922 containerd[1705]: time="2025-01-29T12:03:29.318882623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575bfcd495-kmblb,Uid:adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:03:29.342169 containerd[1705]: time="2025-01-29T12:03:29.342113977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9dd89bbdf-wbdmw,Uid:bc6fb3df-feb3-4fa2-8362-572d6f010fd6,Namespace:calico-system,Attempt:0,}" Jan 29 12:03:30.123358 containerd[1705]: time="2025-01-29T12:03:30.123230608Z" level=info msg="shim disconnected" id=54674b18fbf8e94cdd859b1cf85d80410a4c2fb91e435334864f5b3e12437d3e namespace=k8s.io Jan 29 12:03:30.123358 containerd[1705]: time="2025-01-29T12:03:30.123299210Z" level=warning msg="cleaning up after shim disconnected" id=54674b18fbf8e94cdd859b1cf85d80410a4c2fb91e435334864f5b3e12437d3e namespace=k8s.io Jan 29 12:03:30.123358 containerd[1705]: time="2025-01-29T12:03:30.123313310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:30.525032 containerd[1705]: time="2025-01-29T12:03:30.523651759Z" level=error msg="Failed to destroy network for sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.525275 containerd[1705]: time="2025-01-29T12:03:30.525231897Z" level=error msg="encountered an error cleaning up failed sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.526047 containerd[1705]: time="2025-01-29T12:03:30.525574105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vv8px,Uid:af2b2dc1-1a49-441f-a299-db8f0e304159,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.526709 kubelet[3185]: E0129 12:03:30.526660 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.527112 kubelet[3185]: E0129 12:03:30.526765 3185 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vv8px" Jan 29 12:03:30.527112 kubelet[3185]: E0129 12:03:30.526799 3185 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vv8px" Jan 29 12:03:30.527112 kubelet[3185]: E0129 12:03:30.526867 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vv8px_kube-system(af2b2dc1-1a49-441f-a299-db8f0e304159)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vv8px_kube-system(af2b2dc1-1a49-441f-a299-db8f0e304159)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vv8px" podUID="af2b2dc1-1a49-441f-a299-db8f0e304159" Jan 29 12:03:30.528855 containerd[1705]: time="2025-01-29T12:03:30.527614954Z" level=error msg="Failed to destroy network for sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.529481 containerd[1705]: time="2025-01-29T12:03:30.529266193Z" level=error msg="encountered an error cleaning up failed sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.529481 containerd[1705]: time="2025-01-29T12:03:30.529330895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hpkrn,Uid:94cd193a-be29-45dd-83ae-68e9d8fc5a60,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.529610 kubelet[3185]: E0129 12:03:30.529576 3185 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.529664 kubelet[3185]: E0129 12:03:30.529636 3185 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hpkrn" Jan 29 12:03:30.529719 kubelet[3185]: E0129 12:03:30.529661 3185 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hpkrn" Jan 29 12:03:30.529764 kubelet[3185]: E0129 12:03:30.529710 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hpkrn_kube-system(94cd193a-be29-45dd-83ae-68e9d8fc5a60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hpkrn_kube-system(94cd193a-be29-45dd-83ae-68e9d8fc5a60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hpkrn" podUID="94cd193a-be29-45dd-83ae-68e9d8fc5a60" Jan 29 12:03:30.547220 containerd[1705]: time="2025-01-29T12:03:30.546869213Z" level=error msg="Failed to destroy network for sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.548681 containerd[1705]: time="2025-01-29T12:03:30.548605254Z" level=error msg="encountered an error cleaning up failed sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.548892 containerd[1705]: time="2025-01-29T12:03:30.548851760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575bfcd495-6cwr2,Uid:eb717d4e-afc9-4dca-98ec-64897dda3bda,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.550485 kubelet[3185]: 
E0129 12:03:30.549272 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.550485 kubelet[3185]: E0129 12:03:30.549357 3185 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575bfcd495-6cwr2" Jan 29 12:03:30.550485 kubelet[3185]: E0129 12:03:30.549384 3185 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575bfcd495-6cwr2" Jan 29 12:03:30.550685 kubelet[3185]: E0129 12:03:30.549443 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-575bfcd495-6cwr2_calico-apiserver(eb717d4e-afc9-4dca-98ec-64897dda3bda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-575bfcd495-6cwr2_calico-apiserver(eb717d4e-afc9-4dca-98ec-64897dda3bda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575bfcd495-6cwr2" podUID="eb717d4e-afc9-4dca-98ec-64897dda3bda" Jan 29 12:03:30.553855 containerd[1705]: time="2025-01-29T12:03:30.553796878Z" level=error msg="Failed to destroy network for sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.554333 containerd[1705]: time="2025-01-29T12:03:30.554303390Z" level=error msg="encountered an error cleaning up failed sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.554503 containerd[1705]: time="2025-01-29T12:03:30.554447994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9q52,Uid:1aa7e374-f593-4448-a0b1-28887de63262,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.554957 kubelet[3185]: E0129 12:03:30.554761 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.554957 kubelet[3185]: E0129 12:03:30.554820 3185 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9q52" Jan 29 12:03:30.554957 kubelet[3185]: E0129 12:03:30.554845 3185 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9q52" Jan 29 12:03:30.555142 kubelet[3185]: E0129 12:03:30.554900 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9q52_calico-system(1aa7e374-f593-4448-a0b1-28887de63262)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9q52_calico-system(1aa7e374-f593-4448-a0b1-28887de63262)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9q52" podUID="1aa7e374-f593-4448-a0b1-28887de63262" Jan 29 12:03:30.565549 containerd[1705]: time="2025-01-29T12:03:30.565508258Z" level=error msg="Failed to destroy network for sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.566088 containerd[1705]: time="2025-01-29T12:03:30.566052771Z" level=error msg="encountered an error cleaning up failed sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.566182 containerd[1705]: time="2025-01-29T12:03:30.566113772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575bfcd495-kmblb,Uid:adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.566602 kubelet[3185]: E0129 12:03:30.566313 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.566602 kubelet[3185]: E0129 12:03:30.566366 3185 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575bfcd495-kmblb" Jan 29 12:03:30.566602 kubelet[3185]: E0129 12:03:30.566388 3185 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575bfcd495-kmblb" Jan 29 12:03:30.566753 kubelet[3185]: E0129 12:03:30.566436 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-575bfcd495-kmblb_calico-apiserver(adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-575bfcd495-kmblb_calico-apiserver(adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575bfcd495-kmblb" podUID="adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd" Jan 29 12:03:30.571235 containerd[1705]: time="2025-01-29T12:03:30.571199193Z" level=error msg="Failed to destroy network for sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.571544 containerd[1705]: time="2025-01-29T12:03:30.571512501Z" level=error msg="encountered an error cleaning up failed sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.571626 containerd[1705]: time="2025-01-29T12:03:30.571577702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9dd89bbdf-wbdmw,Uid:bc6fb3df-feb3-4fa2-8362-572d6f010fd6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.571797 kubelet[3185]: E0129 12:03:30.571764 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:30.571888 kubelet[3185]: E0129 12:03:30.571817 3185 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9dd89bbdf-wbdmw" Jan 29 12:03:30.571888 kubelet[3185]: E0129 12:03:30.571839 3185 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9dd89bbdf-wbdmw" Jan 29 12:03:30.571981 kubelet[3185]: E0129 12:03:30.571898 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9dd89bbdf-wbdmw_calico-system(bc6fb3df-feb3-4fa2-8362-572d6f010fd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9dd89bbdf-wbdmw_calico-system(bc6fb3df-feb3-4fa2-8362-572d6f010fd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9dd89bbdf-wbdmw" podUID="bc6fb3df-feb3-4fa2-8362-572d6f010fd6" Jan 29 12:03:30.909763 kubelet[3185]: I0129 12:03:30.909517 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:03:30.911370 containerd[1705]: time="2025-01-29T12:03:30.910767493Z" level=info msg="StopPodSandbox for \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\"" Jan 29 12:03:30.911370 containerd[1705]: time="2025-01-29T12:03:30.911017599Z" level=info msg="Ensure that sandbox 658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241 in task-service has been cleanup successfully" Jan 29 12:03:30.912660 kubelet[3185]: I0129 12:03:30.912621 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:03:30.913431 containerd[1705]: time="2025-01-29T12:03:30.913363055Z" level=info msg="StopPodSandbox for 
\"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\"" Jan 29 12:03:30.913711 containerd[1705]: time="2025-01-29T12:03:30.913651562Z" level=info msg="Ensure that sandbox f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29 in task-service has been cleanup successfully" Jan 29 12:03:30.916839 kubelet[3185]: I0129 12:03:30.916408 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:03:30.917285 containerd[1705]: time="2025-01-29T12:03:30.917255748Z" level=info msg="StopPodSandbox for \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\"" Jan 29 12:03:30.917774 containerd[1705]: time="2025-01-29T12:03:30.917739959Z" level=info msg="Ensure that sandbox 0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548 in task-service has been cleanup successfully" Jan 29 12:03:30.920920 kubelet[3185]: I0129 12:03:30.920036 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:03:30.921968 containerd[1705]: time="2025-01-29T12:03:30.921122240Z" level=info msg="StopPodSandbox for \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\"" Jan 29 12:03:30.921968 containerd[1705]: time="2025-01-29T12:03:30.921319345Z" level=info msg="Ensure that sandbox 19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4 in task-service has been cleanup successfully" Jan 29 12:03:30.930135 kubelet[3185]: I0129 12:03:30.929994 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:03:30.934018 containerd[1705]: time="2025-01-29T12:03:30.933976846Z" level=info msg="StopPodSandbox for \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\"" Jan 29 12:03:30.935448 containerd[1705]: time="2025-01-29T12:03:30.935348979Z" level=info msg="Ensure that sandbox 08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0 in task-service has been cleanup successfully" Jan 29 12:03:30.936667 kubelet[3185]: I0129 12:03:30.936629 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:03:30.942764 containerd[1705]: time="2025-01-29T12:03:30.942723055Z" level=info msg="StopPodSandbox for \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\"" Jan 29 12:03:30.942988 containerd[1705]: time="2025-01-29T12:03:30.942917960Z" level=info msg="Ensure that sandbox c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223 in task-service has been cleanup successfully" Jan 29 12:03:30.963486 containerd[1705]: time="2025-01-29T12:03:30.963228443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:03:31.045616 containerd[1705]: time="2025-01-29T12:03:31.045431497Z" level=error msg="StopPodSandbox for \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\" failed" error="failed to destroy network for sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:31.045785 kubelet[3185]: E0129 12:03:31.045747 3185 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:03:31.045964 kubelet[3185]: E0129 12:03:31.045833 3185 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241"} Jan 29 12:03:31.045964 kubelet[3185]: E0129 12:03:31.045918 3185 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94cd193a-be29-45dd-83ae-68e9d8fc5a60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:31.046127 kubelet[3185]: E0129 12:03:31.045957 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94cd193a-be29-45dd-83ae-68e9d8fc5a60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hpkrn" podUID="94cd193a-be29-45dd-83ae-68e9d8fc5a60" Jan 29 12:03:31.052533 containerd[1705]: time="2025-01-29T12:03:31.052330661Z" level=error msg="StopPodSandbox for \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\" failed" error="failed to destroy network for sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:31.052743 kubelet[3185]: E0129 12:03:31.052689 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:03:31.053049 kubelet[3185]: E0129 12:03:31.052746 3185 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29"} Jan 29 12:03:31.053116 kubelet[3185]: E0129 12:03:31.053050 3185 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc6fb3df-feb3-4fa2-8362-572d6f010fd6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:31.053116 kubelet[3185]: E0129 12:03:31.053101 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc6fb3df-feb3-4fa2-8362-572d6f010fd6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9dd89bbdf-wbdmw" podUID="bc6fb3df-feb3-4fa2-8362-572d6f010fd6" Jan 29 12:03:31.063100 containerd[1705]: time="2025-01-29T12:03:31.062906312Z" level=error msg="StopPodSandbox for \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\" failed" error="failed to destroy network for sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:31.063445 kubelet[3185]: E0129 12:03:31.063403 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:03:31.063582 kubelet[3185]: E0129 12:03:31.063458 3185 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0"} Jan 29 12:03:31.063582 kubelet[3185]: E0129 12:03:31.063531 3185 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af2b2dc1-1a49-441f-a299-db8f0e304159\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:31.063582 kubelet[3185]: E0129 12:03:31.063558 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af2b2dc1-1a49-441f-a299-db8f0e304159\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vv8px" podUID="af2b2dc1-1a49-441f-a299-db8f0e304159" Jan 29 12:03:31.078622 containerd[1705]: time="2025-01-29T12:03:31.078425281Z" level=error msg="StopPodSandbox for \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\" failed" error="failed to destroy network for sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:31.078840 kubelet[3185]: E0129 12:03:31.078792 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:03:31.078936 kubelet[3185]: E0129 12:03:31.078867 3185 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223"} Jan 29 12:03:31.078936 kubelet[3185]: E0129 12:03:31.078927 3185 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1aa7e374-f593-4448-a0b1-28887de63262\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:31.079074 kubelet[3185]: E0129 12:03:31.078964 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1aa7e374-f593-4448-a0b1-28887de63262\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9q52" podUID="1aa7e374-f593-4448-a0b1-28887de63262" Jan 29 12:03:31.084434 containerd[1705]: time="2025-01-29T12:03:31.084290121Z" level=error msg="StopPodSandbox for \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\" failed" error="failed to destroy network for sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:31.084748 kubelet[3185]: E0129 12:03:31.084563 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:03:31.084748 kubelet[3185]: E0129 12:03:31.084619 3185 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4"} Jan 29 12:03:31.084748 kubelet[3185]: E0129 12:03:31.084662 3185 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb717d4e-afc9-4dca-98ec-64897dda3bda\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:31.084748 kubelet[3185]: E0129 12:03:31.084691 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb717d4e-afc9-4dca-98ec-64897dda3bda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575bfcd495-6cwr2" podUID="eb717d4e-afc9-4dca-98ec-64897dda3bda" Jan 29 12:03:31.086740 containerd[1705]: time="2025-01-29T12:03:31.086699678Z" level=error msg="StopPodSandbox for \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\" failed" error="failed to destroy network for sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:31.086919 kubelet[3185]: E0129 12:03:31.086882 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:03:31.087015 kubelet[3185]: E0129 12:03:31.086932 3185 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548"} Jan 29 12:03:31.087015 kubelet[3185]: E0129 12:03:31.086970 3185 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:31.087015 kubelet[3185]: E0129 12:03:31.087002 3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575bfcd495-kmblb" podUID="adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd" Jan 29 12:03:31.265680 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4-shm.mount: Deactivated successfully. Jan 29 12:03:31.265788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0-shm.mount: Deactivated successfully. Jan 29 12:03:31.265871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223-shm.mount: Deactivated successfully. Jan 29 12:03:31.265942 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241-shm.mount: Deactivated successfully. Jan 29 12:03:37.209274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466939303.mount: Deactivated successfully. Jan 29 12:03:37.271144 containerd[1705]: time="2025-01-29T12:03:37.271074275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:37.273093 containerd[1705]: time="2025-01-29T12:03:37.273034122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 12:03:37.277533 containerd[1705]: time="2025-01-29T12:03:37.277455327Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:37.282646 containerd[1705]: time="2025-01-29T12:03:37.282559448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:37.283724 containerd[1705]: time="2025-01-29T12:03:37.283238865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.319761815s" Jan 29 12:03:37.283724 containerd[1705]: time="2025-01-29T12:03:37.283289366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 12:03:37.302731 containerd[1705]: time="2025-01-29T12:03:37.302678527Z" level=info msg="CreateContainer within sandbox \"15ed577a012950c2dbbd0be563f4060ee948f487c62a43656a1998d9031d88c7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:03:37.353027 containerd[1705]: time="2025-01-29T12:03:37.352968822Z" level=info msg="CreateContainer within sandbox \"15ed577a012950c2dbbd0be563f4060ee948f487c62a43656a1998d9031d88c7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f670f77bec0bd834b8c3cfc15b6efb043d455372ef603835e6344b9124b1df8d\"" Jan 29 12:03:37.353874 containerd[1705]: time="2025-01-29T12:03:37.353786941Z" level=info msg="StartContainer for \"f670f77bec0bd834b8c3cfc15b6efb043d455372ef603835e6344b9124b1df8d\"" Jan 29 12:03:37.388740 systemd[1]: Started cri-containerd-f670f77bec0bd834b8c3cfc15b6efb043d455372ef603835e6344b9124b1df8d.scope - libcontainer container f670f77bec0bd834b8c3cfc15b6efb043d455372ef603835e6344b9124b1df8d. 
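Every RunPodSandbox and StopPodSandbox failure above bottoms out in the same error: the Calico CNI plugin cannot stat /var/lib/calico/nodename, and that file is written by calico-node itself, so no pod networking can work until the node agent runs. The entries here show the cure arriving: ghcr.io/flatcar/calico/node:v3.29.1 finishes pulling (142742010 bytes in 6.319761815s, roughly 21.5 MiB/s) and the calico-node container starts. A minimal Go sketch of the guard implied by the error text, an approximation rather than Calico's actual source:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodename approximates the check behind each failure above: read the
    // file that calico-node writes at startup, and append the same hint
    // the logged errors carry.
    func nodename() (string, error) {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Println(err) // the message wrapped into every sandbox error above
            return
        }
        fmt.Println("node:", name)
    }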
Jan 29 12:03:37.429168 containerd[1705]: time="2025-01-29T12:03:37.428850826Z" level=info msg="StartContainer for \"f670f77bec0bd834b8c3cfc15b6efb043d455372ef603835e6344b9124b1df8d\" returns successfully" Jan 29 12:03:37.707364 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:03:37.707571 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 29 12:03:38.061146 kubelet[3185]: I0129 12:03:38.060961 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zwkrj" podStartSLOduration=1.554360287 podStartE2EDuration="21.06093115s" podCreationTimestamp="2025-01-29 12:03:17 +0000 UTC" firstStartedPulling="2025-01-29 12:03:17.777714426 +0000 UTC m=+13.115902509" lastFinishedPulling="2025-01-29 12:03:37.284285189 +0000 UTC m=+32.622473372" observedRunningTime="2025-01-29 12:03:38.060161731 +0000 UTC m=+33.398349914" watchObservedRunningTime="2025-01-29 12:03:38.06093115 +0000 UTC m=+33.399119233" Jan 29 12:03:43.766542 containerd[1705]: time="2025-01-29T12:03:43.766149216Z" level=info msg="StopPodSandbox for \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\"" Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.818 [INFO][4607] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.819 [INFO][4607] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" iface="eth0" netns="/var/run/netns/cni-9bb2ba98-3b99-018a-6b0d-472682891b97" Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.819 [INFO][4607] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" iface="eth0" netns="/var/run/netns/cni-9bb2ba98-3b99-018a-6b0d-472682891b97" Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.820 [INFO][4607] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" iface="eth0" netns="/var/run/netns/cni-9bb2ba98-3b99-018a-6b0d-472682891b97" Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.820 [INFO][4607] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.820 [INFO][4607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.839 [INFO][4613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" HandleID="k8s-pod-network.c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.839 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.839 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
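The pod_startup_latency_tracker record above is internally consistent: watchObservedRunningTime minus podCreationTimestamp gives exactly the reported podStartE2EDuration, and subtracting the image-pull window (the difference of the two monotonic m=+ offsets) yields podStartSLOduration. A small Go check using the timestamp layout kubelet prints; the monotonic suffix has to be stripped before parsing:

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    // parseKubeletTime parses timestamps as printed above, dropping the
    // monotonic-clock suffix ("m=+33.399119233") when present.
    func parseKubeletTime(s string) (time.Time, error) {
        if i := strings.Index(s, " m="); i >= 0 {
            s = s[:i]
        }
        return time.Parse("2006-01-02 15:04:05 -0700 MST", s)
    }

    func main() {
        created, _ := parseKubeletTime("2025-01-29 12:03:17 +0000 UTC")
        running, _ := parseKubeletTime("2025-01-29 12:03:38.06093115 +0000 UTC m=+33.399119233")
        fmt.Println(running.Sub(created)) // 21.06093115s, matching podStartE2EDuration
        // podStartSLOduration excludes the pull window:
        // 21.06093115s - (32.622473372s - 13.115902509s) = 1.554360287s
    }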
Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.845 [WARNING][4613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" HandleID="k8s-pod-network.c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.846 [INFO][4613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" HandleID="k8s-pod-network.c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.848 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:43.851687 containerd[1705]: 2025-01-29 12:03:43.850 [INFO][4607] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:03:43.852983 containerd[1705]: time="2025-01-29T12:03:43.852401192Z" level=info msg="TearDown network for sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\" successfully" Jan 29 12:03:43.852983 containerd[1705]: time="2025-01-29T12:03:43.852446293Z" level=info msg="StopPodSandbox for \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\" returns successfully" Jan 29 12:03:43.854321 containerd[1705]: time="2025-01-29T12:03:43.853822526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9q52,Uid:1aa7e374-f593-4448-a0b1-28887de63262,Namespace:calico-system,Attempt:1,}" Jan 29 12:03:43.856443 systemd[1]: run-netns-cni\x2d9bb2ba98\x2d3b99\x2d018a\x2d6b0d\x2d472682891b97.mount: Deactivated successfully. 
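Note the contrast with 12:03:31: the same sandbox c049e1b1… that could not be destroyed then now tears down cleanly, and the IPAM WARNING ("Asked to release address but it doesn't exist. Ignoring") is deliberately non-fatal. CNI DEL is specified to be idempotent, so cleanup that finds nothing left to remove still succeeds, which is what lets kubelet retry the pod with Attempt:1 immediately afterwards. A toy Go sketch of that contract; releaseIP and the sentinel error are illustrative stand-ins, not Calico's API:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("address not found")

    // releaseIP stands in for the IPAM release call; here it always reports
    // not-found, as the WARNING above does.
    func releaseIP(handleID string) error { return errNotFound }

    // cniDel treats "nothing to release" as success, so repeated deletes of
    // a half-created sandbox converge instead of failing forever.
    func cniDel(handleID string) error {
        if err := releaseIP(handleID); err != nil && !errors.Is(err, errNotFound) {
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(cniDel("k8s-pod-network.c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223")) // <nil>
    }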
Jan 29 12:03:44.009677 systemd-networkd[1452]: cali740fbeb7b52: Link UP Jan 29 12:03:44.009933 systemd-networkd[1452]: cali740fbeb7b52: Gained carrier Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.917 [INFO][4620] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.927 [INFO][4620] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0 csi-node-driver- calico-system 1aa7e374-f593-4448-a0b1-28887de63262 767 0 2025-01-29 12:03:17 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-56ab0c4267 csi-node-driver-w9q52 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali740fbeb7b52 [] []}} ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Namespace="calico-system" Pod="csi-node-driver-w9q52" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.927 [INFO][4620] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Namespace="calico-system" Pod="csi-node-driver-w9q52" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.956 [INFO][4630] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" HandleID="k8s-pod-network.4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.966 [INFO][4630] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" HandleID="k8s-pod-network.4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-56ab0c4267", "pod":"csi-node-driver-w9q52", "timestamp":"2025-01-29 12:03:43.956236692 +0000 UTC"}, Hostname:"ci-4081.3.0-a-56ab0c4267", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.966 [INFO][4630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.966 [INFO][4630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
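The IPAM request above asks for exactly one IPv4 address and no IPv6 (IPv4=1 IPv6=0), keyed by a handle built from the sandbox container ID; the "k8s-pod-network." prefix is visible in every HandleID in this trace. A small Go sketch of that convention as observed here, not Calico's exact constructor:

    package main

    import "fmt"

    // autoAssignArgs mirrors the request fields logged above.
    type autoAssignArgs struct {
        Num4, Num6 int
        HandleID   string
    }

    // handleID reproduces the "k8s-pod-network.<containerID>" pattern seen
    // in each ipam_plugin.go entry.
    func handleID(containerID string) string {
        return "k8s-pod-network." + containerID
    }

    func main() {
        args := autoAssignArgs{
            Num4:     1,
            Num6:     0,
            HandleID: handleID("4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba"),
        }
        fmt.Printf("%+v\n", args)
    }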
Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.966 [INFO][4630] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-56ab0c4267' Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.968 [INFO][4630] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.971 [INFO][4630] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.975 [INFO][4630] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.976 [INFO][4630] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.978 [INFO][4630] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.978 [INFO][4630] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.979 [INFO][4630] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.983 [INFO][4630] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.995 [INFO][4630] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.129/26] block=192.168.70.128/26 handle="k8s-pod-network.4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.995 [INFO][4630] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.129/26] handle="k8s-pod-network.4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.998 [INFO][4630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
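That trace is the complete IPAM claim: confirm the host's affinity for block 192.168.70.128/26, create a handle, write the block back to claim an address, then release the host-wide lock. The block arithmetic checks out with Go's net/netip (a sketch of the math only, not of Calico's allocator):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // From the trace above: the host's affine block and the claimed IP.
        block := netip.MustParsePrefix("192.168.70.128/26")
        fmt.Println(1 << (32 - block.Bits())) // 64 addresses per /26 block

        claimed := netip.MustParseAddr("192.168.70.129")
        fmt.Println(block.Contains(claimed)) // true: .129 is the first address after the block base
    }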
Jan 29 12:03:44.031568 containerd[1705]: 2025-01-29 12:03:43.998 [INFO][4630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.129/26] IPv6=[] ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" HandleID="k8s-pod-network.4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:03:44.032927 containerd[1705]: 2025-01-29 12:03:44.001 [INFO][4620] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Namespace="calico-system" Pod="csi-node-driver-w9q52" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1aa7e374-f593-4448-a0b1-28887de63262", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"", Pod:"csi-node-driver-w9q52", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali740fbeb7b52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:44.032927 containerd[1705]: 2025-01-29 12:03:44.001 [INFO][4620] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.129/32] ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Namespace="calico-system" Pod="csi-node-driver-w9q52" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:03:44.032927 containerd[1705]: 2025-01-29 12:03:44.001 [INFO][4620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali740fbeb7b52 ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Namespace="calico-system" Pod="csi-node-driver-w9q52" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:03:44.032927 containerd[1705]: 2025-01-29 12:03:44.007 [INFO][4620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Namespace="calico-system" Pod="csi-node-driver-w9q52" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:03:44.032927 containerd[1705]: 2025-01-29 12:03:44.008 [INFO][4620] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Namespace="calico-system" Pod="csi-node-driver-w9q52" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1aa7e374-f593-4448-a0b1-28887de63262", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba", Pod:"csi-node-driver-w9q52", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali740fbeb7b52", MAC:"ce:27:e0:39:e2:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:44.032927 containerd[1705]: 2025-01-29 12:03:44.029 [INFO][4620] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba" Namespace="calico-system" Pod="csi-node-driver-w9q52" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:03:44.056833 containerd[1705]: time="2025-01-29T12:03:44.056692310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:44.057073 containerd[1705]: time="2025-01-29T12:03:44.056778412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:44.057073 containerd[1705]: time="2025-01-29T12:03:44.056935616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:44.057935 containerd[1705]: time="2025-01-29T12:03:44.057730135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:44.088671 systemd[1]: Started cri-containerd-4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba.scope - libcontainer container 4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba. 
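Once the endpoint is wired up, the sandbox is started under a transient systemd scope whose name is visibly derived from the container ID: cri-containerd-4dd6fb9e....scope. A small sketch of that naming as it appears in these entries, with a sanity check that the ID is the 64-hex-digit form used throughout this log (an illustration of the observed pattern, not containerd's code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // IDs in this log are 64 lowercase hex digits.
    var idRe = regexp.MustCompile(`^[0-9a-f]{64}$`)

    // scopeName builds the transient unit name the journal shows for
    // each sandbox, "cri-containerd-<container-id>.scope".
    func scopeName(containerID string) (string, error) {
        if !idRe.MatchString(containerID) {
            return "", fmt.Errorf("unexpected container id %q", containerID)
        }
        return "cri-containerd-" + containerID + ".scope", nil
    }

    func main() {
        name, err := scopeName("4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba")
        if err != nil {
            panic(err)
        }
        fmt.Println(name)
    }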
Jan 29 12:03:44.111580 containerd[1705]: time="2025-01-29T12:03:44.111518630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9q52,Uid:1aa7e374-f593-4448-a0b1-28887de63262,Namespace:calico-system,Attempt:1,} returns sandbox id \"4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba\"" Jan 29 12:03:44.113538 containerd[1705]: time="2025-01-29T12:03:44.113319373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:03:44.767679 containerd[1705]: time="2025-01-29T12:03:44.767593123Z" level=info msg="StopPodSandbox for \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\"" Jan 29 12:03:44.769083 containerd[1705]: time="2025-01-29T12:03:44.768842053Z" level=info msg="StopPodSandbox for \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\"" Jan 29 12:03:44.854380 systemd[1]: run-containerd-runc-k8s.io-4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba-runc.SBRqIu.mount: Deactivated successfully. Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.858 [INFO][4736] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.860 [INFO][4736] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" iface="eth0" netns="/var/run/netns/cni-d374651c-ddcc-19c8-6423-3e625213c659" Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.861 [INFO][4736] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" iface="eth0" netns="/var/run/netns/cni-d374651c-ddcc-19c8-6423-3e625213c659" Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.863 [INFO][4736] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" iface="eth0" netns="/var/run/netns/cni-d374651c-ddcc-19c8-6423-3e625213c659" Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.864 [INFO][4736] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.864 [INFO][4736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.897 [INFO][4752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" HandleID="k8s-pod-network.19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.897 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.897 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.903 [WARNING][4752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" HandleID="k8s-pod-network.19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.903 [INFO][4752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" HandleID="k8s-pod-network.19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.905 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:44.908556 containerd[1705]: 2025-01-29 12:03:44.907 [INFO][4736] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:03:44.911209 containerd[1705]: time="2025-01-29T12:03:44.910122254Z" level=info msg="TearDown network for sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\" successfully" Jan 29 12:03:44.911209 containerd[1705]: time="2025-01-29T12:03:44.910599166Z" level=info msg="StopPodSandbox for \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\" returns successfully" Jan 29 12:03:44.914219 systemd[1]: run-netns-cni\x2dd374651c\x2dddcc\x2d19c8\x2d6423\x2d3e625213c659.mount: Deactivated successfully. Jan 29 12:03:44.918355 containerd[1705]: time="2025-01-29T12:03:44.916728113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575bfcd495-6cwr2,Uid:eb717d4e-afc9-4dca-98ec-64897dda3bda,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.862 [INFO][4737] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.864 [INFO][4737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" iface="eth0" netns="/var/run/netns/cni-9bdaa710-0329-76b3-a74d-40c59982754f" Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.866 [INFO][4737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" iface="eth0" netns="/var/run/netns/cni-9bdaa710-0329-76b3-a74d-40c59982754f" Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.867 [INFO][4737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" iface="eth0" netns="/var/run/netns/cni-9bdaa710-0329-76b3-a74d-40c59982754f" Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.867 [INFO][4737] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.867 [INFO][4737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.899 [INFO][4756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" HandleID="k8s-pod-network.0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.899 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.905 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.917 [WARNING][4756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" HandleID="k8s-pod-network.0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.917 [INFO][4756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" HandleID="k8s-pod-network.0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.919 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:44.923157 containerd[1705]: 2025-01-29 12:03:44.920 [INFO][4737] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:03:44.924020 containerd[1705]: time="2025-01-29T12:03:44.923316672Z" level=info msg="TearDown network for sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\" successfully" Jan 29 12:03:44.924020 containerd[1705]: time="2025-01-29T12:03:44.923344572Z" level=info msg="StopPodSandbox for \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\" returns successfully" Jan 29 12:03:44.927142 containerd[1705]: time="2025-01-29T12:03:44.926719654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575bfcd495-kmblb,Uid:adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:03:44.928330 systemd[1]: run-netns-cni\x2d9bdaa710\x2d0329\x2d76b3\x2da74d\x2d40c59982754f.mount: Deactivated successfully. 
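Both teardowns end with the WARNING "Asked to release address but it doesn't exist. Ignoring": the veth and the IPAM handle were already gone, and the release treats that as success, which keeps a repeated CNI DEL harmless. A sketch of that idempotent-release contract (not Calico's implementation):

    package main

    import (
        "fmt"
        "sync"
    )

    // ipamStore maps handle IDs to assigned IPs; releasing a handle
    // that is already gone is deliberately not an error.
    type ipamStore struct {
        mu      sync.Mutex
        handles map[string]string // handleID -> assigned IP
    }

    func (s *ipamStore) releaseByHandle(handle string) {
        s.mu.Lock()
        defer s.mu.Unlock()
        if _, ok := s.handles[handle]; !ok {
            // Treat a missing handle as success so CNI DEL stays idempotent.
            fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handle)
            return
        }
        delete(s.handles, handle)
    }

    func main() {
        s := &ipamStore{handles: map[string]string{}}
        s.releaseByHandle("k8s-pod-network.19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4")
    }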
Jan 29 12:03:45.194068 systemd-networkd[1452]: cali1fd7a02a84d: Link UP Jan 29 12:03:45.195449 systemd-networkd[1452]: cali1fd7a02a84d: Gained carrier Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.074 [INFO][4764] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.089 [INFO][4764] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0 calico-apiserver-575bfcd495- calico-apiserver eb717d4e-afc9-4dca-98ec-64897dda3bda 776 0 2025-01-29 12:03:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:575bfcd495 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-56ab0c4267 calico-apiserver-575bfcd495-6cwr2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1fd7a02a84d [] []}} ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-6cwr2" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.089 [INFO][4764] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-6cwr2" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.135 [INFO][4786] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" HandleID="k8s-pod-network.7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.156 [INFO][4786] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" HandleID="k8s-pod-network.7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-56ab0c4267", "pod":"calico-apiserver-575bfcd495-6cwr2", "timestamp":"2025-01-29 12:03:45.135585082 +0000 UTC"}, Hostname:"ci-4081.3.0-a-56ab0c4267", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.156 [INFO][4786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.156 [INFO][4786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
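Each ADD begins by noting "File /var/lib/calico/mtu does not exist" and proceeding anyway, so the MTU file is an optional override rather than a requirement. A sketch of that read-with-fallback pattern; the 1500 default below is an assumption for the example, not a value taken from this log:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // readMTU returns the MTU from the given file, or the fallback
    // when the file is absent or unparseable.
    func readMTU(path string, fallback int) int {
        data, err := os.ReadFile(path)
        if err != nil {
            return fallback // file absent: use the default
        }
        mtu, err := strconv.Atoi(strings.TrimSpace(string(data)))
        if err != nil {
            return fallback
        }
        return mtu
    }

    func main() {
        fmt.Println(readMTU("/var/lib/calico/mtu", 1500))
    }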
Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.156 [INFO][4786] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-56ab0c4267' Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.161 [INFO][4786] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.164 [INFO][4786] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.169 [INFO][4786] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.171 [INFO][4786] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.173 [INFO][4786] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.173 [INFO][4786] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.175 [INFO][4786] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62 Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.179 [INFO][4786] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.188 [INFO][4786] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.130/26] block=192.168.70.128/26 handle="k8s-pod-network.7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.188 [INFO][4786] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.130/26] handle="k8s-pod-network.7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.188 [INFO][4786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
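All of these assignments come from the single block 192.168.70.128/26, which spans 64 addresses (.128 through .191), so .129, .130, .131 and so on can be handed out before the host needs a second block. A quick standard-library check of that range:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.70.128/26")
        size := 1 << (32 - block.Bits())
        fmt.Printf("block %s holds %d addresses\n", block, size) // 64

        // The assigned pod IPs fall inside the block; .192 is the
        // first address of the next /26.
        for _, s := range []string{"192.168.70.129", "192.168.70.130", "192.168.70.191", "192.168.70.192"} {
            ip := netip.MustParseAddr(s)
            fmt.Printf("%s in block: %v\n", ip, block.Contains(ip))
        }
    }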
Jan 29 12:03:45.218766 containerd[1705]: 2025-01-29 12:03:45.188 [INFO][4786] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.130/26] IPv6=[] ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" HandleID="k8s-pod-network.7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:03:45.220023 containerd[1705]: 2025-01-29 12:03:45.190 [INFO][4764] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-6cwr2" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0", GenerateName:"calico-apiserver-575bfcd495-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb717d4e-afc9-4dca-98ec-64897dda3bda", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575bfcd495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"", Pod:"calico-apiserver-575bfcd495-6cwr2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1fd7a02a84d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:45.220023 containerd[1705]: 2025-01-29 12:03:45.190 [INFO][4764] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.130/32] ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-6cwr2" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:03:45.220023 containerd[1705]: 2025-01-29 12:03:45.190 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1fd7a02a84d ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-6cwr2" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:03:45.220023 containerd[1705]: 2025-01-29 12:03:45.192 [INFO][4764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-6cwr2" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:03:45.220023 containerd[1705]: 2025-01-29 12:03:45.192 [INFO][4764] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-6cwr2" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0", GenerateName:"calico-apiserver-575bfcd495-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb717d4e-afc9-4dca-98ec-64897dda3bda", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575bfcd495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62", Pod:"calico-apiserver-575bfcd495-6cwr2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1fd7a02a84d", MAC:"3a:ab:b3:11:d8:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:45.220023 containerd[1705]: 2025-01-29 12:03:45.216 [INFO][4764] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-6cwr2" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:03:45.264602 containerd[1705]: time="2025-01-29T12:03:45.262983748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:45.264602 containerd[1705]: time="2025-01-29T12:03:45.263137152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:45.264602 containerd[1705]: time="2025-01-29T12:03:45.263168553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:45.264602 containerd[1705]: time="2025-01-29T12:03:45.263376258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:45.310704 systemd[1]: Started cri-containerd-7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62.scope - libcontainer container 7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62. 
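The host-side interface names in these entries (cali740fbeb7b52, cali1fd7a02a84d) are "cali" plus 11 hex characters, which Calico derives from a hash of the workload's identity so the name is stable and collision-resistant across restarts. The sketch below assumes a SHA-1 of a namespace/pod string as the hash input; the exact input is not shown in this log:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName approximates the observed naming: "cali" plus the
    // first 11 hex characters of a hash of the workload key. The key
    // format here is an assumption for illustration.
    func vethName(workloadKey string) string {
        sum := sha1.Sum([]byte(workloadKey))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethName("calico-apiserver/calico-apiserver-575bfcd495-6cwr2"))
    }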
Jan 29 12:03:45.315416 systemd-networkd[1452]: cali7778c9d9a75: Link UP Jan 29 12:03:45.316532 systemd-networkd[1452]: cali7778c9d9a75: Gained carrier Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.094 [INFO][4770] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.108 [INFO][4770] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0 calico-apiserver-575bfcd495- calico-apiserver adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd 777 0 2025-01-29 12:03:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:575bfcd495 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-56ab0c4267 calico-apiserver-575bfcd495-kmblb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7778c9d9a75 [] []}} ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-kmblb" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.109 [INFO][4770] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-kmblb" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.150 [INFO][4791] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" HandleID="k8s-pod-network.903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.162 [INFO][4791] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" HandleID="k8s-pod-network.903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-56ab0c4267", "pod":"calico-apiserver-575bfcd495-kmblb", "timestamp":"2025-01-29 12:03:45.150819848 +0000 UTC"}, Hostname:"ci-4081.3.0-a-56ab0c4267", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.162 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.188 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
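The long AutoAssignArgs and WorkloadEndpoint dumps in these entries are Go values rendered with fmt's Go-syntax verb, which is why pointers appear as (*string)(0xc000305420) and labels as map[string]string{...}. A tiny reproduction of the format (field names copied from the log; the handle value is a placeholder):

    package main

    import "fmt"

    // AutoAssignArgs mirrors a few of the fields dumped in the log.
    type AutoAssignArgs struct {
        Num4     int
        Num6     int
        HandleID *string
        Attrs    map[string]string
    }

    func main() {
        h := "k8s-pod-network.example"
        args := AutoAssignArgs{
            Num4:     1,
            Num6:     0,
            HandleID: &h,
            Attrs: map[string]string{
                "node": "ci-4081.3.0-a-56ab0c4267",
                "pod":  "calico-apiserver-575bfcd495-kmblb",
            },
        }
        // %#v prints the Go-syntax form seen in the journal entries.
        fmt.Printf("%#v\n", args)
    }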
Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.188 [INFO][4791] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-56ab0c4267' Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.260 [INFO][4791] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.273 [INFO][4791] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.279 [INFO][4791] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.283 [INFO][4791] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.287 [INFO][4791] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.288 [INFO][4791] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.291 [INFO][4791] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.298 [INFO][4791] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.310 [INFO][4791] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.131/26] block=192.168.70.128/26 handle="k8s-pod-network.903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.310 [INFO][4791] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.131/26] handle="k8s-pod-network.903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.310 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:03:45.339903 containerd[1705]: 2025-01-29 12:03:45.310 [INFO][4791] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.131/26] IPv6=[] ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" HandleID="k8s-pod-network.903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:03:45.342153 containerd[1705]: 2025-01-29 12:03:45.312 [INFO][4770] cni-plugin/k8s.go 386: Populated endpoint ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-kmblb" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0", GenerateName:"calico-apiserver-575bfcd495-", Namespace:"calico-apiserver", SelfLink:"", UID:"adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575bfcd495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"", Pod:"calico-apiserver-575bfcd495-kmblb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7778c9d9a75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:45.342153 containerd[1705]: 2025-01-29 12:03:45.312 [INFO][4770] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.131/32] ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-kmblb" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:03:45.342153 containerd[1705]: 2025-01-29 12:03:45.312 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7778c9d9a75 ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-kmblb" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:03:45.342153 containerd[1705]: 2025-01-29 12:03:45.317 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-kmblb" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:03:45.342153 containerd[1705]: 2025-01-29 12:03:45.318 [INFO][4770] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-kmblb" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0", GenerateName:"calico-apiserver-575bfcd495-", Namespace:"calico-apiserver", SelfLink:"", UID:"adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575bfcd495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f", Pod:"calico-apiserver-575bfcd495-kmblb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7778c9d9a75", MAC:"32:95:55:45:2a:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:45.342153 containerd[1705]: 2025-01-29 12:03:45.336 [INFO][4770] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f" Namespace="calico-apiserver" Pod="calico-apiserver-575bfcd495-kmblb" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:03:45.390781 containerd[1705]: time="2025-01-29T12:03:45.390630521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:45.390781 containerd[1705]: time="2025-01-29T12:03:45.390702923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:45.390781 containerd[1705]: time="2025-01-29T12:03:45.390718823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:45.392195 containerd[1705]: time="2025-01-29T12:03:45.391597944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:45.407225 containerd[1705]: time="2025-01-29T12:03:45.407178320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575bfcd495-6cwr2,Uid:eb717d4e-afc9-4dca-98ec-64897dda3bda,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62\"" Jan 29 12:03:45.422678 systemd[1]: Started cri-containerd-903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f.scope - libcontainer container 903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f. Jan 29 12:03:45.486099 containerd[1705]: time="2025-01-29T12:03:45.485828713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575bfcd495-kmblb,Uid:adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f\"" Jan 29 12:03:45.607294 containerd[1705]: time="2025-01-29T12:03:45.607235935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:45.610478 containerd[1705]: time="2025-01-29T12:03:45.610381911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 12:03:45.615746 containerd[1705]: time="2025-01-29T12:03:45.615666538Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:45.620701 containerd[1705]: time="2025-01-29T12:03:45.620633858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:45.621504 containerd[1705]: time="2025-01-29T12:03:45.621342675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.507982101s" Jan 29 12:03:45.621504 containerd[1705]: time="2025-01-29T12:03:45.621386176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 12:03:45.623491 containerd[1705]: time="2025-01-29T12:03:45.623345423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:03:45.624547 containerd[1705]: time="2025-01-29T12:03:45.624514751Z" level=info msg="CreateContainer within sandbox \"4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:03:45.681726 containerd[1705]: time="2025-01-29T12:03:45.681661127Z" level=info msg="CreateContainer within sandbox \"4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a5eb210f03f456a753d65086cfce4a9df3394686fbaf12ead05a94d9e02cb51f\"" Jan 29 12:03:45.684436 containerd[1705]: time="2025-01-29T12:03:45.682752853Z" level=info msg="StartContainer for \"a5eb210f03f456a753d65086cfce4a9df3394686fbaf12ead05a94d9e02cb51f\"" Jan 29 12:03:45.690690 
systemd-networkd[1452]: cali740fbeb7b52: Gained IPv6LL Jan 29 12:03:45.726661 systemd[1]: Started cri-containerd-a5eb210f03f456a753d65086cfce4a9df3394686fbaf12ead05a94d9e02cb51f.scope - libcontainer container a5eb210f03f456a753d65086cfce4a9df3394686fbaf12ead05a94d9e02cb51f. Jan 29 12:03:45.766705 containerd[1705]: time="2025-01-29T12:03:45.764755927Z" level=info msg="StartContainer for \"a5eb210f03f456a753d65086cfce4a9df3394686fbaf12ead05a94d9e02cb51f\" returns successfully" Jan 29 12:03:45.769738 containerd[1705]: time="2025-01-29T12:03:45.768864726Z" level=info msg="StopPodSandbox for \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\"" Jan 29 12:03:45.771042 containerd[1705]: time="2025-01-29T12:03:45.770561367Z" level=info msg="StopPodSandbox for \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\"" Jan 29 12:03:45.787846 containerd[1705]: time="2025-01-29T12:03:45.787652879Z" level=info msg="StopPodSandbox for \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\"" Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:45.953 [INFO][4991] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:45.953 [INFO][4991] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" iface="eth0" netns="/var/run/netns/cni-af36d6a4-7a21-d7a4-b46d-3d38c3daa7f9" Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:45.953 [INFO][4991] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" iface="eth0" netns="/var/run/netns/cni-af36d6a4-7a21-d7a4-b46d-3d38c3daa7f9" Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:45.954 [INFO][4991] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" iface="eth0" netns="/var/run/netns/cni-af36d6a4-7a21-d7a4-b46d-3d38c3daa7f9" Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:45.954 [INFO][4991] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:45.954 [INFO][4991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:46.016 [INFO][5019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" HandleID="k8s-pod-network.f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:46.018 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:46.018 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:46.026 [WARNING][5019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" HandleID="k8s-pod-network.f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:46.026 [INFO][5019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" HandleID="k8s-pod-network.f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:46.028 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:46.033573 containerd[1705]: 2025-01-29 12:03:46.029 [INFO][4991] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:03:46.033573 containerd[1705]: time="2025-01-29T12:03:46.031675853Z" level=info msg="TearDown network for sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\" successfully" Jan 29 12:03:46.033573 containerd[1705]: time="2025-01-29T12:03:46.031711054Z" level=info msg="StopPodSandbox for \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\" returns successfully" Jan 29 12:03:46.037367 containerd[1705]: time="2025-01-29T12:03:46.035741451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9dd89bbdf-wbdmw,Uid:bc6fb3df-feb3-4fa2-8362-572d6f010fd6,Namespace:calico-system,Attempt:1,}" Jan 29 12:03:46.038217 systemd[1]: run-netns-cni\x2daf36d6a4\x2d7a21\x2dd7a4\x2db46d\x2d3d38c3daa7f9.mount: Deactivated successfully. Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:45.959 [INFO][4993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:45.960 [INFO][4993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" iface="eth0" netns="/var/run/netns/cni-1d14ee19-79b1-ae08-e7ec-b0724705bed4" Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:45.961 [INFO][4993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" iface="eth0" netns="/var/run/netns/cni-1d14ee19-79b1-ae08-e7ec-b0724705bed4" Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:45.962 [INFO][4993] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" iface="eth0" netns="/var/run/netns/cni-1d14ee19-79b1-ae08-e7ec-b0724705bed4" Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:45.963 [INFO][4993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:45.963 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:46.018 [INFO][5020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" HandleID="k8s-pod-network.08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:46.018 [INFO][5020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:46.028 [INFO][5020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:46.043 [WARNING][5020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" HandleID="k8s-pod-network.08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:46.043 [INFO][5020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" HandleID="k8s-pod-network.08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:46.045 [INFO][5020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:46.048425 containerd[1705]: 2025-01-29 12:03:46.047 [INFO][4993] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:03:46.052389 containerd[1705]: time="2025-01-29T12:03:46.051586032Z" level=info msg="TearDown network for sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\" successfully" Jan 29 12:03:46.052389 containerd[1705]: time="2025-01-29T12:03:46.051621233Z" level=info msg="StopPodSandbox for \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\" returns successfully" Jan 29 12:03:46.054399 containerd[1705]: time="2025-01-29T12:03:46.052808262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vv8px,Uid:af2b2dc1-1a49-441f-a299-db8f0e304159,Namespace:kube-system,Attempt:1,}" Jan 29 12:03:46.053981 systemd[1]: run-netns-cni\x2d1d14ee19\x2d79b1\x2dae08\x2de7ec\x2db0724705bed4.mount: Deactivated successfully. Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:45.960 [INFO][4992] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:45.962 [INFO][4992] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" iface="eth0" netns="/var/run/netns/cni-02695f3d-a230-d5bd-b13d-3f251ad66d36" Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:45.964 [INFO][4992] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" iface="eth0" netns="/var/run/netns/cni-02695f3d-a230-d5bd-b13d-3f251ad66d36" Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:45.965 [INFO][4992] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" iface="eth0" netns="/var/run/netns/cni-02695f3d-a230-d5bd-b13d-3f251ad66d36" Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:45.965 [INFO][4992] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:45.965 [INFO][4992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:46.027 [INFO][5021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" HandleID="k8s-pod-network.658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:46.027 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:46.045 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:46.068 [WARNING][5021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" HandleID="k8s-pod-network.658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:46.068 [INFO][5021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" HandleID="k8s-pod-network.658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:46.071 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:46.075770 containerd[1705]: 2025-01-29 12:03:46.074 [INFO][4992] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:03:46.077064 containerd[1705]: time="2025-01-29T12:03:46.075946719Z" level=info msg="TearDown network for sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\" successfully" Jan 29 12:03:46.077064 containerd[1705]: time="2025-01-29T12:03:46.075978919Z" level=info msg="StopPodSandbox for \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\" returns successfully" Jan 29 12:03:46.077064 containerd[1705]: time="2025-01-29T12:03:46.076803439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hpkrn,Uid:94cd193a-be29-45dd-83ae-68e9d8fc5a60,Namespace:kube-system,Attempt:1,}" Jan 29 12:03:46.080365 systemd[1]: run-netns-cni\x2d02695f3d\x2da230\x2dd5bd\x2db13d\x2d3f251ad66d36.mount: Deactivated successfully. Jan 29 12:03:46.267989 systemd-networkd[1452]: cali36da9fc5eaa: Link UP Jan 29 12:03:46.273111 systemd-networkd[1452]: cali36da9fc5eaa: Gained carrier Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.125 [INFO][5039] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.137 [INFO][5039] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0 calico-kube-controllers-9dd89bbdf- calico-system bc6fb3df-feb3-4fa2-8362-572d6f010fd6 795 0 2025-01-29 12:03:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9dd89bbdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-56ab0c4267 calico-kube-controllers-9dd89bbdf-wbdmw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali36da9fc5eaa [] []}} ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Namespace="calico-system" Pod="calico-kube-controllers-9dd89bbdf-wbdmw" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.137 [INFO][5039] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Namespace="calico-system" Pod="calico-kube-controllers-9dd89bbdf-wbdmw" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.180 [INFO][5051] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" HandleID="k8s-pod-network.bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.200 [INFO][5051] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" HandleID="k8s-pod-network.bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319810), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-56ab0c4267", "pod":"calico-kube-controllers-9dd89bbdf-wbdmw", "timestamp":"2025-01-29 12:03:46.180754542 +0000 UTC"}, Hostname:"ci-4081.3.0-a-56ab0c4267", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.200 [INFO][5051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.200 [INFO][5051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.200 [INFO][5051] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-56ab0c4267' Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.205 [INFO][5051] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.210 [INFO][5051] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.219 [INFO][5051] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.222 [INFO][5051] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.227 [INFO][5051] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.227 [INFO][5051] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.231 [INFO][5051] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96 Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.242 [INFO][5051] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.257 [INFO][5051] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.132/26] block=192.168.70.128/26 handle="k8s-pod-network.bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.257 [INFO][5051] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.132/26] handle="k8s-pod-network.bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.257 [INFO][5051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:03:46.306215 containerd[1705]: 2025-01-29 12:03:46.257 [INFO][5051] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.132/26] IPv6=[] ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" HandleID="k8s-pod-network.bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:03:46.307222 containerd[1705]: 2025-01-29 12:03:46.260 [INFO][5039] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Namespace="calico-system" Pod="calico-kube-controllers-9dd89bbdf-wbdmw" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0", GenerateName:"calico-kube-controllers-9dd89bbdf-", Namespace:"calico-system", SelfLink:"", UID:"bc6fb3df-feb3-4fa2-8362-572d6f010fd6", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9dd89bbdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"", Pod:"calico-kube-controllers-9dd89bbdf-wbdmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36da9fc5eaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:46.307222 containerd[1705]: 2025-01-29 12:03:46.262 [INFO][5039] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.132/32] ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Namespace="calico-system" Pod="calico-kube-controllers-9dd89bbdf-wbdmw" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:03:46.307222 containerd[1705]: 2025-01-29 12:03:46.262 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36da9fc5eaa ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Namespace="calico-system" Pod="calico-kube-controllers-9dd89bbdf-wbdmw" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:03:46.307222 containerd[1705]: 2025-01-29 12:03:46.275 [INFO][5039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Namespace="calico-system" Pod="calico-kube-controllers-9dd89bbdf-wbdmw" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:03:46.307222 containerd[1705]: 
2025-01-29 12:03:46.279 [INFO][5039] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Namespace="calico-system" Pod="calico-kube-controllers-9dd89bbdf-wbdmw" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0", GenerateName:"calico-kube-controllers-9dd89bbdf-", Namespace:"calico-system", SelfLink:"", UID:"bc6fb3df-feb3-4fa2-8362-572d6f010fd6", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9dd89bbdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96", Pod:"calico-kube-controllers-9dd89bbdf-wbdmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36da9fc5eaa", MAC:"1a:9b:95:a3:86:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:46.307222 containerd[1705]: 2025-01-29 12:03:46.303 [INFO][5039] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96" Namespace="calico-system" Pod="calico-kube-controllers-9dd89bbdf-wbdmw" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:03:46.361256 containerd[1705]: time="2025-01-29T12:03:46.360784275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:46.361256 containerd[1705]: time="2025-01-29T12:03:46.360868177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:46.361256 containerd[1705]: time="2025-01-29T12:03:46.360889678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:46.361256 containerd[1705]: time="2025-01-29T12:03:46.360986780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:46.388978 systemd-networkd[1452]: calia0b26dacfb4: Link UP Jan 29 12:03:46.390058 systemd[1]: Started cri-containerd-bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96.scope - libcontainer container bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96. 
Jan 29 12:03:46.392836 systemd-networkd[1452]: calia0b26dacfb4: Gained carrier Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.202 [INFO][5056] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.220 [INFO][5056] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0 coredns-6f6b679f8f- kube-system af2b2dc1-1a49-441f-a299-db8f0e304159 796 0 2025-01-29 12:03:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-56ab0c4267 coredns-6f6b679f8f-vv8px eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia0b26dacfb4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-vv8px" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.220 [INFO][5056] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-vv8px" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.315 [INFO][5081] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" HandleID="k8s-pod-network.8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.327 [INFO][5081] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" HandleID="k8s-pod-network.8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319a50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-56ab0c4267", "pod":"coredns-6f6b679f8f-vv8px", "timestamp":"2025-01-29 12:03:46.315514186 +0000 UTC"}, Hostname:"ci-4081.3.0-a-56ab0c4267", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.329 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.329 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.329 [INFO][5081] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-56ab0c4267' Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.332 [INFO][5081] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.338 [INFO][5081] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.349 [INFO][5081] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.352 [INFO][5081] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.354 [INFO][5081] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.354 [INFO][5081] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.356 [INFO][5081] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.364 [INFO][5081] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.375 [INFO][5081] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.133/26] block=192.168.70.128/26 handle="k8s-pod-network.8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.375 [INFO][5081] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.133/26] handle="k8s-pod-network.8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.375 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:03:46.416381 containerd[1705]: 2025-01-29 12:03:46.375 [INFO][5081] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.133/26] IPv6=[] ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" HandleID="k8s-pod-network.8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:03:46.419839 containerd[1705]: 2025-01-29 12:03:46.379 [INFO][5056] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-vv8px" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"af2b2dc1-1a49-441f-a299-db8f0e304159", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"", Pod:"coredns-6f6b679f8f-vv8px", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0b26dacfb4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:46.419839 containerd[1705]: 2025-01-29 12:03:46.379 [INFO][5056] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.133/32] ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-vv8px" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:03:46.419839 containerd[1705]: 2025-01-29 12:03:46.379 [INFO][5056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0b26dacfb4 ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-vv8px" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:03:46.419839 containerd[1705]: 2025-01-29 12:03:46.392 [INFO][5056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-vv8px" 
WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:03:46.419839 containerd[1705]: 2025-01-29 12:03:46.394 [INFO][5056] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-vv8px" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"af2b2dc1-1a49-441f-a299-db8f0e304159", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd", Pod:"coredns-6f6b679f8f-vv8px", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0b26dacfb4", MAC:"92:91:af:f6:39:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:46.419839 containerd[1705]: 2025-01-29 12:03:46.414 [INFO][5056] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-vv8px" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:03:46.475783 containerd[1705]: time="2025-01-29T12:03:46.474835921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:46.475783 containerd[1705]: time="2025-01-29T12:03:46.474918723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:46.475783 containerd[1705]: time="2025-01-29T12:03:46.474947824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:46.475783 containerd[1705]: time="2025-01-29T12:03:46.475069426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:46.481408 containerd[1705]: time="2025-01-29T12:03:46.481337077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9dd89bbdf-wbdmw,Uid:bc6fb3df-feb3-4fa2-8362-572d6f010fd6,Namespace:calico-system,Attempt:1,} returns sandbox id \"bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96\"" Jan 29 12:03:46.505819 systemd[1]: Started cri-containerd-8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd.scope - libcontainer container 8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd. Jan 29 12:03:46.507900 systemd-networkd[1452]: cali94aff382200: Link UP Jan 29 12:03:46.510667 systemd-networkd[1452]: cali94aff382200: Gained carrier Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.234 [INFO][5067] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.260 [INFO][5067] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0 coredns-6f6b679f8f- kube-system 94cd193a-be29-45dd-83ae-68e9d8fc5a60 797 0 2025-01-29 12:03:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-56ab0c4267 coredns-6f6b679f8f-hpkrn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali94aff382200 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpkrn" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.260 [INFO][5067] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpkrn" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.334 [INFO][5089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" HandleID="k8s-pod-network.9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.351 [INFO][5089] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" HandleID="k8s-pod-network.9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc480), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-56ab0c4267", "pod":"coredns-6f6b679f8f-hpkrn", "timestamp":"2025-01-29 12:03:46.334881952 +0000 UTC"}, Hostname:"ci-4081.3.0-a-56ab0c4267", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.351 [INFO][5089] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.375 [INFO][5089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.378 [INFO][5089] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-56ab0c4267' Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.434 [INFO][5089] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.441 [INFO][5089] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.452 [INFO][5089] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.455 [INFO][5089] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.459 [INFO][5089] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.459 [INFO][5089] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.462 [INFO][5089] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.472 [INFO][5089] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.494 [INFO][5089] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.134/26] block=192.168.70.128/26 handle="k8s-pod-network.9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.494 [INFO][5089] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.134/26] handle="k8s-pod-network.9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" host="ci-4081.3.0-a-56ab0c4267" Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.494 [INFO][5089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:03:46.539180 containerd[1705]: 2025-01-29 12:03:46.494 [INFO][5089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.134/26] IPv6=[] ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" HandleID="k8s-pod-network.9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:03:46.541634 containerd[1705]: 2025-01-29 12:03:46.498 [INFO][5067] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpkrn" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"94cd193a-be29-45dd-83ae-68e9d8fc5a60", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"", Pod:"coredns-6f6b679f8f-hpkrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94aff382200", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:46.541634 containerd[1705]: 2025-01-29 12:03:46.498 [INFO][5067] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.134/32] ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpkrn" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:03:46.541634 containerd[1705]: 2025-01-29 12:03:46.498 [INFO][5067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94aff382200 ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpkrn" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:03:46.541634 containerd[1705]: 2025-01-29 12:03:46.511 [INFO][5067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpkrn" 
WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:03:46.541634 containerd[1705]: 2025-01-29 12:03:46.513 [INFO][5067] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpkrn" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"94cd193a-be29-45dd-83ae-68e9d8fc5a60", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d", Pod:"coredns-6f6b679f8f-hpkrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94aff382200", MAC:"ba:a7:d2:aa:f8:8a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:46.541634 containerd[1705]: 2025-01-29 12:03:46.536 [INFO][5067] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d" Namespace="kube-system" Pod="coredns-6f6b679f8f-hpkrn" WorkloadEndpoint="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:03:46.576678 containerd[1705]: time="2025-01-29T12:03:46.576501068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vv8px,Uid:af2b2dc1-1a49-441f-a299-db8f0e304159,Namespace:kube-system,Attempt:1,} returns sandbox id \"8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd\"" Jan 29 12:03:46.584633 containerd[1705]: time="2025-01-29T12:03:46.584580763Z" level=info msg="CreateContainer within sandbox \"8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:03:46.586740 systemd-networkd[1452]: cali1fd7a02a84d: Gained IPv6LL Jan 29 12:03:46.590700 containerd[1705]: time="2025-01-29T12:03:46.585720290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:46.590700 containerd[1705]: time="2025-01-29T12:03:46.585817992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:46.590700 containerd[1705]: time="2025-01-29T12:03:46.585841193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:46.590700 containerd[1705]: time="2025-01-29T12:03:46.585983596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:46.614758 systemd[1]: Started cri-containerd-9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d.scope - libcontainer container 9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d. Jan 29 12:03:46.646978 containerd[1705]: time="2025-01-29T12:03:46.646912063Z" level=info msg="CreateContainer within sandbox \"8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8511c1cda3648fecf15b51d1e708a7d9876275acf0a146331aada8bb1536928\"" Jan 29 12:03:46.648624 containerd[1705]: time="2025-01-29T12:03:46.648243695Z" level=info msg="StartContainer for \"f8511c1cda3648fecf15b51d1e708a7d9876275acf0a146331aada8bb1536928\"" Jan 29 12:03:46.672983 containerd[1705]: time="2025-01-29T12:03:46.672460778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hpkrn,Uid:94cd193a-be29-45dd-83ae-68e9d8fc5a60,Namespace:kube-system,Attempt:1,} returns sandbox id \"9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d\"" Jan 29 12:03:46.682491 containerd[1705]: time="2025-01-29T12:03:46.682424818Z" level=info msg="CreateContainer within sandbox \"9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:03:46.688708 systemd[1]: Started cri-containerd-f8511c1cda3648fecf15b51d1e708a7d9876275acf0a146331aada8bb1536928.scope - libcontainer container f8511c1cda3648fecf15b51d1e708a7d9876275acf0a146331aada8bb1536928. Jan 29 12:03:46.723783 containerd[1705]: time="2025-01-29T12:03:46.723736113Z" level=info msg="StartContainer for \"f8511c1cda3648fecf15b51d1e708a7d9876275acf0a146331aada8bb1536928\" returns successfully" Jan 29 12:03:46.729130 containerd[1705]: time="2025-01-29T12:03:46.729049340Z" level=info msg="CreateContainer within sandbox \"9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0b8dd7733495c352824af0ce597e285eca576729cd9103dd321a60aacf31dce\"" Jan 29 12:03:46.730113 containerd[1705]: time="2025-01-29T12:03:46.729970263Z" level=info msg="StartContainer for \"d0b8dd7733495c352824af0ce597e285eca576729cd9103dd321a60aacf31dce\"" Jan 29 12:03:46.784825 systemd[1]: Started cri-containerd-d0b8dd7733495c352824af0ce597e285eca576729cd9103dd321a60aacf31dce.scope - libcontainer container d0b8dd7733495c352824af0ce597e285eca576729cd9103dd321a60aacf31dce. 
Jan 29 12:03:46.846762 containerd[1705]: time="2025-01-29T12:03:46.846626471Z" level=info msg="StartContainer for \"d0b8dd7733495c352824af0ce597e285eca576729cd9103dd321a60aacf31dce\" returns successfully" Jan 29 12:03:46.862102 kubelet[3185]: I0129 12:03:46.862069 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:03:47.106113 kubelet[3185]: I0129 12:03:47.105899 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vv8px" podStartSLOduration=38.105869045 podStartE2EDuration="38.105869045s" podCreationTimestamp="2025-01-29 12:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:47.072855067 +0000 UTC m=+42.411043250" watchObservedRunningTime="2025-01-29 12:03:47.105869045 +0000 UTC m=+42.444057628" Jan 29 12:03:47.166340 systemd-networkd[1452]: cali7778c9d9a75: Gained IPv6LL Jan 29 12:03:47.810495 kernel: bpftool[5383]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:03:47.995670 systemd-networkd[1452]: cali94aff382200: Gained IPv6LL Jan 29 12:03:48.058674 systemd-networkd[1452]: cali36da9fc5eaa: Gained IPv6LL Jan 29 12:03:48.123801 systemd-networkd[1452]: calia0b26dacfb4: Gained IPv6LL Jan 29 12:03:48.558802 systemd-networkd[1452]: vxlan.calico: Link UP Jan 29 12:03:48.558813 systemd-networkd[1452]: vxlan.calico: Gained carrier Jan 29 12:03:48.934013 containerd[1705]: time="2025-01-29T12:03:48.933935721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:48.937394 containerd[1705]: time="2025-01-29T12:03:48.937102895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 12:03:48.940369 containerd[1705]: time="2025-01-29T12:03:48.940325871Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:48.949867 containerd[1705]: time="2025-01-29T12:03:48.949761394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:48.951743 containerd[1705]: time="2025-01-29T12:03:48.951569936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.328176212s" Jan 29 12:03:48.951743 containerd[1705]: time="2025-01-29T12:03:48.951615037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:03:48.954708 containerd[1705]: time="2025-01-29T12:03:48.954341001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:03:48.954900 containerd[1705]: time="2025-01-29T12:03:48.954838713Z" level=info msg="CreateContainer within sandbox \"7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" 
Jan 29 12:03:49.002691 containerd[1705]: time="2025-01-29T12:03:49.002641040Z" level=info msg="CreateContainer within sandbox \"7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f24be954c89fe511dbf5e0f30a81698b563e4db18f865dbfa590bca57a20ad7\"" Jan 29 12:03:49.004922 containerd[1705]: time="2025-01-29T12:03:49.003404458Z" level=info msg="StartContainer for \"9f24be954c89fe511dbf5e0f30a81698b563e4db18f865dbfa590bca57a20ad7\"" Jan 29 12:03:49.046800 systemd[1]: Started cri-containerd-9f24be954c89fe511dbf5e0f30a81698b563e4db18f865dbfa590bca57a20ad7.scope - libcontainer container 9f24be954c89fe511dbf5e0f30a81698b563e4db18f865dbfa590bca57a20ad7. Jan 29 12:03:49.112723 containerd[1705]: time="2025-01-29T12:03:49.112670632Z" level=info msg="StartContainer for \"9f24be954c89fe511dbf5e0f30a81698b563e4db18f865dbfa590bca57a20ad7\" returns successfully" Jan 29 12:03:49.283584 containerd[1705]: time="2025-01-29T12:03:49.283395955Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:49.288385 containerd[1705]: time="2025-01-29T12:03:49.287812659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:03:49.290379 containerd[1705]: time="2025-01-29T12:03:49.290332719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 335.950416ms" Jan 29 12:03:49.290510 containerd[1705]: time="2025-01-29T12:03:49.290385920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:03:49.292371 containerd[1705]: time="2025-01-29T12:03:49.292339766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:03:49.293204 containerd[1705]: time="2025-01-29T12:03:49.293177186Z" level=info msg="CreateContainer within sandbox \"903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:03:49.342759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542719737.mount: Deactivated successfully. Jan 29 12:03:49.345710 containerd[1705]: time="2025-01-29T12:03:49.345197411Z" level=info msg="CreateContainer within sandbox \"903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ab8f137ec107873c391a8895a490d8fc93bfb5afeb0a515e3604f4b96894bb83\"" Jan 29 12:03:49.346418 containerd[1705]: time="2025-01-29T12:03:49.346317438Z" level=info msg="StartContainer for \"ab8f137ec107873c391a8895a490d8fc93bfb5afeb0a515e3604f4b96894bb83\"" Jan 29 12:03:49.400704 systemd[1]: Started cri-containerd-ab8f137ec107873c391a8895a490d8fc93bfb5afeb0a515e3604f4b96894bb83.scope - libcontainer container ab8f137ec107873c391a8895a490d8fc93bfb5afeb0a515e3604f4b96894bb83. 
Jan 29 12:03:49.542928 containerd[1705]: time="2025-01-29T12:03:49.542098751Z" level=info msg="StartContainer for \"ab8f137ec107873c391a8895a490d8fc93bfb5afeb0a515e3604f4b96894bb83\" returns successfully" Jan 29 12:03:50.075683 kubelet[3185]: I0129 12:03:50.074886 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hpkrn" podStartSLOduration=41.074863005 podStartE2EDuration="41.074863005s" podCreationTimestamp="2025-01-29 12:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:47.129962312 +0000 UTC m=+42.468150395" watchObservedRunningTime="2025-01-29 12:03:50.074863005 +0000 UTC m=+45.413051188" Jan 29 12:03:50.091158 kubelet[3185]: I0129 12:03:50.091089 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-575bfcd495-6cwr2" podStartSLOduration=30.547800198 podStartE2EDuration="34.091064487s" podCreationTimestamp="2025-01-29 12:03:16 +0000 UTC" firstStartedPulling="2025-01-29 12:03:45.40970858 +0000 UTC m=+40.747896663" lastFinishedPulling="2025-01-29 12:03:48.952972869 +0000 UTC m=+44.291160952" observedRunningTime="2025-01-29 12:03:50.089656854 +0000 UTC m=+45.427844937" watchObservedRunningTime="2025-01-29 12:03:50.091064487 +0000 UTC m=+45.429252670" Jan 29 12:03:50.092197 kubelet[3185]: I0129 12:03:50.091458 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-575bfcd495-kmblb" podStartSLOduration=30.289037842 podStartE2EDuration="34.091442696s" podCreationTimestamp="2025-01-29 12:03:16 +0000 UTC" firstStartedPulling="2025-01-29 12:03:45.489095592 +0000 UTC m=+40.827283675" lastFinishedPulling="2025-01-29 12:03:49.291500346 +0000 UTC m=+44.629688529" observedRunningTime="2025-01-29 12:03:50.075736226 +0000 UTC m=+45.413924309" watchObservedRunningTime="2025-01-29 12:03:50.091442696 +0000 UTC m=+45.429630779" Jan 29 12:03:50.556374 systemd-networkd[1452]: vxlan.calico: Gained IPv6LL Jan 29 12:03:50.897931 containerd[1705]: time="2025-01-29T12:03:50.897862398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:50.902019 containerd[1705]: time="2025-01-29T12:03:50.901731489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 12:03:50.905765 containerd[1705]: time="2025-01-29T12:03:50.905682482Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:50.910608 containerd[1705]: time="2025-01-29T12:03:50.910526096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:50.911374 containerd[1705]: time="2025-01-29T12:03:50.911152211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", 
size \"11994117\" in 1.618770244s" Jan 29 12:03:50.911374 containerd[1705]: time="2025-01-29T12:03:50.911201512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 12:03:50.912895 containerd[1705]: time="2025-01-29T12:03:50.912665747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:03:50.914134 containerd[1705]: time="2025-01-29T12:03:50.913966078Z" level=info msg="CreateContainer within sandbox \"4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:03:50.964962 containerd[1705]: time="2025-01-29T12:03:50.964898578Z" level=info msg="CreateContainer within sandbox \"4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7bd031833d6371f532be38e315ba92fad31dc8a45327086c870d14e21a438f7f\"" Jan 29 12:03:50.965833 containerd[1705]: time="2025-01-29T12:03:50.965744498Z" level=info msg="StartContainer for \"7bd031833d6371f532be38e315ba92fad31dc8a45327086c870d14e21a438f7f\"" Jan 29 12:03:51.008569 systemd[1]: run-containerd-runc-k8s.io-7bd031833d6371f532be38e315ba92fad31dc8a45327086c870d14e21a438f7f-runc.tVOrSQ.mount: Deactivated successfully. Jan 29 12:03:51.016619 systemd[1]: Started cri-containerd-7bd031833d6371f532be38e315ba92fad31dc8a45327086c870d14e21a438f7f.scope - libcontainer container 7bd031833d6371f532be38e315ba92fad31dc8a45327086c870d14e21a438f7f. Jan 29 12:03:51.050637 containerd[1705]: time="2025-01-29T12:03:51.050587097Z" level=info msg="StartContainer for \"7bd031833d6371f532be38e315ba92fad31dc8a45327086c870d14e21a438f7f\" returns successfully" Jan 29 12:03:51.067672 kubelet[3185]: I0129 12:03:51.067621 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:03:51.083693 kubelet[3185]: I0129 12:03:51.082211 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w9q52" podStartSLOduration=27.282735365 podStartE2EDuration="34.082187541s" podCreationTimestamp="2025-01-29 12:03:17 +0000 UTC" firstStartedPulling="2025-01-29 12:03:44.112970465 +0000 UTC m=+39.451158548" lastFinishedPulling="2025-01-29 12:03:50.912422541 +0000 UTC m=+46.250610724" observedRunningTime="2025-01-29 12:03:51.081506525 +0000 UTC m=+46.419694708" watchObservedRunningTime="2025-01-29 12:03:51.082187541 +0000 UTC m=+46.420375624" Jan 29 12:03:51.876021 kubelet[3185]: I0129 12:03:51.875961 3185 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:03:51.876021 kubelet[3185]: I0129 12:03:51.876006 3185 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:03:52.967521 containerd[1705]: time="2025-01-29T12:03:52.967441265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:52.969919 containerd[1705]: time="2025-01-29T12:03:52.969812021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 12:03:52.974852 containerd[1705]: 
time="2025-01-29T12:03:52.974782038Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:52.979989 containerd[1705]: time="2025-01-29T12:03:52.979893459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:52.981260 containerd[1705]: time="2025-01-29T12:03:52.980794080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.068089531s" Jan 29 12:03:52.981260 containerd[1705]: time="2025-01-29T12:03:52.980838881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 12:03:52.991051 containerd[1705]: time="2025-01-29T12:03:52.990994920Z" level=info msg="CreateContainer within sandbox \"bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:03:53.041105 containerd[1705]: time="2025-01-29T12:03:53.041036399Z" level=info msg="CreateContainer within sandbox \"bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9b4cf6d4c82bbce8bdbf321371f5a4d5c7cad0b61265d8bd7277bf1f6a200786\"" Jan 29 12:03:53.041957 containerd[1705]: time="2025-01-29T12:03:53.041775317Z" level=info msg="StartContainer for \"9b4cf6d4c82bbce8bdbf321371f5a4d5c7cad0b61265d8bd7277bf1f6a200786\"" Jan 29 12:03:53.077627 systemd[1]: Started cri-containerd-9b4cf6d4c82bbce8bdbf321371f5a4d5c7cad0b61265d8bd7277bf1f6a200786.scope - libcontainer container 9b4cf6d4c82bbce8bdbf321371f5a4d5c7cad0b61265d8bd7277bf1f6a200786. 
Jan 29 12:03:53.123866 containerd[1705]: time="2025-01-29T12:03:53.123814950Z" level=info msg="StartContainer for \"9b4cf6d4c82bbce8bdbf321371f5a4d5c7cad0b61265d8bd7277bf1f6a200786\" returns successfully" Jan 29 12:03:54.095938 kubelet[3185]: I0129 12:03:54.095850 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9dd89bbdf-wbdmw" podStartSLOduration=30.597216273 podStartE2EDuration="37.095822054s" podCreationTimestamp="2025-01-29 12:03:17 +0000 UTC" firstStartedPulling="2025-01-29 12:03:46.483190922 +0000 UTC m=+41.821379105" lastFinishedPulling="2025-01-29 12:03:52.981796803 +0000 UTC m=+48.319984886" observedRunningTime="2025-01-29 12:03:54.094427221 +0000 UTC m=+49.432615404" watchObservedRunningTime="2025-01-29 12:03:54.095822054 +0000 UTC m=+49.434010537" Jan 29 12:03:55.084718 kubelet[3185]: I0129 12:03:55.084664 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:03:56.940050 kubelet[3185]: I0129 12:03:56.939408 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:03:59.493829 kubelet[3185]: I0129 12:03:59.493500 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:04.786651 containerd[1705]: time="2025-01-29T12:04:04.786598954Z" level=info msg="StopPodSandbox for \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\"" Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.830 [WARNING][5722] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"94cd193a-be29-45dd-83ae-68e9d8fc5a60", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d", Pod:"coredns-6f6b679f8f-hpkrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94aff382200", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:04.892152 
containerd[1705]: 2025-01-29 12:04:04.830 [INFO][5722] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.830 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" iface="eth0" netns="" Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.830 [INFO][5722] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.830 [INFO][5722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.876 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" HandleID="k8s-pod-network.658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.876 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.876 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.886 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" HandleID="k8s-pod-network.658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.886 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" HandleID="k8s-pod-network.658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.888 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:04.892152 containerd[1705]: 2025-01-29 12:04:04.890 [INFO][5722] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:04:04.892152 containerd[1705]: time="2025-01-29T12:04:04.891650322Z" level=info msg="TearDown network for sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\" successfully" Jan 29 12:04:04.892152 containerd[1705]: time="2025-01-29T12:04:04.891687723Z" level=info msg="StopPodSandbox for \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\" returns successfully" Jan 29 12:04:04.894867 containerd[1705]: time="2025-01-29T12:04:04.893839567Z" level=info msg="RemovePodSandbox for \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\"" Jan 29 12:04:04.894867 containerd[1705]: time="2025-01-29T12:04:04.893897869Z" level=info msg="Forcibly stopping sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\"" Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:04.965 [WARNING][5746] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"94cd193a-be29-45dd-83ae-68e9d8fc5a60", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"9c25537604c6a42ee31f3a9e56540640b6932887b067af535ecd062b4bca8c2d", Pod:"coredns-6f6b679f8f-hpkrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94aff382200", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:04.966 [INFO][5746] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:04.966 [INFO][5746] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" iface="eth0" netns="" Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:04.966 [INFO][5746] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:04.966 [INFO][5746] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:05.051 [INFO][5752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" HandleID="k8s-pod-network.658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:05.054 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:05.055 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:05.066 [WARNING][5752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" HandleID="k8s-pod-network.658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:05.066 [INFO][5752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" HandleID="k8s-pod-network.658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--hpkrn-eth0" Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:05.068 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:05.073876 containerd[1705]: 2025-01-29 12:04:05.069 [INFO][5746] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241" Jan 29 12:04:05.076081 containerd[1705]: time="2025-01-29T12:04:05.073863283Z" level=info msg="TearDown network for sandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\" successfully" Jan 29 12:04:06.055841 containerd[1705]: time="2025-01-29T12:04:06.055761351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:06.056409 containerd[1705]: time="2025-01-29T12:04:06.055882454Z" level=info msg="RemovePodSandbox \"658cb418a11bca1234838ae129b4611b03841bed5a79dea580dc9f1a47709241\" returns successfully" Jan 29 12:04:06.056833 containerd[1705]: time="2025-01-29T12:04:06.056781072Z" level=info msg="StopPodSandbox for \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\"" Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.102 [WARNING][5772] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"af2b2dc1-1a49-441f-a299-db8f0e304159", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd", Pod:"coredns-6f6b679f8f-vv8px", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0b26dacfb4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.103 [INFO][5772] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.103 [INFO][5772] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" iface="eth0" netns="" Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.103 [INFO][5772] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.103 [INFO][5772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.132 [INFO][5778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" HandleID="k8s-pod-network.08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.132 [INFO][5778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.132 [INFO][5778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.137 [WARNING][5778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" HandleID="k8s-pod-network.08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.137 [INFO][5778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" HandleID="k8s-pod-network.08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.138 [INFO][5778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:06.140341 containerd[1705]: 2025-01-29 12:04:06.139 [INFO][5772] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:04:06.141265 containerd[1705]: time="2025-01-29T12:04:06.140393898Z" level=info msg="TearDown network for sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\" successfully" Jan 29 12:04:06.141265 containerd[1705]: time="2025-01-29T12:04:06.140425799Z" level=info msg="StopPodSandbox for \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\" returns successfully" Jan 29 12:04:06.141265 containerd[1705]: time="2025-01-29T12:04:06.141169114Z" level=info msg="RemovePodSandbox for \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\"" Jan 29 12:04:06.141265 containerd[1705]: time="2025-01-29T12:04:06.141204815Z" level=info msg="Forcibly stopping sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\"" Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.177 [WARNING][5796] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"af2b2dc1-1a49-441f-a299-db8f0e304159", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"8def089dd73329c7d7870a1427ce8504f6a8c6e38245b6977aa0e58c91a523bd", Pod:"coredns-6f6b679f8f-vv8px", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0b26dacfb4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.177 [INFO][5796] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.177 [INFO][5796] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" iface="eth0" netns="" Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.177 [INFO][5796] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.177 [INFO][5796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.198 [INFO][5803] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" HandleID="k8s-pod-network.08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.198 [INFO][5803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.198 [INFO][5803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.205 [WARNING][5803] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" HandleID="k8s-pod-network.08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.205 [INFO][5803] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" HandleID="k8s-pod-network.08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Workload="ci--4081.3.0--a--56ab0c4267-k8s-coredns--6f6b679f8f--vv8px-eth0" Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.207 [INFO][5803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:06.209309 containerd[1705]: 2025-01-29 12:04:06.208 [INFO][5796] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0" Jan 29 12:04:06.210103 containerd[1705]: time="2025-01-29T12:04:06.209321721Z" level=info msg="TearDown network for sandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\" successfully" Jan 29 12:04:06.217602 containerd[1705]: time="2025-01-29T12:04:06.217548291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:06.217748 containerd[1705]: time="2025-01-29T12:04:06.217632693Z" level=info msg="RemovePodSandbox \"08af20d51682016015060908f293677deb51c4370dfba014c51b447d12021cf0\" returns successfully" Jan 29 12:04:06.218240 containerd[1705]: time="2025-01-29T12:04:06.218208605Z" level=info msg="StopPodSandbox for \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\"" Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.259 [WARNING][5821] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0", GenerateName:"calico-kube-controllers-9dd89bbdf-", Namespace:"calico-system", SelfLink:"", UID:"bc6fb3df-feb3-4fa2-8362-572d6f010fd6", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9dd89bbdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96", Pod:"calico-kube-controllers-9dd89bbdf-wbdmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36da9fc5eaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.260 [INFO][5821] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.260 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" iface="eth0" netns="" Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.260 [INFO][5821] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.260 [INFO][5821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.283 [INFO][5827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" HandleID="k8s-pod-network.f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.283 [INFO][5827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.283 [INFO][5827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.288 [WARNING][5827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" HandleID="k8s-pod-network.f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.288 [INFO][5827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" HandleID="k8s-pod-network.f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.290 [INFO][5827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:06.292229 containerd[1705]: 2025-01-29 12:04:06.291 [INFO][5821] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:04:06.292229 containerd[1705]: time="2025-01-29T12:04:06.292200232Z" level=info msg="TearDown network for sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\" successfully" Jan 29 12:04:06.292229 containerd[1705]: time="2025-01-29T12:04:06.292238633Z" level=info msg="StopPodSandbox for \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\" returns successfully" Jan 29 12:04:06.293433 containerd[1705]: time="2025-01-29T12:04:06.292945347Z" level=info msg="RemovePodSandbox for \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\"" Jan 29 12:04:06.293433 containerd[1705]: time="2025-01-29T12:04:06.292983248Z" level=info msg="Forcibly stopping sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\"" Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.325 [WARNING][5846] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0", GenerateName:"calico-kube-controllers-9dd89bbdf-", Namespace:"calico-system", SelfLink:"", UID:"bc6fb3df-feb3-4fa2-8362-572d6f010fd6", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9dd89bbdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"bafa5b3b37f68783f4f8809efb2dd2fc798d52508eeb86ee841e97ff0981af96", Pod:"calico-kube-controllers-9dd89bbdf-wbdmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36da9fc5eaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.325 [INFO][5846] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.325 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" iface="eth0" netns="" Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.325 [INFO][5846] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.325 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.358 [INFO][5852] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" HandleID="k8s-pod-network.f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.359 [INFO][5852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.359 [INFO][5852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.368 [WARNING][5852] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" HandleID="k8s-pod-network.f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.368 [INFO][5852] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" HandleID="k8s-pod-network.f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--kube--controllers--9dd89bbdf--wbdmw-eth0" Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.370 [INFO][5852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:06.373193 containerd[1705]: 2025-01-29 12:04:06.371 [INFO][5846] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29" Jan 29 12:04:06.373893 containerd[1705]: time="2025-01-29T12:04:06.373197804Z" level=info msg="TearDown network for sandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\" successfully" Jan 29 12:04:06.383199 containerd[1705]: time="2025-01-29T12:04:06.383140509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:06.383350 containerd[1705]: time="2025-01-29T12:04:06.383219111Z" level=info msg="RemovePodSandbox \"f9af1dfa2b297f2ed282bd5e715c324aab72f5d96079f311e3dde0d637499f29\" returns successfully" Jan 29 12:04:06.383836 containerd[1705]: time="2025-01-29T12:04:06.383806423Z" level=info msg="StopPodSandbox for \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\"" Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.421 [WARNING][5870] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1aa7e374-f593-4448-a0b1-28887de63262", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba", Pod:"csi-node-driver-w9q52", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali740fbeb7b52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.421 [INFO][5870] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.421 [INFO][5870] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" iface="eth0" netns="" Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.421 [INFO][5870] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.421 [INFO][5870] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.443 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" HandleID="k8s-pod-network.c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.443 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.443 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.449 [WARNING][5876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" HandleID="k8s-pod-network.c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.449 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" HandleID="k8s-pod-network.c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.451 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:06.453806 containerd[1705]: 2025-01-29 12:04:06.452 [INFO][5870] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:04:06.454641 containerd[1705]: time="2025-01-29T12:04:06.453869469Z" level=info msg="TearDown network for sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\" successfully" Jan 29 12:04:06.454641 containerd[1705]: time="2025-01-29T12:04:06.453903370Z" level=info msg="StopPodSandbox for \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\" returns successfully" Jan 29 12:04:06.454782 containerd[1705]: time="2025-01-29T12:04:06.454700286Z" level=info msg="RemovePodSandbox for \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\"" Jan 29 12:04:06.454782 containerd[1705]: time="2025-01-29T12:04:06.454738587Z" level=info msg="Forcibly stopping sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\"" Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.489 [WARNING][5895] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1aa7e374-f593-4448-a0b1-28887de63262", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"4dd6fb9ed369936c550b7b5f4db6660084db023b868ee9b4ede37ffb460affba", Pod:"csi-node-driver-w9q52", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali740fbeb7b52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.490 [INFO][5895] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.490 [INFO][5895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" iface="eth0" netns="" Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.490 [INFO][5895] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.490 [INFO][5895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.508 [INFO][5901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" HandleID="k8s-pod-network.c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.508 [INFO][5901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.508 [INFO][5901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.515 [WARNING][5901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" HandleID="k8s-pod-network.c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.515 [INFO][5901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" HandleID="k8s-pod-network.c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Workload="ci--4081.3.0--a--56ab0c4267-k8s-csi--node--driver--w9q52-eth0" Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.517 [INFO][5901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:06.519082 containerd[1705]: 2025-01-29 12:04:06.518 [INFO][5895] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223" Jan 29 12:04:06.519856 containerd[1705]: time="2025-01-29T12:04:06.519136416Z" level=info msg="TearDown network for sandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\" successfully" Jan 29 12:04:06.529348 containerd[1705]: time="2025-01-29T12:04:06.528999120Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:06.529348 containerd[1705]: time="2025-01-29T12:04:06.529192824Z" level=info msg="RemovePodSandbox \"c049e1b1f745d14444e0e1f6acb0a70c55ce4a989f03234a04bc79824c309223\" returns successfully" Jan 29 12:04:06.529856 containerd[1705]: time="2025-01-29T12:04:06.529824537Z" level=info msg="StopPodSandbox for \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\"" Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.568 [WARNING][5919] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0", GenerateName:"calico-apiserver-575bfcd495-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb717d4e-afc9-4dca-98ec-64897dda3bda", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575bfcd495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62", Pod:"calico-apiserver-575bfcd495-6cwr2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1fd7a02a84d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.569 [INFO][5919] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.569 [INFO][5919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" iface="eth0" netns="" Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.569 [INFO][5919] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.569 [INFO][5919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.593 [INFO][5925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" HandleID="k8s-pod-network.19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.593 [INFO][5925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.593 [INFO][5925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.598 [WARNING][5925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" HandleID="k8s-pod-network.19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.599 [INFO][5925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" HandleID="k8s-pod-network.19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.601 [INFO][5925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:06.603240 containerd[1705]: 2025-01-29 12:04:06.602 [INFO][5919] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:04:06.604278 containerd[1705]: time="2025-01-29T12:04:06.603337054Z" level=info msg="TearDown network for sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\" successfully" Jan 29 12:04:06.604278 containerd[1705]: time="2025-01-29T12:04:06.603375355Z" level=info msg="StopPodSandbox for \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\" returns successfully" Jan 29 12:04:06.604278 containerd[1705]: time="2025-01-29T12:04:06.604130571Z" level=info msg="RemovePodSandbox for \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\"" Jan 29 12:04:06.604278 containerd[1705]: time="2025-01-29T12:04:06.604167071Z" level=info msg="Forcibly stopping sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\"" Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.639 [WARNING][5943] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0", GenerateName:"calico-apiserver-575bfcd495-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb717d4e-afc9-4dca-98ec-64897dda3bda", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575bfcd495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"7ce27eab6cd6967676e6f5a18373257577d03545df0854663e3a6c86e82a0f62", Pod:"calico-apiserver-575bfcd495-6cwr2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1fd7a02a84d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.639 [INFO][5943] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.639 [INFO][5943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" iface="eth0" netns="" Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.639 [INFO][5943] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.639 [INFO][5943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.665 [INFO][5949] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" HandleID="k8s-pod-network.19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.665 [INFO][5949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.665 [INFO][5949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.672 [WARNING][5949] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" HandleID="k8s-pod-network.19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.672 [INFO][5949] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" HandleID="k8s-pod-network.19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--6cwr2-eth0" Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.674 [INFO][5949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:06.677234 containerd[1705]: 2025-01-29 12:04:06.675 [INFO][5943] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4" Jan 29 12:04:06.677234 containerd[1705]: time="2025-01-29T12:04:06.676897573Z" level=info msg="TearDown network for sandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\" successfully" Jan 29 12:04:06.689501 containerd[1705]: time="2025-01-29T12:04:06.689366130Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:06.690059 containerd[1705]: time="2025-01-29T12:04:06.689460432Z" level=info msg="RemovePodSandbox \"19c9d116b486ac7284eee38f37734c5b48b0083580b7dd5885a4677295a177b4\" returns successfully" Jan 29 12:04:06.690375 containerd[1705]: time="2025-01-29T12:04:06.690349450Z" level=info msg="StopPodSandbox for \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\"" Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.744 [WARNING][5967] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0", GenerateName:"calico-apiserver-575bfcd495-", Namespace:"calico-apiserver", SelfLink:"", UID:"adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575bfcd495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f", Pod:"calico-apiserver-575bfcd495-kmblb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7778c9d9a75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.744 [INFO][5967] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.744 [INFO][5967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" iface="eth0" netns="" Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.744 [INFO][5967] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.744 [INFO][5967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.781 [INFO][5974] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" HandleID="k8s-pod-network.0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.781 [INFO][5974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.781 [INFO][5974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.788 [WARNING][5974] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" HandleID="k8s-pod-network.0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.788 [INFO][5974] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" HandleID="k8s-pod-network.0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.790 [INFO][5974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:06.794092 containerd[1705]: 2025-01-29 12:04:06.792 [INFO][5967] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:04:06.795129 containerd[1705]: time="2025-01-29T12:04:06.794559301Z" level=info msg="TearDown network for sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\" successfully" Jan 29 12:04:06.795129 containerd[1705]: time="2025-01-29T12:04:06.794610802Z" level=info msg="StopPodSandbox for \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\" returns successfully" Jan 29 12:04:06.796566 containerd[1705]: time="2025-01-29T12:04:06.796039132Z" level=info msg="RemovePodSandbox for \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\"" Jan 29 12:04:06.796566 containerd[1705]: time="2025-01-29T12:04:06.796078233Z" level=info msg="Forcibly stopping sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\"" Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.839 [WARNING][5992] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0", GenerateName:"calico-apiserver-575bfcd495-", Namespace:"calico-apiserver", SelfLink:"", UID:"adbd1f21-4a0c-4bf4-85d7-11a4bbf7f3bd", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575bfcd495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-56ab0c4267", ContainerID:"903656e3ce88c0fec12437c46715b17f83f5f700d15b1f79bde37a01f5e8fa8f", Pod:"calico-apiserver-575bfcd495-kmblb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7778c9d9a75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.839 [INFO][5992] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.840 [INFO][5992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" iface="eth0" netns="" Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.840 [INFO][5992] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.840 [INFO][5992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.860 [INFO][5999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" HandleID="k8s-pod-network.0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.860 [INFO][5999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.860 [INFO][5999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.865 [WARNING][5999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" HandleID="k8s-pod-network.0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.866 [INFO][5999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" HandleID="k8s-pod-network.0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Workload="ci--4081.3.0--a--56ab0c4267-k8s-calico--apiserver--575bfcd495--kmblb-eth0" Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.867 [INFO][5999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:06.869367 containerd[1705]: 2025-01-29 12:04:06.868 [INFO][5992] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548" Jan 29 12:04:06.870050 containerd[1705]: time="2025-01-29T12:04:06.869422947Z" level=info msg="TearDown network for sandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\" successfully" Jan 29 12:04:06.879241 containerd[1705]: time="2025-01-29T12:04:06.879187048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:06.879386 containerd[1705]: time="2025-01-29T12:04:06.879277450Z" level=info msg="RemovePodSandbox \"0aafd7f54f5d83883a32afff818138461130023d802948c8ce06e50404e62548\" returns successfully" Jan 29 12:04:26.968583 systemd[1]: run-containerd-runc-k8s.io-9b4cf6d4c82bbce8bdbf321371f5a4d5c7cad0b61265d8bd7277bf1f6a200786-runc.4Oracb.mount: Deactivated successfully. Jan 29 12:04:37.905803 systemd[1]: Started sshd@7-10.200.8.17:22-10.200.16.10:56356.service - OpenSSH per-connection server daemon (10.200.16.10:56356). Jan 29 12:04:38.550851 sshd[6087]: Accepted publickey for core from 10.200.16.10 port 56356 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:04:38.552615 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:38.557659 systemd-logind[1684]: New session 10 of user core. Jan 29 12:04:38.563633 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:04:39.086033 sshd[6087]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:39.089631 systemd[1]: sshd@7-10.200.8.17:22-10.200.16.10:56356.service: Deactivated successfully. Jan 29 12:04:39.092178 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:04:39.093917 systemd-logind[1684]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:04:39.095081 systemd-logind[1684]: Removed session 10. Jan 29 12:04:44.205799 systemd[1]: Started sshd@8-10.200.8.17:22-10.200.16.10:56368.service - OpenSSH per-connection server daemon (10.200.16.10:56368). Jan 29 12:04:44.854762 sshd[6104]: Accepted publickey for core from 10.200.16.10 port 56368 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:04:44.856330 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:44.860631 systemd-logind[1684]: New session 11 of user core. Jan 29 12:04:44.865630 systemd[1]: Started session-11.scope - Session 11 of User core. 
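The two teardown passes above (StopPodSandbox, then the forcible RemovePodSandbox) both walk Calico's CNI DEL path: serialize behind the host-wide IPAM lock, try to release the allocation by its handleID, fall back to the workloadID, and log-and-ignore an address that is already gone. That last branch is what makes the second pass harmless, and the later "Failed to get podSandbox status ... not found" warning is the containerd side of the same idempotency: the sandbox is already gone when the event is emitted, so containerd sends it with a nil status instead of failing. A minimal Go sketch of the release pattern, with hypothetical names throughout (this is not libcalico-go's actual API):

package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("allocation not found")

// ipamStore is a toy stand-in for Calico's IPAM backend; the mutex plays
// the role of the "host-wide IPAM lock" acquired and released in the log.
type ipamStore struct {
	mu       sync.Mutex
	byHandle map[string][]string // handle or workload ID -> allocated IPs
}

func (s *ipamStore) releaseByHandle(id string) error {
	ips, ok := s.byHandle[id]
	if !ok {
		return errNotFound
	}
	delete(s.byHandle, id)
	fmt.Printf("released %v for %s\n", ips, id)
	return nil
}

// release mirrors the order of operations in the log: take the lock, try
// the handleID, fall back to the workloadID, and treat a missing
// allocation as success so repeated teardowns stay idempotent.
func (s *ipamStore) release(handleID, workloadID string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if err := s.releaseByHandle(handleID); !errors.Is(err, errNotFound) {
		return
	}
	if err := s.releaseByHandle(workloadID); errors.Is(err, errNotFound) {
		fmt.Println("asked to release address but it doesn't exist; ignoring")
	}
}

func main() {
	s := &ipamStore{byHandle: map[string][]string{}}
	s.byHandle["k8s-pod-network.0aafd7f5"] = []string{"192.168.70.131/32"}
	// The second release of the same handle hits the "ignoring" path,
	// just like the WARNING lines above.
	s.release("k8s-pod-network.0aafd7f5", "calico-apiserver-575bfcd495-kmblb")
	s.release("k8s-pod-network.0aafd7f5", "calico-apiserver-575bfcd495-kmblb")
}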
Jan 29 12:04:45.381167 sshd[6104]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:45.385266 systemd[1]: sshd@8-10.200.8.17:22-10.200.16.10:56368.service: Deactivated successfully. Jan 29 12:04:45.387656 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:04:45.389004 systemd-logind[1684]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:04:45.390179 systemd-logind[1684]: Removed session 11. Jan 29 12:04:50.500805 systemd[1]: Started sshd@9-10.200.8.17:22-10.200.16.10:38406.service - OpenSSH per-connection server daemon (10.200.16.10:38406). Jan 29 12:04:51.153500 sshd[6118]: Accepted publickey for core from 10.200.16.10 port 38406 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:04:51.155324 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:51.160151 systemd-logind[1684]: New session 12 of user core. Jan 29 12:04:51.169693 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:04:51.674502 sshd[6118]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:51.678368 systemd[1]: sshd@9-10.200.8.17:22-10.200.16.10:38406.service: Deactivated successfully. Jan 29 12:04:51.681125 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:04:51.683052 systemd-logind[1684]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:04:51.684421 systemd-logind[1684]: Removed session 12. Jan 29 12:04:56.791805 systemd[1]: Started sshd@10-10.200.8.17:22-10.200.16.10:34986.service - OpenSSH per-connection server daemon (10.200.16.10:34986). Jan 29 12:04:57.441837 sshd[6155]: Accepted publickey for core from 10.200.16.10 port 34986 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:04:57.443351 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:57.448161 systemd-logind[1684]: New session 13 of user core. Jan 29 12:04:57.450673 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:04:57.961704 sshd[6155]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:57.966015 systemd[1]: sshd@10-10.200.8.17:22-10.200.16.10:34986.service: Deactivated successfully. Jan 29 12:04:57.968639 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:04:57.969460 systemd-logind[1684]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:04:57.970647 systemd-logind[1684]: Removed session 13. Jan 29 12:05:03.082834 systemd[1]: Started sshd@11-10.200.8.17:22-10.200.16.10:34996.service - OpenSSH per-connection server daemon (10.200.16.10:34996). Jan 29 12:05:03.735401 sshd[6188]: Accepted publickey for core from 10.200.16.10 port 34996 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:03.736391 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:03.741457 systemd-logind[1684]: New session 14 of user core. Jan 29 12:05:03.748649 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:05:04.254783 sshd[6188]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:04.259965 systemd[1]: sshd@11-10.200.8.17:22-10.200.16.10:34996.service: Deactivated successfully. Jan 29 12:05:04.263763 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:05:04.264967 systemd-logind[1684]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:05:04.266070 systemd-logind[1684]: Removed session 14. 
Jan 29 12:05:04.374829 systemd[1]: Started sshd@12-10.200.8.17:22-10.200.16.10:35000.service - OpenSSH per-connection server daemon (10.200.16.10:35000). Jan 29 12:05:05.018022 sshd[6202]: Accepted publickey for core from 10.200.16.10 port 35000 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:05.019702 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:05.025668 systemd-logind[1684]: New session 15 of user core. Jan 29 12:05:05.031666 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:05:05.577215 sshd[6202]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:05.580660 systemd[1]: sshd@12-10.200.8.17:22-10.200.16.10:35000.service: Deactivated successfully. Jan 29 12:05:05.583277 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:05:05.585289 systemd-logind[1684]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:05:05.586419 systemd-logind[1684]: Removed session 15. Jan 29 12:05:05.696799 systemd[1]: Started sshd@13-10.200.8.17:22-10.200.16.10:35012.service - OpenSSH per-connection server daemon (10.200.16.10:35012). Jan 29 12:05:06.342007 sshd[6214]: Accepted publickey for core from 10.200.16.10 port 35012 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:06.343709 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:06.348809 systemd-logind[1684]: New session 16 of user core. Jan 29 12:05:06.357730 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:05:06.869411 sshd[6214]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:06.873156 systemd[1]: sshd@13-10.200.8.17:22-10.200.16.10:35012.service: Deactivated successfully. Jan 29 12:05:06.875857 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:05:06.878124 systemd-logind[1684]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:05:06.879412 systemd-logind[1684]: Removed session 16. Jan 29 12:05:11.988800 systemd[1]: Started sshd@14-10.200.8.17:22-10.200.16.10:58814.service - OpenSSH per-connection server daemon (10.200.16.10:58814). Jan 29 12:05:12.637315 sshd[6236]: Accepted publickey for core from 10.200.16.10 port 58814 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:12.638948 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:12.643926 systemd-logind[1684]: New session 17 of user core. Jan 29 12:05:12.648651 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:05:13.156225 sshd[6236]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:13.161202 systemd[1]: sshd@14-10.200.8.17:22-10.200.16.10:58814.service: Deactivated successfully. Jan 29 12:05:13.163436 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:05:13.164720 systemd-logind[1684]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:05:13.165809 systemd-logind[1684]: Removed session 17. Jan 29 12:05:18.276789 systemd[1]: Started sshd@15-10.200.8.17:22-10.200.16.10:47074.service - OpenSSH per-connection server daemon (10.200.16.10:47074). 
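Each sshd@N-10.200.8.17:22-10.200.16.10:PORT.service instance above is stamped out per connection from a template unit: Flatcar runs OpenSSH behind a systemd socket unit with Accept=yes, so systemd accepts each TCP connection itself and names the resulting short-lived service after the connection counter plus the local and remote endpoints. A rough sketch of such a socket/template pair (conventional options, not copied from this host):

# sshd.socket -- listens on 22; Accept=yes forks one service per connection
[Socket]
ListenStream=22
Accept=yes

# sshd@.service -- the template behind the sshd@... instances above
[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket

With -i, sshd handles exactly one inetd-style connection on stdin and exits when the session ends, which is why every logout above is followed by "sshd@....service: Deactivated successfully."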
Jan 29 12:05:18.926854 sshd[6253]: Accepted publickey for core from 10.200.16.10 port 47074 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:18.928441 sshd[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:18.932729 systemd-logind[1684]: New session 18 of user core. Jan 29 12:05:18.940654 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:05:19.454074 sshd[6253]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:19.458375 systemd-logind[1684]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:05:19.459925 systemd[1]: sshd@15-10.200.8.17:22-10.200.16.10:47074.service: Deactivated successfully. Jan 29 12:05:19.462371 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:05:19.464212 systemd-logind[1684]: Removed session 18. Jan 29 12:05:24.580858 systemd[1]: Started sshd@16-10.200.8.17:22-10.200.16.10:47080.service - OpenSSH per-connection server daemon (10.200.16.10:47080). Jan 29 12:05:25.226229 sshd[6318]: Accepted publickey for core from 10.200.16.10 port 47080 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:25.227950 sshd[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:25.233576 systemd-logind[1684]: New session 19 of user core. Jan 29 12:05:25.237703 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:05:25.746105 sshd[6318]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:25.751730 systemd[1]: sshd@16-10.200.8.17:22-10.200.16.10:47080.service: Deactivated successfully. Jan 29 12:05:25.755743 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:05:25.756579 systemd-logind[1684]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:05:25.757601 systemd-logind[1684]: Removed session 19. Jan 29 12:05:30.869199 systemd[1]: Started sshd@17-10.200.8.17:22-10.200.16.10:42416.service - OpenSSH per-connection server daemon (10.200.16.10:42416). Jan 29 12:05:31.522610 sshd[6354]: Accepted publickey for core from 10.200.16.10 port 42416 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:31.524196 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:31.528400 systemd-logind[1684]: New session 20 of user core. Jan 29 12:05:31.532652 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:05:32.046558 sshd[6354]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:32.051455 systemd[1]: sshd@17-10.200.8.17:22-10.200.16.10:42416.service: Deactivated successfully. Jan 29 12:05:32.053937 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:05:32.054812 systemd-logind[1684]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:05:32.056022 systemd-logind[1684]: Removed session 20. Jan 29 12:05:32.165823 systemd[1]: Started sshd@18-10.200.8.17:22-10.200.16.10:42420.service - OpenSSH per-connection server daemon (10.200.16.10:42420). Jan 29 12:05:32.811508 sshd[6367]: Accepted publickey for core from 10.200.16.10 port 42420 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:32.812489 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:32.817296 systemd-logind[1684]: New session 21 of user core. Jan 29 12:05:32.826672 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 29 12:05:33.390936 sshd[6367]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:33.394837 systemd[1]: sshd@18-10.200.8.17:22-10.200.16.10:42420.service: Deactivated successfully. Jan 29 12:05:33.397425 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:05:33.399926 systemd-logind[1684]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:05:33.400985 systemd-logind[1684]: Removed session 21. Jan 29 12:05:33.513798 systemd[1]: Started sshd@19-10.200.8.17:22-10.200.16.10:42424.service - OpenSSH per-connection server daemon (10.200.16.10:42424). Jan 29 12:05:34.163597 sshd[6377]: Accepted publickey for core from 10.200.16.10 port 42424 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:34.165232 sshd[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:34.170541 systemd-logind[1684]: New session 22 of user core. Jan 29 12:05:34.177659 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 12:05:36.663806 sshd[6377]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:36.668280 systemd-logind[1684]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:05:36.668837 systemd[1]: sshd@19-10.200.8.17:22-10.200.16.10:42424.service: Deactivated successfully. Jan 29 12:05:36.671658 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:05:36.672714 systemd-logind[1684]: Removed session 22. Jan 29 12:05:36.782841 systemd[1]: Started sshd@20-10.200.8.17:22-10.200.16.10:49016.service - OpenSSH per-connection server daemon (10.200.16.10:49016). Jan 29 12:05:37.435232 sshd[6395]: Accepted publickey for core from 10.200.16.10 port 49016 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:37.437447 sshd[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:37.442719 systemd-logind[1684]: New session 23 of user core. Jan 29 12:05:37.450681 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 12:05:38.076743 sshd[6395]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:38.081187 systemd-logind[1684]: Session 23 logged out. Waiting for processes to exit. Jan 29 12:05:38.081793 systemd[1]: sshd@20-10.200.8.17:22-10.200.16.10:49016.service: Deactivated successfully. Jan 29 12:05:38.084391 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 12:05:38.085834 systemd-logind[1684]: Removed session 23. Jan 29 12:05:38.199854 systemd[1]: Started sshd@21-10.200.8.17:22-10.200.16.10:49024.service - OpenSSH per-connection server daemon (10.200.16.10:49024). Jan 29 12:05:38.846677 sshd[6405]: Accepted publickey for core from 10.200.16.10 port 49024 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:38.848320 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:38.853214 systemd-logind[1684]: New session 24 of user core. Jan 29 12:05:38.857648 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 12:05:39.364436 sshd[6405]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:39.367705 systemd[1]: sshd@21-10.200.8.17:22-10.200.16.10:49024.service: Deactivated successfully. Jan 29 12:05:39.370227 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 12:05:39.372304 systemd-logind[1684]: Session 24 logged out. Waiting for processes to exit. Jan 29 12:05:39.373698 systemd-logind[1684]: Removed session 24. 
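The session-N.scope units and the paired "New session N of user core" / "Removed session N" lines come from pam_systemd: on each "session opened" line, sshd's PAM stack registers the login with systemd-logind, which places the user's processes in a transient scope; at logout, logind waits for the scope's processes to exit before removing the session, which is exactly the ordering logged above. The same registry can be read back over logind's public D-Bus API; a small sketch using the godbus library (assumes a systemd host; loginctl list-sessions prints the same table):

package main

import (
	"fmt"
	"log"

	"github.com/godbus/dbus/v5"
)

// session matches the wire format of login1's ListSessions reply, a(susso):
// session ID, owner UID, user name, seat, and the session's object path.
type session struct {
	ID   string
	UID  uint32
	User string
	Seat string
	Path dbus.ObjectPath
}

func main() {
	conn, err := dbus.SystemBus()
	if err != nil {
		log.Fatal(err)
	}
	login1 := conn.Object("org.freedesktop.login1", "/org/freedesktop/login1")

	var sessions []session
	if err := login1.Call("org.freedesktop.login1.Manager.ListSessions", 0).Store(&sessions); err != nil {
		log.Fatal(err)
	}
	for _, s := range sessions {
		fmt.Printf("session %s: user=%s uid=%d\n", s.ID, s.User, s.UID)
	}
}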
Jan 29 12:05:44.485433 systemd[1]: Started sshd@22-10.200.8.17:22-10.200.16.10:49030.service - OpenSSH per-connection server daemon (10.200.16.10:49030). Jan 29 12:05:45.141828 sshd[6419]: Accepted publickey for core from 10.200.16.10 port 49030 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:45.143785 sshd[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:45.148947 systemd-logind[1684]: New session 25 of user core. Jan 29 12:05:45.154706 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 12:05:45.658526 sshd[6419]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:45.662891 systemd[1]: sshd@22-10.200.8.17:22-10.200.16.10:49030.service: Deactivated successfully. Jan 29 12:05:45.665447 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 12:05:45.666825 systemd-logind[1684]: Session 25 logged out. Waiting for processes to exit. Jan 29 12:05:45.667868 systemd-logind[1684]: Removed session 25. Jan 29 12:05:50.788137 systemd[1]: Started sshd@23-10.200.8.17:22-10.200.16.10:38372.service - OpenSSH per-connection server daemon (10.200.16.10:38372). Jan 29 12:05:51.442103 sshd[6435]: Accepted publickey for core from 10.200.16.10 port 38372 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:51.443785 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:51.448561 systemd-logind[1684]: New session 26 of user core. Jan 29 12:05:51.452653 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 12:05:51.959458 sshd[6435]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:51.963141 systemd[1]: sshd@23-10.200.8.17:22-10.200.16.10:38372.service: Deactivated successfully. Jan 29 12:05:51.966151 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 12:05:51.968689 systemd-logind[1684]: Session 26 logged out. Waiting for processes to exit. Jan 29 12:05:51.970158 systemd-logind[1684]: Removed session 26. Jan 29 12:05:57.085792 systemd[1]: Started sshd@24-10.200.8.17:22-10.200.16.10:53780.service - OpenSSH per-connection server daemon (10.200.16.10:53780). Jan 29 12:05:57.733600 sshd[6490]: Accepted publickey for core from 10.200.16.10 port 53780 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:57.735237 sshd[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:57.740264 systemd-logind[1684]: New session 27 of user core. Jan 29 12:05:57.747669 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 12:05:58.253357 sshd[6490]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:58.256516 systemd[1]: sshd@24-10.200.8.17:22-10.200.16.10:53780.service: Deactivated successfully. Jan 29 12:05:58.258836 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 12:05:58.261124 systemd-logind[1684]: Session 27 logged out. Waiting for processes to exit. Jan 29 12:05:58.262154 systemd-logind[1684]: Removed session 27. Jan 29 12:06:03.372789 systemd[1]: Started sshd@25-10.200.8.17:22-10.200.16.10:53782.service - OpenSSH per-connection server daemon (10.200.16.10:53782). 
Jan 29 12:06:04.021062 sshd[6504]: Accepted publickey for core from 10.200.16.10 port 53782 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:06:04.022716 sshd[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:04.028225 systemd-logind[1684]: New session 28 of user core. Jan 29 12:06:04.038658 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 29 12:06:04.546400 sshd[6504]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:04.550104 systemd[1]: sshd@25-10.200.8.17:22-10.200.16.10:53782.service: Deactivated successfully. Jan 29 12:06:04.552463 systemd[1]: session-28.scope: Deactivated successfully. Jan 29 12:06:04.554322 systemd-logind[1684]: Session 28 logged out. Waiting for processes to exit. Jan 29 12:06:04.555421 systemd-logind[1684]: Removed session 28. Jan 29 12:06:09.665792 systemd[1]: Started sshd@26-10.200.8.17:22-10.200.16.10:60320.service - OpenSSH per-connection server daemon (10.200.16.10:60320). Jan 29 12:06:10.313996 sshd[6521]: Accepted publickey for core from 10.200.16.10 port 60320 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:06:10.315703 sshd[6521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:10.324721 systemd-logind[1684]: New session 29 of user core. Jan 29 12:06:10.330656 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 29 12:06:10.833121 sshd[6521]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:10.836311 systemd[1]: sshd@26-10.200.8.17:22-10.200.16.10:60320.service: Deactivated successfully. Jan 29 12:06:10.838898 systemd[1]: session-29.scope: Deactivated successfully. Jan 29 12:06:10.840576 systemd-logind[1684]: Session 29 logged out. Waiting for processes to exit. Jan 29 12:06:10.842252 systemd-logind[1684]: Removed session 29. Jan 29 12:06:15.948222 systemd[1]: Started sshd@27-10.200.8.17:22-10.200.16.10:44234.service - OpenSSH per-connection server daemon (10.200.16.10:44234). Jan 29 12:06:16.600500 sshd[6536]: Accepted publickey for core from 10.200.16.10 port 44234 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:06:16.602239 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:16.607333 systemd-logind[1684]: New session 30 of user core. Jan 29 12:06:16.610660 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 29 12:06:17.125178 sshd[6536]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:17.129224 systemd[1]: sshd@27-10.200.8.17:22-10.200.16.10:44234.service: Deactivated successfully. Jan 29 12:06:17.131740 systemd[1]: session-30.scope: Deactivated successfully. Jan 29 12:06:17.132567 systemd-logind[1684]: Session 30 logged out. Waiting for processes to exit. Jan 29 12:06:17.133914 systemd-logind[1684]: Removed session 30. Jan 29 12:06:22.243792 systemd[1]: Started sshd@28-10.200.8.17:22-10.200.16.10:44242.service - OpenSSH per-connection server daemon (10.200.16.10:44242). Jan 29 12:06:22.366250 systemd[1]: run-containerd-runc-k8s.io-9b4cf6d4c82bbce8bdbf321371f5a4d5c7cad0b61265d8bd7277bf1f6a200786-runc.2B8a8c.mount: Deactivated successfully. 
Jan 29 12:06:22.898675 sshd[6549]: Accepted publickey for core from 10.200.16.10 port 44242 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:06:22.900397 sshd[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:22.904648 systemd-logind[1684]: New session 31 of user core. Jan 29 12:06:22.911647 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 29 12:06:23.417644 sshd[6549]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:23.421429 systemd[1]: sshd@28-10.200.8.17:22-10.200.16.10:44242.service: Deactivated successfully. Jan 29 12:06:23.424556 systemd[1]: session-31.scope: Deactivated successfully. Jan 29 12:06:23.425385 systemd-logind[1684]: Session 31 logged out. Waiting for processes to exit. Jan 29 12:06:23.426612 systemd-logind[1684]: Removed session 31. Jan 29 12:06:28.540860 systemd[1]: Started sshd@29-10.200.8.17:22-10.200.16.10:43232.service - OpenSSH per-connection server daemon (10.200.16.10:43232). Jan 29 12:06:29.197594 sshd[6624]: Accepted publickey for core from 10.200.16.10 port 43232 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:06:29.199619 sshd[6624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:29.209546 systemd-logind[1684]: New session 32 of user core. Jan 29 12:06:29.212999 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 29 12:06:29.719052 sshd[6624]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:29.727423 systemd-logind[1684]: Session 32 logged out. Waiting for processes to exit. Jan 29 12:06:29.728443 systemd[1]: sshd@29-10.200.8.17:22-10.200.16.10:43232.service: Deactivated successfully. Jan 29 12:06:29.731087 systemd[1]: session-32.scope: Deactivated successfully. Jan 29 12:06:29.738975 systemd-logind[1684]: Removed session 32. Jan 29 12:06:34.840874 systemd[1]: Started sshd@30-10.200.8.17:22-10.200.16.10:43234.service - OpenSSH per-connection server daemon (10.200.16.10:43234). Jan 29 12:06:35.487731 sshd[6645]: Accepted publickey for core from 10.200.16.10 port 43234 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:06:35.489341 sshd[6645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:35.494593 systemd-logind[1684]: New session 33 of user core. Jan 29 12:06:35.500748 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 29 12:06:36.002640 sshd[6645]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:36.005846 systemd[1]: sshd@30-10.200.8.17:22-10.200.16.10:43234.service: Deactivated successfully. Jan 29 12:06:36.008247 systemd[1]: session-33.scope: Deactivated successfully. Jan 29 12:06:36.010408 systemd-logind[1684]: Session 33 logged out. Waiting for processes to exit. Jan 29 12:06:36.011680 systemd-logind[1684]: Removed session 33.
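Since the tail of this journal is dominated by those open/close pairs (sessions 10 through 33), a tiny parser is enough to audit them, for example to spot sessions that were opened but never logged out. A sketch that reads journal text like the above from stdin, keyed to the systemd-logind wording in this log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	opened = regexp.MustCompile(`New session (\d+) of user (\w+)`)
	closed = regexp.MustCompile(`Removed session (\d+)`)
)

func main() {
	open := map[string]string{} // session ID -> user
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			open[m[1]] = m[2]
		}
		if m := closed.FindStringSubmatch(line); m != nil {
			delete(open, m[1])
		}
	}
	fmt.Printf("sessions still open at end of log: %v\n", open)
}

Fed this excerpt, it reports an empty map: every session opened here is eventually removed, so the SSH churn is routine connect/disconnect traffic rather than leaked sessions.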