Feb 13 20:46:41.039386 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:46:41.039422 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:46:41.039502 kernel: BIOS-provided physical RAM map:
Feb 13 20:46:41.039515 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 20:46:41.039525 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 13 20:46:41.039537 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 13 20:46:41.039551 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Feb 13 20:46:41.039588 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Feb 13 20:46:41.039601 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 13 20:46:41.039613 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 13 20:46:41.039626 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 13 20:46:41.039638 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 13 20:46:41.039650 kernel: printk: bootconsole [earlyser0] enabled
Feb 13 20:46:41.039661 kernel: NX (Execute Disable) protection: active
Feb 13 20:46:41.039680 kernel: APIC: Static calls initialized
Feb 13 20:46:41.039694 kernel: efi: EFI v2.7 by Microsoft
Feb 13 20:46:41.039708 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Feb 13 20:46:41.039721 kernel: SMBIOS 3.1.0 present.
Feb 13 20:46:41.039735 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Feb 13 20:46:41.039749 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 13 20:46:41.039763 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 13 20:46:41.039776 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0
Feb 13 20:46:41.039790 kernel: Hyper-V: Nested features: 0x1e0101
Feb 13 20:46:41.039803 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 13 20:46:41.039818 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 13 20:46:41.039830 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 20:46:41.039842 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 20:46:41.039856 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 13 20:46:41.039868 kernel: tsc: Detected 2593.907 MHz processor
Feb 13 20:46:41.039881 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:46:41.039893 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:46:41.039904 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 13 20:46:41.039916 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 20:46:41.039931 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:46:41.039943 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 13 20:46:41.039955 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 13 20:46:41.039967 kernel: Using GB pages for direct mapping
Feb 13 20:46:41.039980 kernel: Secure boot disabled
Feb 13 20:46:41.039992 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:46:41.040005 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 13 20:46:41.040024 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:46:41.040040 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:46:41.040053 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 13 20:46:41.040065 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 13 20:46:41.040079 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:46:41.040094 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:46:41.040108 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:46:41.040126 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:46:41.040140 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:46:41.040154 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:46:41.040168 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:46:41.040182 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 13 20:46:41.040196 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 13 20:46:41.040211 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 13 20:46:41.040225 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 13 20:46:41.040242 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 13 20:46:41.040256 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 13 20:46:41.040270 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 13 20:46:41.040284 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 13 20:46:41.040298 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 13 20:46:41.040313 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 13 20:46:41.040327 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:46:41.040341 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:46:41.040356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 13 20:46:41.040373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 13 20:46:41.040387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 13 20:46:41.040401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 13 20:46:41.040415 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 13 20:46:41.040428 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 13 20:46:41.040442 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 13 20:46:41.040457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 13 20:46:41.040471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 13 20:46:41.040485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 13 20:46:41.040502 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 13 20:46:41.040515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 13 20:46:41.040530 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 13 20:46:41.040544 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 13 20:46:41.040558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 13 20:46:41.040636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 13 20:46:41.040651 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 13 20:46:41.040666 kernel: NODE_DATA(0) allocated [mem 0x2bfff9000-0x2bfffefff]
Feb 13 20:46:41.040680 kernel: Zone ranges:
Feb 13 20:46:41.040699 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:46:41.040713 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 20:46:41.040727 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 20:46:41.040741 kernel: Movable zone start for each node
Feb 13 20:46:41.040755 kernel: Early memory node ranges
Feb 13 20:46:41.040769 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 20:46:41.040783 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 13 20:46:41.040797 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 13 20:46:41.040811 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 20:46:41.040829 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 13 20:46:41.040843 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:46:41.040856 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 20:46:41.040870 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 13 20:46:41.040884 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 13 20:46:41.040899 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 13 20:46:41.040914 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:46:41.040928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:46:41.040942 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:46:41.040959 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 13 20:46:41.040973 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:46:41.040987 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 13 20:46:41.041001 kernel: Booting paravirtualized kernel on Hyper-V
Feb 13 20:46:41.041015 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:46:41.041030 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:46:41.041043 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:46:41.041057 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:46:41.041071 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:46:41.041088 kernel: Hyper-V: PV spinlocks enabled
Feb 13 20:46:41.041102 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 20:46:41.041118 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:46:41.041132 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:46:41.041145 kernel: random: crng init done
Feb 13 20:46:41.041159 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 20:46:41.041173 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:46:41.041187 kernel: Fallback order for Node 0: 0
Feb 13 20:46:41.041205 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 13 20:46:41.041230 kernel: Policy zone: Normal
Feb 13 20:46:41.041245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:46:41.041262 kernel: software IO TLB: area num 2.
Feb 13 20:46:41.041278 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 310128K reserved, 0K cma-reserved)
Feb 13 20:46:41.041293 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:46:41.041308 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:46:41.041322 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:46:41.041337 kernel: Dynamic Preempt: voluntary
Feb 13 20:46:41.041352 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:46:41.041368 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:46:41.041387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:46:41.041402 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:46:41.041417 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:46:41.041431 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:46:41.041446 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:46:41.041463 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:46:41.041478 kernel: Using NULL legacy PIC
Feb 13 20:46:41.041493 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 13 20:46:41.041508 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:46:41.041523 kernel: Console: colour dummy device 80x25
Feb 13 20:46:41.041537 kernel: printk: console [tty1] enabled
Feb 13 20:46:41.041552 kernel: printk: console [ttyS0] enabled
Feb 13 20:46:41.041604 kernel: printk: bootconsole [earlyser0] disabled
Feb 13 20:46:41.041621 kernel: ACPI: Core revision 20230628
Feb 13 20:46:41.041637 kernel: Failed to register legacy timer interrupt
Feb 13 20:46:41.041657 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:46:41.041673 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 20:46:41.041688 kernel: Hyper-V: Using IPI hypercalls
Feb 13 20:46:41.041704 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Feb 13 20:46:41.041720 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Feb 13 20:46:41.041736 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Feb 13 20:46:41.041751 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Feb 13 20:46:41.041767 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Feb 13 20:46:41.041782 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Feb 13 20:46:41.041800 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Feb 13 20:46:41.041816 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 20:46:41.041831 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 20:46:41.041846 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:46:41.041861 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 20:46:41.041877 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:46:41.041891 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:46:41.041906 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 20:46:41.041921 kernel: RETBleed: Vulnerable
Feb 13 20:46:41.041937 kernel: Speculative Store Bypass: Vulnerable
Feb 13 20:46:41.041951 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:46:41.041965 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:46:41.041979 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:46:41.041993 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:46:41.042006 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:46:41.042020 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 20:46:41.042034 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 20:46:41.042047 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 20:46:41.042061 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:46:41.042075 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 13 20:46:41.042092 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 13 20:46:41.042106 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 13 20:46:41.042119 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 13 20:46:41.042133 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:46:41.042147 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:46:41.042161 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:46:41.042176 kernel: landlock: Up and running.
Feb 13 20:46:41.042191 kernel: SELinux: Initializing.
Feb 13 20:46:41.042207 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:46:41.042221 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:46:41.042235 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 20:46:41.042250 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:46:41.042268 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:46:41.042282 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:46:41.042297 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 20:46:41.042311 kernel: signal: max sigframe size: 3632
Feb 13 20:46:41.042326 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:46:41.042341 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:46:41.042355 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:46:41.042369 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:46:41.042383 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:46:41.042400 kernel: .... node #0, CPUs: #1
Feb 13 20:46:41.042415 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 13 20:46:41.042431 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 20:46:41.042445 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:46:41.042460 kernel: smpboot: Max logical packages: 1
Feb 13 20:46:41.042474 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 13 20:46:41.042488 kernel: devtmpfs: initialized
Feb 13 20:46:41.042503 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:46:41.042520 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 13 20:46:41.042535 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:46:41.042550 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:46:41.042564 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:46:41.042599 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:46:41.042614 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:46:41.042629 kernel: audit: type=2000 audit(1739479600.027:1): state=initialized audit_enabled=0 res=1
Feb 13 20:46:41.042644 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:46:41.042658 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:46:41.042676 kernel: cpuidle: using governor menu
Feb 13 20:46:41.042691 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:46:41.042705 kernel: dca service started, version 1.12.1
Feb 13 20:46:41.042720 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Feb 13 20:46:41.042735 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:46:41.042750 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:46:41.042764 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:46:41.042779 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:46:41.042793 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:46:41.042810 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:46:41.042825 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:46:41.042840 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:46:41.042855 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:46:41.042869 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:46:41.042884 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:46:41.042899 kernel: ACPI: Interpreter enabled
Feb 13 20:46:41.042914 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:46:41.042929 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:46:41.042946 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:46:41.042961 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 20:46:41.042976 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 13 20:46:41.042991 kernel: iommu: Default domain type: Translated
Feb 13 20:46:41.043006 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:46:41.043021 kernel: efivars: Registered efivars operations
Feb 13 20:46:41.043034 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:46:41.043047 kernel: PCI: System does not support PCI
Feb 13 20:46:41.043062 kernel: vgaarb: loaded
Feb 13 20:46:41.043081 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 13 20:46:41.043096 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:46:41.043112 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:46:41.043128 kernel: pnp: PnP ACPI init
Feb 13 20:46:41.043144 kernel: pnp: PnP ACPI: found 3 devices
Feb 13 20:46:41.043160 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:46:41.043175 kernel: NET: Registered PF_INET protocol family
Feb 13 20:46:41.043191 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:46:41.043207 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 20:46:41.043225 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:46:41.043240 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:46:41.043254 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 20:46:41.043269 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 20:46:41.043283 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:46:41.043298 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:46:41.043313 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:46:41.043327 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:46:41.043341 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:46:41.043359 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 20:46:41.043373 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Feb 13 20:46:41.043387 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:46:41.043402 kernel: Initialise system trusted keyrings
Feb 13 20:46:41.043416 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 20:46:41.043430 kernel: Key type asymmetric registered
Feb 13 20:46:41.043444 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:46:41.043459 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:46:41.043473 kernel: io scheduler mq-deadline registered
Feb 13 20:46:41.043490 kernel: io scheduler kyber registered
Feb 13 20:46:41.043504 kernel: io scheduler bfq registered
Feb 13 20:46:41.043520 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:46:41.043534 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:46:41.043549 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:46:41.043564 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 20:46:41.043590 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 20:46:41.047080 kernel: rtc_cmos 00:02: registered as rtc0
Feb 13 20:46:41.047219 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T20:46:40 UTC (1739479600)
Feb 13 20:46:41.047335 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 13 20:46:41.047356 kernel: intel_pstate: CPU model not supported
Feb 13 20:46:41.047370 kernel: efifb: probing for efifb
Feb 13 20:46:41.047386 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 20:46:41.047402 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 20:46:41.047417 kernel: efifb: scrolling: redraw
Feb 13 20:46:41.047432 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 20:46:41.047451 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:46:41.047466 kernel: fb0: EFI VGA frame buffer device
Feb 13 20:46:41.047481 kernel: pstore: Using crash dump compression: deflate
Feb 13 20:46:41.047496 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 20:46:41.047510 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:46:41.047524 kernel: Segment Routing with IPv6
Feb 13 20:46:41.047539 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:46:41.047554 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:46:41.047584 kernel: Key type dns_resolver registered
Feb 13 20:46:41.047604 kernel: IPI shorthand broadcast: enabled
Feb 13 20:46:41.047619 kernel: sched_clock: Marking stable (788002300, 42280900)->(1028633900, -198350700)
Feb 13 20:46:41.047634 kernel: registered taskstats version 1
Feb 13 20:46:41.047648 kernel: Loading compiled-in X.509 certificates
Feb 13 20:46:41.047663 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:46:41.047677 kernel: Key type .fscrypt registered
Feb 13 20:46:41.047691 kernel: Key type fscrypt-provisioning registered
Feb 13 20:46:41.047705 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:46:41.047721 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:46:41.047739 kernel: ima: No architecture policies found
Feb 13 20:46:41.047755 kernel: clk: Disabling unused clocks
Feb 13 20:46:41.047769 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:46:41.047784 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:46:41.047800 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:46:41.047814 kernel: Run /init as init process
Feb 13 20:46:41.047828 kernel: with arguments:
Feb 13 20:46:41.047843 kernel: /init
Feb 13 20:46:41.047857 kernel: with environment:
Feb 13 20:46:41.047872 kernel: HOME=/
Feb 13 20:46:41.047892 kernel: TERM=linux
Feb 13 20:46:41.047907 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:46:41.047925 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:46:41.047946 systemd[1]: Detected virtualization microsoft.
Feb 13 20:46:41.047963 systemd[1]: Detected architecture x86-64.
Feb 13 20:46:41.047980 systemd[1]: Running in initrd.
Feb 13 20:46:41.047997 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:46:41.048015 systemd[1]: Hostname set to <localhost>.
Feb 13 20:46:41.048032 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:46:41.048048 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:46:41.048064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:46:41.048082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:46:41.048100 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:46:41.048118 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:46:41.048135 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:46:41.048154 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:46:41.048171 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:46:41.048186 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:46:41.048201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:46:41.048216 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:46:41.048230 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:46:41.048245 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:46:41.048264 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:46:41.048280 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:46:41.048296 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:46:41.048312 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:46:41.048327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:46:41.048343 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:46:41.048359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:46:41.048376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:46:41.048394 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:46:41.048410 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:46:41.048426 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:46:41.048442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:46:41.048458 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:46:41.048474 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:46:41.048490 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:46:41.048506 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:46:41.048544 systemd-journald[176]: Collecting audit messages is disabled.
Feb 13 20:46:41.048604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:46:41.048620 systemd-journald[176]: Journal started
Feb 13 20:46:41.048656 systemd-journald[176]: Runtime Journal (/run/log/journal/8f5e71bcde2045beb057eb63a1bf42b0) is 8.0M, max 158.8M, 150.8M free.
Feb 13 20:46:41.060088 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:46:41.062626 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:46:41.067974 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:46:41.071360 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:46:41.078071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:46:41.082652 systemd-modules-load[177]: Inserted module 'overlay'
Feb 13 20:46:41.093833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:46:41.107961 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:46:41.117446 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:46:41.119674 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:46:41.122387 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:46:41.148657 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:46:41.156851 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:46:41.156916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:46:41.163886 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:46:41.172204 systemd-modules-load[177]: Inserted module 'br_netfilter'
Feb 13 20:46:41.174340 kernel: Bridge firewalling registered
Feb 13 20:46:41.178770 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:46:41.183872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:46:41.190765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:46:41.197278 dracut-cmdline[207]: dracut-dracut-053
Feb 13 20:46:41.200099 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:46:41.229088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:46:41.242170 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:46:41.280742 systemd-resolved[262]: Positive Trust Anchors:
Feb 13 20:46:41.280759 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:46:41.280814 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:46:41.308304 kernel: SCSI subsystem initialized
Feb 13 20:46:41.291541 systemd-resolved[262]: Defaulting to hostname 'linux'.
Feb 13 20:46:41.308033 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:46:41.320610 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:46:41.317716 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:46:41.330589 kernel: iscsi: registered transport (tcp)
Feb 13 20:46:41.352493 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:46:41.352556 kernel: QLogic iSCSI HBA Driver
Feb 13 20:46:41.387504 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:46:41.395722 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:46:41.423003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:46:41.423078 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:46:41.426220 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:46:41.465596 kernel: raid6: avx512x4 gen() 18512 MB/s
Feb 13 20:46:41.484587 kernel: raid6: avx512x2 gen() 18728 MB/s
Feb 13 20:46:41.503584 kernel: raid6: avx512x1 gen() 18647 MB/s
Feb 13 20:46:41.521584 kernel: raid6: avx2x4 gen() 18694 MB/s
Feb 13 20:46:41.540583 kernel: raid6: avx2x2 gen() 18514 MB/s
Feb 13 20:46:41.560151 kernel: raid6: avx2x1 gen() 13888 MB/s
Feb 13 20:46:41.560197 kernel: raid6: using algorithm avx512x2 gen() 18728 MB/s
Feb 13 20:46:41.581438 kernel: raid6: .... xor() 29873 MB/s, rmw enabled
Feb 13 20:46:41.581471 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 20:46:41.603590 kernel: xor: automatically using best checksumming function avx
Feb 13 20:46:41.754599 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:46:41.764144 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:46:41.774710 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:46:41.791192 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Feb 13 20:46:41.795623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:46:41.809719 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:46:41.823976 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Feb 13 20:46:41.849849 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:46:41.860867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:46:41.899286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:46:41.915297 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:46:41.948485 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:46:41.955236 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:46:41.961914 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:46:41.967806 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:46:41.978651 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:46:41.981858 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:46:42.006586 kernel: hv_vmbus: Vmbus version:5.2
Feb 13 20:46:42.019457 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:46:42.028613 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:46:42.028649 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:46:42.043718 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:46:42.044925 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:46:42.050886 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:46:42.064000 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 20:46:42.051343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:46:42.088701 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:46:42.088728 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 13 20:46:42.088755 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 20:46:42.088774 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 20:46:42.074560 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:46:42.079411 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:46:42.099719 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 20:46:42.099746 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 20:46:42.104178 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 13 20:46:42.104609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:46:42.121559 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 20:46:42.121735 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 20:46:42.117818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:46:42.136314 kernel: PTP clock support registered
Feb 13 20:46:42.136347 kernel: scsi host0: storvsc_host_t
Feb 13 20:46:42.136555 kernel: scsi host1: storvsc_host_t
Feb 13 20:46:42.136611 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 20:46:42.117902 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:46:42.139439 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:46:42.153668 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 20:46:42.153897 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 20:46:42.153952 kernel: hv_vmbus: registering driver hv_utils
Feb 13 20:46:42.155663 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 20:46:42.159285 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 20:46:42.159313 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 20:46:42.840719 systemd-resolved[262]: Clock change detected. Flushing caches.
Feb 13 20:46:42.862489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:46:42.874556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:46:42.890882 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 20:46:42.893556 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 20:46:42.893581 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 20:46:42.905601 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 20:46:42.922755 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 20:46:42.922984 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 20:46:42.923160 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 20:46:42.923339 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 20:46:42.923495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:46:42.923514 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 20:46:42.907267 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:46:43.005674 kernel: hv_netvsc 7c1e521f-9da2-7c1e-521f-9da27c1e521f eth0: VF slot 1 added
Feb 13 20:46:43.016122 kernel: hv_vmbus: registering driver hv_pci
Feb 13 20:46:43.016170 kernel: hv_pci 1ad5b0c9-04f5-4263-b0bd-1d4ab754703f: PCI VMBus probing: Using version 0x10004
Feb 13 20:46:43.058777 kernel: hv_pci 1ad5b0c9-04f5-4263-b0bd-1d4ab754703f: PCI host bridge to bus 04f5:00
Feb 13 20:46:43.058970 kernel: pci_bus 04f5:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 13 20:46:43.059159 kernel: pci_bus 04f5:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 20:46:43.059339 kernel: pci 04f5:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 13 20:46:43.059537 kernel: pci 04f5:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 13 20:46:43.059705 kernel: pci 04f5:00:02.0: enabling Extended Tags
Feb 13 20:46:43.059874 kernel: pci 04f5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 04f5:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 13 20:46:43.060045 kernel: pci_bus 04f5:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 20:46:43.060751 kernel: pci 04f5:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 13 20:46:43.229310 kernel: mlx5_core 04f5:00:02.0: enabling device (0000 -> 0002)
Feb 13 20:46:43.452570 kernel: mlx5_core 04f5:00:02.0: firmware version: 14.30.5000
Feb 13 20:46:43.453042 kernel: hv_netvsc 7c1e521f-9da2-7c1e-521f-9da27c1e521f eth0: VF registering: eth1
Feb 13 20:46:43.453243 kernel: mlx5_core 04f5:00:02.0 eth1: joined to eth0
Feb 13 20:46:43.453555 kernel: mlx5_core 04f5:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 13 20:46:43.459196 kernel: mlx5_core 04f5:00:02.0 enP1269s1: renamed from eth1
Feb 13 20:46:43.851898 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 20:46:43.973955 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 20:46:44.020215 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (440)
Feb 13 20:46:44.034782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 20:46:44.062204 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (443)
Feb 13 20:46:44.075951 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 20:46:44.080421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 20:46:44.093343 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:46:44.108124 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:46:44.113228 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:46:45.120463 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:46:45.120528 disk-uuid[602]: The operation has completed successfully.
Feb 13 20:46:45.197022 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:46:45.197132 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:46:45.213318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:46:45.219351 sh[688]: Success
Feb 13 20:46:45.266671 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:46:45.589344 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:46:45.603296 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:46:45.605928 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:46:45.628637 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:46:45.628693 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:46:45.632194 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:46:45.634935 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:46:45.637441 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:46:46.229001 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:46:46.232088 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:46:46.243420 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:46:46.248347 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:46:46.269468 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:46:46.269515 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:46:46.269538 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:46:46.295415 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:46:46.305668 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:46:46.312199 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:46:46.318517 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:46:46.329389 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:46:46.341770 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:46:46.349383 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:46:46.370134 systemd-networkd[872]: lo: Link UP
Feb 13 20:46:46.370143 systemd-networkd[872]: lo: Gained carrier
Feb 13 20:46:46.372159 systemd-networkd[872]: Enumeration completed
Feb 13 20:46:46.372262 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:46:46.373054 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:46:46.373057 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:46:46.375536 systemd[1]: Reached target network.target - Network.
Feb 13 20:46:46.435225 kernel: mlx5_core 04f5:00:02.0 enP1269s1: Link up
Feb 13 20:46:46.469213 kernel: hv_netvsc 7c1e521f-9da2-7c1e-521f-9da27c1e521f eth0: Data path switched to VF: enP1269s1
Feb 13 20:46:46.469506 systemd-networkd[872]: enP1269s1: Link UP
Feb 13 20:46:46.469680 systemd-networkd[872]: eth0: Link UP
Feb 13 20:46:46.469889 systemd-networkd[872]: eth0: Gained carrier
Feb 13 20:46:46.469907 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:46:46.477065 systemd-networkd[872]: enP1269s1: Gained carrier
Feb 13 20:46:46.496280 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 13 20:46:47.800406 ignition[854]: Ignition 2.19.0
Feb 13 20:46:47.800418 ignition[854]: Stage: fetch-offline
Feb 13 20:46:47.801966 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:46:47.800467 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:46:47.800477 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:46:47.800598 ignition[854]: parsed url from cmdline: ""
Feb 13 20:46:47.800603 ignition[854]: no config URL provided
Feb 13 20:46:47.815307 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:46:47.800611 ignition[854]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:46:47.800622 ignition[854]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:46:47.800629 ignition[854]: failed to fetch config: resource requires networking
Feb 13 20:46:47.800865 ignition[854]: Ignition finished successfully
Feb 13 20:46:47.839938 ignition[880]: Ignition 2.19.0
Feb 13 20:46:47.839950 ignition[880]: Stage: fetch
Feb 13 20:46:47.840172 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:46:47.840200 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:46:47.840296 ignition[880]: parsed url from cmdline: ""
Feb 13 20:46:47.840299 ignition[880]: no config URL provided
Feb 13 20:46:47.840304 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:46:47.840311 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:46:47.840329 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 20:46:47.924076 ignition[880]: GET result: OK
Feb 13 20:46:47.924236 ignition[880]: config has been read from IMDS userdata
Feb 13 20:46:47.924278 ignition[880]: parsing config with SHA512: caf7d639cd3ccb6e0bfe49b1496754dfd89e09aeaa30d4c083ab40d8f354026c6b94b4caf45b4ee531b02b6dd42d66b550edfea4da86ad2a25aa688dadfc320f
Feb 13 20:46:47.930776 unknown[880]: fetched base config from "system"
Feb 13 20:46:47.930786 unknown[880]: fetched base config from "system"
Feb 13 20:46:47.931252 ignition[880]: fetch: fetch complete
Feb 13 20:46:47.930793 unknown[880]: fetched user config from "azure"
Feb 13 20:46:47.931258 ignition[880]: fetch: fetch passed
Feb 13 20:46:47.933570 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:46:47.931301 ignition[880]: Ignition finished successfully
Feb 13 20:46:47.942409 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:46:47.960731 ignition[887]: Ignition 2.19.0
Feb 13 20:46:47.960742 ignition[887]: Stage: kargs
Feb 13 20:46:47.960964 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:46:47.964021 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:46:47.960977 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:46:47.961882 ignition[887]: kargs: kargs passed
Feb 13 20:46:47.961923 ignition[887]: Ignition finished successfully
Feb 13 20:46:47.978749 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:46:47.994668 ignition[893]: Ignition 2.19.0
Feb 13 20:46:47.994679 ignition[893]: Stage: disks
Feb 13 20:46:47.996621 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:46:47.994901 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:46:47.999716 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:46:47.994914 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:46:48.003201 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:46:47.995785 ignition[893]: disks: disks passed
Feb 13 20:46:48.006398 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:46:47.995826 ignition[893]: Ignition finished successfully
Feb 13 20:46:48.011155 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:46:48.013728 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:46:48.030321 systemd-networkd[872]: enP1269s1: Gained IPv6LL
Feb 13 20:46:48.036414 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:46:48.108270 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 20:46:48.113872 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:46:48.128325 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:46:48.218202 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 20:46:48.218639 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:46:48.221425 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:46:48.283312 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:46:48.289650 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:46:48.295899 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 20:46:48.301980 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:46:48.308912 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (912)
Feb 13 20:46:48.302020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:46:48.318454 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:46:48.328723 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:46:48.328748 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:46:48.328760 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:46:48.332203 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:46:48.333947 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:46:48.341334 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:46:48.478398 systemd-networkd[872]: eth0: Gained IPv6LL
Feb 13 20:46:49.400837 coreos-metadata[914]: Feb 13 20:46:49.400 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 20:46:49.407628 coreos-metadata[914]: Feb 13 20:46:49.407 INFO Fetch successful
Feb 13 20:46:49.407628 coreos-metadata[914]: Feb 13 20:46:49.407 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 20:46:49.427390 coreos-metadata[914]: Feb 13 20:46:49.427 INFO Fetch successful
Feb 13 20:46:49.431050 coreos-metadata[914]: Feb 13 20:46:49.431 INFO wrote hostname ci-4081.3.1-a-faf44fbcb5 to /sysroot/etc/hostname
Feb 13 20:46:49.437232 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:46:49.592084 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:46:49.629665 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:46:49.663083 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:46:49.697807 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:46:51.010420 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:46:51.021428 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:46:51.028359 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:46:51.039198 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:46:51.039964 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:46:51.067547 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:46:51.074127 ignition[1030]: INFO : Ignition 2.19.0 Feb 13 20:46:51.074127 ignition[1030]: INFO : Stage: mount Feb 13 20:46:51.080870 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:46:51.080870 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:46:51.080870 ignition[1030]: INFO : mount: mount passed Feb 13 20:46:51.080870 ignition[1030]: INFO : Ignition finished successfully Feb 13 20:46:51.076147 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:46:51.099280 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:46:51.113344 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:46:51.124198 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1043) Feb 13 20:46:51.128195 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:46:51.128228 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:46:51.132743 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:46:51.138198 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:46:51.139317 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:46:51.162667 ignition[1059]: INFO : Ignition 2.19.0 Feb 13 20:46:51.164959 ignition[1059]: INFO : Stage: files Feb 13 20:46:51.164959 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:46:51.164959 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:46:51.172585 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:46:51.212609 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:46:51.216828 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:46:51.343675 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:46:51.347436 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:46:51.347436 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:46:51.344198 unknown[1059]: wrote ssh authorized keys file for user: core Feb 13 20:46:51.373078 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:46:51.377871 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:46:51.377871 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:46:51.377871 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:46:51.601800 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 20:46:51.775734 ignition[1059]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:46:52.208417 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 20:46:52.561434 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:46:52.561434 ignition[1059]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 20:46:52.572952 
ignition[1059]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: files passed Feb 13 20:46:52.572952 ignition[1059]: INFO : Ignition finished successfully Feb 13 20:46:52.569762 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:46:52.600481 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:46:52.621757 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:46:52.628243 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:46:52.648343 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:46:52.648343 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:46:52.628372 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:46:52.661345 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:46:52.641610 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:46:52.646727 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:46:52.678348 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:46:52.701844 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:46:52.701958 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:46:52.708027 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:46:52.712858 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:46:52.717611 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:46:52.725388 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:46:52.739297 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:46:52.747345 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:46:52.759016 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:46:52.761946 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:46:52.767427 systemd[1]: Stopped target timers.target - Timer Units. 
Feb 13 20:46:52.774223 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:46:52.774398 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:46:52.780010 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:46:52.784883 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:46:52.789405 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:46:52.796369 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:46:52.797435 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:46:52.797881 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:46:52.798310 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:46:52.798714 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:46:52.799133 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:46:52.799506 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:46:52.799870 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:46:52.800006 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:46:52.801097 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:46:52.801933 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:46:52.802296 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:46:52.820737 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:46:52.826437 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:46:52.829023 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:46:52.843124 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:46:52.852902 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:46:52.871175 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:46:52.871322 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:46:52.877967 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:46:52.878106 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:46:52.893430 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:46:52.897267 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:46:52.898304 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:46:52.906442 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:46:52.920505 ignition[1112]: INFO : Ignition 2.19.0 Feb 13 20:46:52.920505 ignition[1112]: INFO : Stage: umount Feb 13 20:46:52.920505 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:46:52.920505 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:46:52.920505 ignition[1112]: INFO : umount: umount passed Feb 13 20:46:52.920505 ignition[1112]: INFO : Ignition finished successfully Feb 13 20:46:52.909738 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:46:52.909948 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Feb 13 20:46:52.913468 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:46:52.914489 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:46:52.922061 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:46:52.922137 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:46:52.939856 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:46:52.939953 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:46:52.945135 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:46:52.945192 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:46:52.957988 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:46:52.958043 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:46:52.962679 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:46:52.962723 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:46:52.967123 systemd[1]: Stopped target network.target - Network. Feb 13 20:46:52.969480 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:46:52.969536 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:46:52.974833 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:46:52.983804 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:46:52.988472 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:46:52.991319 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:46:52.993484 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:46:52.996165 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:46:52.996233 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:46:53.005503 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:46:53.005568 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:46:53.010606 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:46:53.010672 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:46:53.011582 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:46:53.011624 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:46:53.012112 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:46:53.012815 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:46:53.014259 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:46:53.030508 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:46:53.030616 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:46:53.035533 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:46:53.035619 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:46:53.036888 systemd-networkd[872]: eth0: DHCPv6 lease lost Feb 13 20:46:53.041818 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:46:53.041927 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:46:53.048930 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Feb 13 20:46:53.048980 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:46:53.072392 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:46:53.079281 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:46:53.079364 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:46:53.084671 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:46:53.084717 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:46:53.101950 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:46:53.104070 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:46:53.113474 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:46:53.135919 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:46:53.136083 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:46:53.145102 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:46:53.145160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:46:53.150368 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:46:53.150409 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:46:53.155205 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:46:53.155252 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:46:53.174997 kernel: hv_netvsc 7c1e521f-9da2-7c1e-521f-9da27c1e521f eth0: Data path switched from VF: enP1269s1 Feb 13 20:46:53.159852 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:46:53.159894 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:46:53.164643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:46:53.164688 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:46:53.184007 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:46:53.184250 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:46:53.184297 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:46:53.184709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:46:53.184744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:46:53.202860 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:46:53.202988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:46:53.216289 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:46:53.216404 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:46:53.466471 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:46:53.466621 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:46:53.473550 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:46:53.478604 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:46:53.480956 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:46:53.490358 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Feb 13 20:46:53.929236 systemd[1]: Switching root. Feb 13 20:46:53.959877 systemd-journald[176]: Journal stopped Feb 13 20:46:41.039386 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 20:46:41.039422 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:46:41.039502 kernel: BIOS-provided physical RAM map: Feb 13 20:46:41.039515 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 20:46:41.039525 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 13 20:46:41.039537 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 13 20:46:41.039551 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Feb 13 20:46:41.039588 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Feb 13 20:46:41.039601 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 13 20:46:41.039613 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 13 20:46:41.039626 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 13 20:46:41.039638 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 13 20:46:41.039650 kernel: printk: bootconsole [earlyser0] enabled Feb 13 20:46:41.039661 kernel: NX (Execute Disable) protection: active Feb 13 20:46:41.039680 kernel: APIC: Static calls initialized Feb 13 20:46:41.039694 kernel: efi: EFI v2.7 by Microsoft Feb 13 20:46:41.039708 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Feb 13 20:46:41.039721 kernel: SMBIOS 3.1.0 present. 
Feb 13 20:46:41.039735 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Feb 13 20:46:41.039749 kernel: Hypervisor detected: Microsoft Hyper-V Feb 13 20:46:41.039763 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 13 20:46:41.039776 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Feb 13 20:46:41.039790 kernel: Hyper-V: Nested features: 0x1e0101 Feb 13 20:46:41.039803 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 13 20:46:41.039818 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 13 20:46:41.039830 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 20:46:41.039842 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 20:46:41.039856 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 13 20:46:41.039868 kernel: tsc: Detected 2593.907 MHz processor Feb 13 20:46:41.039881 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 20:46:41.039893 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 20:46:41.039904 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 13 20:46:41.039916 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 20:46:41.039931 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 20:46:41.039943 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 13 20:46:41.039955 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 13 20:46:41.039967 kernel: Using GB pages for direct mapping Feb 13 20:46:41.039980 kernel: Secure boot disabled Feb 13 20:46:41.039992 kernel: ACPI: Early table checksum verification disabled Feb 13 20:46:41.040005 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 13 20:46:41.040024 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 20:46:41.040040 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 20:46:41.040053 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 13 20:46:41.040065 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 13 20:46:41.040079 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 20:46:41.040094 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 20:46:41.040108 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 20:46:41.040126 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 20:46:41.040140 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 20:46:41.040154 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 20:46:41.040168 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 20:46:41.040182 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 13 20:46:41.040196 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 13 20:46:41.040211 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 13 20:46:41.040225 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 13 20:46:41.040242 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 13 20:46:41.040256 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 13 20:46:41.040270 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 13 20:46:41.040284 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 13 20:46:41.040298 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 13 20:46:41.040313 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 13 20:46:41.040327 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 20:46:41.040341 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 20:46:41.040356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 13 20:46:41.040373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 13 20:46:41.040387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 13 20:46:41.040401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 13 20:46:41.040415 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 13 20:46:41.040428 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 13 20:46:41.040442 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 13 20:46:41.040457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 13 20:46:41.040471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 13 20:46:41.040485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 13 20:46:41.040502 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 13 20:46:41.040515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 13 20:46:41.040530 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 13 20:46:41.040544 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 13 20:46:41.040558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 13 20:46:41.040636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 13 20:46:41.040651 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 13 20:46:41.040666 kernel: NODE_DATA(0) allocated [mem 0x2bfff9000-0x2bfffefff] Feb 13 20:46:41.040680 kernel: Zone ranges: Feb 13 20:46:41.040699 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:46:41.040713 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 20:46:41.040727 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 20:46:41.040741 kernel: Movable zone start for each node Feb 13 20:46:41.040755 kernel: Early memory node ranges Feb 13 20:46:41.040769 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 13 20:46:41.040783 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 13 20:46:41.040797 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 13 20:46:41.040811 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 20:46:41.040829 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 13 20:46:41.040843 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:46:41.040856 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 20:46:41.040870 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Feb 13 20:46:41.040884 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 13 20:46:41.040899 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 13 20:46:41.040914 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 13 20:46:41.040928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 20:46:41.040942 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:46:41.040959 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 13 20:46:41.040973 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 20:46:41.040987 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 13 20:46:41.041001 kernel: Booting paravirtualized kernel on Hyper-V Feb 13 20:46:41.041015 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:46:41.041030 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 20:46:41.041043 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 20:46:41.041057 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 20:46:41.041071 kernel: pcpu-alloc: [0] 0 1 Feb 13 20:46:41.041088 kernel: Hyper-V: PV spinlocks enabled Feb 13 20:46:41.041102 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 20:46:41.041118 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:46:41.041132 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:46:41.041145 kernel: random: crng init done Feb 13 20:46:41.041159 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 20:46:41.041173 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 20:46:41.041187 kernel: Fallback order for Node 0: 0 Feb 13 20:46:41.041205 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 13 20:46:41.041230 kernel: Policy zone: Normal Feb 13 20:46:41.041245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:46:41.041262 kernel: software IO TLB: area num 2. Feb 13 20:46:41.041278 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 310128K reserved, 0K cma-reserved) Feb 13 20:46:41.041293 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 20:46:41.041308 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:46:41.041322 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:46:41.041337 kernel: Dynamic Preempt: voluntary Feb 13 20:46:41.041352 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:46:41.041368 kernel: rcu: RCU event tracing is enabled. Feb 13 20:46:41.041387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 20:46:41.041402 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:46:41.041417 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:46:41.041431 kernel: Tracing variant of Tasks RCU enabled. 
Feb 13 20:46:41.041446 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 20:46:41.041463 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 20:46:41.041478 kernel: Using NULL legacy PIC Feb 13 20:46:41.041493 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 13 20:46:41.041508 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 20:46:41.041523 kernel: Console: colour dummy device 80x25 Feb 13 20:46:41.041537 kernel: printk: console [tty1] enabled Feb 13 20:46:41.041552 kernel: printk: console [ttyS0] enabled Feb 13 20:46:41.041604 kernel: printk: bootconsole [earlyser0] disabled Feb 13 20:46:41.041621 kernel: ACPI: Core revision 20230628 Feb 13 20:46:41.041637 kernel: Failed to register legacy timer interrupt Feb 13 20:46:41.041657 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:46:41.041673 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 20:46:41.041688 kernel: Hyper-V: Using IPI hypercalls Feb 13 20:46:41.041704 kernel: APIC: send_IPI() replaced with hv_send_ipi() Feb 13 20:46:41.041720 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Feb 13 20:46:41.041736 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Feb 13 20:46:41.041751 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Feb 13 20:46:41.041767 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Feb 13 20:46:41.041782 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Feb 13 20:46:41.041800 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Feb 13 20:46:41.041816 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 20:46:41.041831 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 20:46:41.041846 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:46:41.041861 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 20:46:41.041877 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:46:41.041891 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 20:46:41.041906 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Feb 13 20:46:41.041921 kernel: RETBleed: Vulnerable Feb 13 20:46:41.041937 kernel: Speculative Store Bypass: Vulnerable Feb 13 20:46:41.041951 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:46:41.041965 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:46:41.041979 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:46:41.041993 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:46:41.042006 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:46:41.042020 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 20:46:41.042034 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 20:46:41.042047 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 20:46:41.042061 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:46:41.042075 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 13 20:46:41.042092 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 13 20:46:41.042106 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 13 20:46:41.042119 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 13 20:46:41.042133 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:46:41.042147 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:46:41.042161 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:46:41.042176 kernel: landlock: Up and running. Feb 13 20:46:41.042191 kernel: SELinux: Initializing. Feb 13 20:46:41.042207 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:46:41.042221 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:46:41.042235 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 20:46:41.042250 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:46:41.042268 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:46:41.042282 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:46:41.042297 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 20:46:41.042311 kernel: signal: max sigframe size: 3632 Feb 13 20:46:41.042326 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:46:41.042341 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:46:41.042355 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 20:46:41.042369 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:46:41.042383 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:46:41.042400 kernel: .... node #0, CPUs: #1 Feb 13 20:46:41.042415 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 13 20:46:41.042431 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 13 20:46:41.042445 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:46:41.042460 kernel: smpboot: Max logical packages: 1 Feb 13 20:46:41.042474 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 13 20:46:41.042488 kernel: devtmpfs: initialized Feb 13 20:46:41.042503 kernel: x86/mm: Memory block size: 128MB Feb 13 20:46:41.042520 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 13 20:46:41.042535 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:46:41.042550 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 20:46:41.042564 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:46:41.042599 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:46:41.042614 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:46:41.042629 kernel: audit: type=2000 audit(1739479600.027:1): state=initialized audit_enabled=0 res=1 Feb 13 20:46:41.042644 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:46:41.042658 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:46:41.042676 kernel: cpuidle: using governor menu Feb 13 20:46:41.042691 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:46:41.042705 kernel: dca service started, version 1.12.1 Feb 13 20:46:41.042720 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Feb 13 20:46:41.042735 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 20:46:41.042750 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:46:41.042764 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:46:41.042779 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:46:41.042793 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:46:41.042810 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:46:41.042825 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:46:41.042840 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:46:41.042855 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:46:41.042869 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:46:41.042884 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:46:41.042899 kernel: ACPI: Interpreter enabled Feb 13 20:46:41.042914 kernel: ACPI: PM: (supports S0 S5) Feb 13 20:46:41.042929 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:46:41.042946 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:46:41.042961 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 20:46:41.042976 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 13 20:46:41.042991 kernel: iommu: Default domain type: Translated Feb 13 20:46:41.043006 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:46:41.043021 kernel: efivars: Registered efivars operations Feb 13 20:46:41.043034 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:46:41.043047 kernel: PCI: System does not support PCI Feb 13 20:46:41.043062 kernel: vgaarb: loaded Feb 13 20:46:41.043081 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 13 20:46:41.043096 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:46:41.043112 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:46:41.043128 kernel: 
pnp: PnP ACPI init Feb 13 20:46:41.043144 kernel: pnp: PnP ACPI: found 3 devices Feb 13 20:46:41.043160 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:46:41.043175 kernel: NET: Registered PF_INET protocol family Feb 13 20:46:41.043191 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 20:46:41.043207 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 20:46:41.043225 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:46:41.043240 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 20:46:41.043254 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 20:46:41.043269 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 20:46:41.043283 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 20:46:41.043298 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 20:46:41.043313 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:46:41.043327 kernel: NET: Registered PF_XDP protocol family Feb 13 20:46:41.043341 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:46:41.043359 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 20:46:41.043373 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Feb 13 20:46:41.043387 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 20:46:41.043402 kernel: Initialise system trusted keyrings Feb 13 20:46:41.043416 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 20:46:41.043430 kernel: Key type asymmetric registered Feb 13 20:46:41.043444 kernel: Asymmetric key parser 'x509' registered Feb 13 20:46:41.043459 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:46:41.043473 kernel: io scheduler mq-deadline registered Feb 13 20:46:41.043490 kernel: io scheduler kyber registered Feb 13 20:46:41.043504 kernel: io scheduler bfq registered Feb 13 20:46:41.043520 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:46:41.043534 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:46:41.043549 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:46:41.043564 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 20:46:41.043590 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 20:46:41.047080 kernel: rtc_cmos 00:02: registered as rtc0 Feb 13 20:46:41.047219 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T20:46:40 UTC (1739479600) Feb 13 20:46:41.047335 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 13 20:46:41.047356 kernel: intel_pstate: CPU model not supported Feb 13 20:46:41.047370 kernel: efifb: probing for efifb Feb 13 20:46:41.047386 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 20:46:41.047402 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 20:46:41.047417 kernel: efifb: scrolling: redraw Feb 13 20:46:41.047432 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 20:46:41.047451 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:46:41.047466 kernel: fb0: EFI VGA frame buffer device Feb 13 20:46:41.047481 kernel: pstore: Using crash dump compression: deflate Feb 13 20:46:41.047496 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 20:46:41.047510 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:46:41.047524 kernel: Segment Routing with IPv6 Feb 13 20:46:41.047539 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:46:41.047554 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:46:41.047584 kernel: Key type dns_resolver registered Feb 13 20:46:41.047604 kernel: IPI shorthand broadcast: enabled Feb 13 20:46:41.047619 kernel: sched_clock: Marking stable (788002300, 42280900)->(1028633900, -198350700) Feb 13 20:46:41.047634 kernel: registered taskstats version 1 Feb 13 20:46:41.047648 kernel: Loading compiled-in X.509 certificates Feb 13 20:46:41.047663 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:46:41.047677 kernel: Key type .fscrypt registered Feb 13 20:46:41.047691 kernel: Key type fscrypt-provisioning registered Feb 13 20:46:41.047705 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:46:41.047721 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:46:41.047739 kernel: ima: No architecture policies found Feb 13 20:46:41.047755 kernel: clk: Disabling unused clocks Feb 13 20:46:41.047769 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:46:41.047784 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:46:41.047800 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:46:41.047814 kernel: Run /init as init process Feb 13 20:46:41.047828 kernel: with arguments: Feb 13 20:46:41.047843 kernel: /init Feb 13 20:46:41.047857 kernel: with environment: Feb 13 20:46:41.047872 kernel: HOME=/ Feb 13 20:46:41.047892 kernel: TERM=linux Feb 13 20:46:41.047907 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:46:41.047925 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:46:41.047946 systemd[1]: Detected virtualization microsoft. Feb 13 20:46:41.047963 systemd[1]: Detected architecture x86-64. Feb 13 20:46:41.047980 systemd[1]: Running in initrd. Feb 13 20:46:41.047997 systemd[1]: No hostname configured, using default hostname. Feb 13 20:46:41.048015 systemd[1]: Hostname set to . Feb 13 20:46:41.048032 systemd[1]: Initializing machine ID from random generator. 
Feb 13 20:46:41.048048 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:46:41.048064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:46:41.048082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:46:41.048100 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:46:41.048118 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:46:41.048135 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:46:41.048154 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:46:41.048171 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:46:41.048186 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:46:41.048201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:46:41.048216 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:46:41.048230 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:46:41.048245 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:46:41.048264 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:46:41.048280 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:46:41.048296 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:46:41.048312 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:46:41.048327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:46:41.048343 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:46:41.048359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:46:41.048376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:46:41.048394 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:46:41.048410 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:46:41.048426 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:46:41.048442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:46:41.048458 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:46:41.048474 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:46:41.048490 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:46:41.048506 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:46:41.048544 systemd-journald[176]: Collecting audit messages is disabled. Feb 13 20:46:41.048604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:46:41.048620 systemd-journald[176]: Journal started Feb 13 20:46:41.048656 systemd-journald[176]: Runtime Journal (/run/log/journal/8f5e71bcde2045beb057eb63a1bf42b0) is 8.0M, max 158.8M, 150.8M free. Feb 13 20:46:41.060088 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:46:41.062626 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Feb 13 20:46:41.067974 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:46:41.071360 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:46:41.078071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:46:41.082652 systemd-modules-load[177]: Inserted module 'overlay' Feb 13 20:46:41.093833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:46:41.107961 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:46:41.117446 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:46:41.119674 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:46:41.122387 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:46:41.148657 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:46:41.156851 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:46:41.156916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:46:41.163886 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:46:41.172204 systemd-modules-load[177]: Inserted module 'br_netfilter' Feb 13 20:46:41.174340 kernel: Bridge firewalling registered Feb 13 20:46:41.178770 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:46:41.183872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:46:41.190765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:46:41.197278 dracut-cmdline[207]: dracut-dracut-053 Feb 13 20:46:41.200099 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:46:41.229088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:46:41.242170 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:46:41.280742 systemd-resolved[262]: Positive Trust Anchors: Feb 13 20:46:41.280759 systemd-resolved[262]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:46:41.280814 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:46:41.308304 kernel: SCSI subsystem initialized Feb 13 20:46:41.291541 systemd-resolved[262]: Defaulting to hostname 'linux'. Feb 13 20:46:41.308033 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:46:41.320610 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:46:41.317716 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:46:41.330589 kernel: iscsi: registered transport (tcp) Feb 13 20:46:41.352493 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:46:41.352556 kernel: QLogic iSCSI HBA Driver Feb 13 20:46:41.387504 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:46:41.395722 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:46:41.423003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:46:41.423078 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:46:41.426220 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:46:41.465596 kernel: raid6: avx512x4 gen() 18512 MB/s Feb 13 20:46:41.484587 kernel: raid6: avx512x2 gen() 18728 MB/s Feb 13 20:46:41.503584 kernel: raid6: avx512x1 gen() 18647 MB/s Feb 13 20:46:41.521584 kernel: raid6: avx2x4 gen() 18694 MB/s Feb 13 20:46:41.540583 kernel: raid6: avx2x2 gen() 18514 MB/s Feb 13 20:46:41.560151 kernel: raid6: avx2x1 gen() 13888 MB/s Feb 13 20:46:41.560197 kernel: raid6: using algorithm avx512x2 gen() 18728 MB/s Feb 13 20:46:41.581438 kernel: raid6: .... xor() 29873 MB/s, rmw enabled Feb 13 20:46:41.581471 kernel: raid6: using avx512x2 recovery algorithm Feb 13 20:46:41.603590 kernel: xor: automatically using best checksumming function avx Feb 13 20:46:41.754599 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:46:41.764144 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:46:41.774710 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:46:41.791192 systemd-udevd[396]: Using default interface naming scheme 'v255'. Feb 13 20:46:41.795623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:46:41.809719 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:46:41.823976 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Feb 13 20:46:41.849849 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:46:41.860867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:46:41.899286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:46:41.915297 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Feb 13 20:46:41.948485 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:46:41.955236 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:46:41.961914 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:46:41.967806 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:46:41.978651 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:46:41.981858 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:46:42.006586 kernel: hv_vmbus: Vmbus version:5.2 Feb 13 20:46:42.019457 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:46:42.028613 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:46:42.028649 kernel: AES CTR mode by8 optimization enabled Feb 13 20:46:42.043718 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:46:42.044925 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:46:42.050886 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:46:42.064000 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 20:46:42.051343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:46:42.088701 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 20:46:42.088728 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 20:46:42.088755 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 20:46:42.088774 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 20:46:42.074560 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:46:42.079411 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:46:42.099719 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 20:46:42.099746 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 20:46:42.104178 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 20:46:42.104609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:46:42.121559 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 20:46:42.121735 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 20:46:42.117818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:46:42.136314 kernel: PTP clock support registered Feb 13 20:46:42.136347 kernel: scsi host0: storvsc_host_t Feb 13 20:46:42.136555 kernel: scsi host1: storvsc_host_t Feb 13 20:46:42.136611 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 20:46:42.117902 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:46:42.139439 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
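The hv_vmbus entries above record paravirtual drivers (hyperv_keyboard, hv_netvsc, hid_hyperv, hv_storvsc) binding to VMBus channels, each of which the kernel exposes under /sys/bus/vmbus/devices. A small sketch that lists every VMBus device with its class GUID and bound driver; the sysfs layout is as in mainline Linux, and the script simply prints a notice on non-Hyper-V hosts:

    import os

    VMBUS = "/sys/bus/vmbus/devices"

    def read_attr(path: str) -> str:
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return "?"

    if os.path.isdir(VMBUS):
        for dev in sorted(os.listdir(VMBUS)):
            base = os.path.join(VMBUS, dev)
            class_id = read_attr(os.path.join(base, "class_id"))
            driver = os.path.join(base, "driver")
            # "driver" is a symlink only while a driver is bound.
            bound = os.path.basename(os.readlink(driver)) if os.path.islink(driver) else "(unbound)"
            print(f"{dev}  class={class_id}  driver={bound}")
    else:
        print("no VMBus devices: not running as a Hyper-V guest")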
Feb 13 20:46:42.153668 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 20:46:42.153897 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 20:46:42.153952 kernel: hv_vmbus: registering driver hv_utils Feb 13 20:46:42.155663 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 20:46:42.159285 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 20:46:42.159313 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 20:46:42.840719 systemd-resolved[262]: Clock change detected. Flushing caches. Feb 13 20:46:42.862489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:46:42.874556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:46:42.890882 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 20:46:42.893556 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 20:46:42.893581 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 20:46:42.905601 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 20:46:42.922755 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 20:46:42.922984 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 20:46:42.923160 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 20:46:42.923339 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 20:46:42.923495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:46:42.923514 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 20:46:42.907267 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:46:43.005674 kernel: hv_netvsc 7c1e521f-9da2-7c1e-521f-9da27c1e521f eth0: VF slot 1 added Feb 13 20:46:43.016122 kernel: hv_vmbus: registering driver hv_pci Feb 13 20:46:43.016170 kernel: hv_pci 1ad5b0c9-04f5-4263-b0bd-1d4ab754703f: PCI VMBus probing: Using version 0x10004 Feb 13 20:46:43.058777 kernel: hv_pci 1ad5b0c9-04f5-4263-b0bd-1d4ab754703f: PCI host bridge to bus 04f5:00 Feb 13 20:46:43.058970 kernel: pci_bus 04f5:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 13 20:46:43.059159 kernel: pci_bus 04f5:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 20:46:43.059339 kernel: pci 04f5:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 13 20:46:43.059537 kernel: pci 04f5:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 20:46:43.059705 kernel: pci 04f5:00:02.0: enabling Extended Tags Feb 13 20:46:43.059874 kernel: pci 04f5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 04f5:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 13 20:46:43.060045 kernel: pci_bus 04f5:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 20:46:43.060751 kernel: pci 04f5:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 20:46:43.229310 kernel: mlx5_core 04f5:00:02.0: enabling device (0000 -> 0002) Feb 13 20:46:43.452570 kernel: mlx5_core 04f5:00:02.0: firmware version: 14.30.5000 Feb 13 20:46:43.453042 kernel: hv_netvsc 7c1e521f-9da2-7c1e-521f-9da27c1e521f eth0: VF registering: eth1 Feb 13 20:46:43.453243 kernel: mlx5_core 04f5:00:02.0 eth1: joined to eth0 Feb 13 20:46:43.453555 kernel: mlx5_core 04f5:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 20:46:43.459196 kernel: mlx5_core 04f5:00:02.0 enP1269s1: renamed from eth1 Feb 13 20:46:43.851898 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
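The sequence "VF slot 1 added", the mlx5 VF "joined to eth0", and the rename to enP1269s1 above is Azure accelerated networking: an SR-IOV virtual function is slaved to the synthetic hv_netvsc NIC, and the pair shares one MAC address (7c:1e:52:1f:9d:a2 in this log). Grouping interfaces by MAC is a quick heuristic for spotting such pairs; a sketch follows (an observation aid, not how netvsc itself performs the match):

    import os
    from collections import defaultdict

    by_mac = defaultdict(list)
    for ifname in os.listdir("/sys/class/net"):
        try:
            with open(f"/sys/class/net/{ifname}/address") as f:
                by_mac[f.read().strip()].append(ifname)
        except OSError:
            pass  # interface vanished or exposes no address attribute

    for mac, ifaces in sorted(by_mac.items()):
        if len(ifaces) > 1:
            # On Azure this is typically the synthetic NIC plus its VF.
            print(mac, "->", ", ".join(sorted(ifaces)))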
Feb 13 20:46:43.973955 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 20:46:44.020215 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (440) Feb 13 20:46:44.034782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 20:46:44.062204 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (443) Feb 13 20:46:44.075951 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 20:46:44.080421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 20:46:44.093343 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:46:44.108124 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:46:44.113228 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:46:45.120463 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:46:45.120528 disk-uuid[602]: The operation has completed successfully. Feb 13 20:46:45.197022 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:46:45.197132 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:46:45.213318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:46:45.219351 sh[688]: Success Feb 13 20:46:45.266671 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:46:45.589344 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:46:45.603296 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:46:45.605928 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:46:45.628637 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:46:45.628693 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:46:45.632194 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:46:45.634935 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:46:45.637441 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:46:46.229001 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:46:46.232088 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:46:46.243420 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:46:46.248347 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:46:46.269468 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:46:46.269515 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:46:46.269538 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:46:46.295415 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:46:46.305668 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:46:46.312199 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:46:46.318517 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 20:46:46.329389 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:46:46.341770 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:46:46.349383 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:46:46.370134 systemd-networkd[872]: lo: Link UP Feb 13 20:46:46.370143 systemd-networkd[872]: lo: Gained carrier Feb 13 20:46:46.372159 systemd-networkd[872]: Enumeration completed Feb 13 20:46:46.372262 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:46:46.373054 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:46:46.373057 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:46:46.375536 systemd[1]: Reached target network.target - Network. Feb 13 20:46:46.435225 kernel: mlx5_core 04f5:00:02.0 enP1269s1: Link up Feb 13 20:46:46.469213 kernel: hv_netvsc 7c1e521f-9da2-7c1e-521f-9da27c1e521f eth0: Data path switched to VF: enP1269s1 Feb 13 20:46:46.469506 systemd-networkd[872]: enP1269s1: Link UP Feb 13 20:46:46.469680 systemd-networkd[872]: eth0: Link UP Feb 13 20:46:46.469889 systemd-networkd[872]: eth0: Gained carrier Feb 13 20:46:46.469907 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:46:46.477065 systemd-networkd[872]: enP1269s1: Gained carrier Feb 13 20:46:46.496280 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 20:46:47.800406 ignition[854]: Ignition 2.19.0 Feb 13 20:46:47.800418 ignition[854]: Stage: fetch-offline Feb 13 20:46:47.801966 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:46:47.800467 ignition[854]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:46:47.800477 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:46:47.800598 ignition[854]: parsed url from cmdline: "" Feb 13 20:46:47.800603 ignition[854]: no config URL provided Feb 13 20:46:47.815307 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
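Once systemd-networkd logs "Enumeration completed", as above, it keeps per-link state in /run/systemd/netif/links, one env-style file per interface index; networkctl reads the same files. A sketch that prints each link's operational state and the .network file it matched (the directory and key names reflect current systemd and should be treated as an assumption for other versions):

    import os

    LINKS = "/run/systemd/netif/links"

    if os.path.isdir(LINKS):
        for ifindex in sorted((n for n in os.listdir(LINKS) if n.isdigit()), key=int):
            state = {}
            with open(os.path.join(LINKS, ifindex)) as f:
                for line in f:
                    key, _, value = line.rstrip("\n").partition("=")
                    state[key] = value
            print(ifindex,
                  state.get("OPER_STATE", "?"),
                  state.get("NETWORK_FILE", "(no .network matched)"))
    else:
        print("systemd-networkd state directory not present")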
Feb 13 20:46:47.800611 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:46:47.800622 ignition[854]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:46:47.800629 ignition[854]: failed to fetch config: resource requires networking Feb 13 20:46:47.800865 ignition[854]: Ignition finished successfully Feb 13 20:46:47.839938 ignition[880]: Ignition 2.19.0 Feb 13 20:46:47.839950 ignition[880]: Stage: fetch Feb 13 20:46:47.840172 ignition[880]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:46:47.840200 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:46:47.840296 ignition[880]: parsed url from cmdline: "" Feb 13 20:46:47.840299 ignition[880]: no config URL provided Feb 13 20:46:47.840304 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:46:47.840311 ignition[880]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:46:47.840329 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 20:46:47.924076 ignition[880]: GET result: OK Feb 13 20:46:47.924236 ignition[880]: config has been read from IMDS userdata Feb 13 20:46:47.924278 ignition[880]: parsing config with SHA512: caf7d639cd3ccb6e0bfe49b1496754dfd89e09aeaa30d4c083ab40d8f354026c6b94b4caf45b4ee531b02b6dd42d66b550edfea4da86ad2a25aa688dadfc320f Feb 13 20:46:47.930776 unknown[880]: fetched base config from "system" Feb 13 20:46:47.930786 unknown[880]: fetched base config from "system" Feb 13 20:46:47.931252 ignition[880]: fetch: fetch complete Feb 13 20:46:47.930793 unknown[880]: fetched user config from "azure" Feb 13 20:46:47.931258 ignition[880]: fetch: fetch passed Feb 13 20:46:47.933570 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:46:47.931301 ignition[880]: Ignition finished successfully Feb 13 20:46:47.942409 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:46:47.960731 ignition[887]: Ignition 2.19.0 Feb 13 20:46:47.960742 ignition[887]: Stage: kargs Feb 13 20:46:47.960964 ignition[887]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:46:47.964021 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:46:47.960977 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:46:47.961882 ignition[887]: kargs: kargs passed Feb 13 20:46:47.961923 ignition[887]: Ignition finished successfully Feb 13 20:46:47.978749 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:46:47.994668 ignition[893]: Ignition 2.19.0 Feb 13 20:46:47.994679 ignition[893]: Stage: disks Feb 13 20:46:47.996621 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:46:47.994901 ignition[893]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:46:47.999716 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:46:47.994914 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:46:48.003201 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:46:47.995785 ignition[893]: disks: disks passed Feb 13 20:46:48.006398 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:46:47.995826 ignition[893]: Ignition finished successfully Feb 13 20:46:48.011155 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:46:48.013728 systemd[1]: Reached target basic.target - Basic System. 
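In the fetch stage above, Ignition pulls the instance userData from the Azure IMDS endpoint shown in the log and then reports a SHA512 of the config. A minimal reproduction of that request: the Metadata: true header is mandatory for IMDS and the payload comes back base64-encoded; whether Ignition hashes before or after decoding is not visible in the log, so the decode-then-hash order here is an assumption:

    import base64
    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        payload = resp.read()            # base64-encoded userData

    config = base64.b64decode(payload)   # assumption: hash the decoded bytes
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())

This only runs from inside an Azure VM; the link-local address is unreachable elsewhere.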
Feb 13 20:46:48.030321 systemd-networkd[872]: enP1269s1: Gained IPv6LL Feb 13 20:46:48.036414 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:46:48.108270 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 20:46:48.113872 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:46:48.128325 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:46:48.218202 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:46:48.218639 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:46:48.221425 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:46:48.283312 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:46:48.289650 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:46:48.295899 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:46:48.301980 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:46:48.308912 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (912) Feb 13 20:46:48.302020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:46:48.318454 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:46:48.328723 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:46:48.328748 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:46:48.328760 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:46:48.332203 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:46:48.333947 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:46:48.341334 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:46:48.478398 systemd-networkd[872]: eth0: Gained IPv6LL Feb 13 20:46:49.400837 coreos-metadata[914]: Feb 13 20:46:49.400 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 20:46:49.407628 coreos-metadata[914]: Feb 13 20:46:49.407 INFO Fetch successful Feb 13 20:46:49.407628 coreos-metadata[914]: Feb 13 20:46:49.407 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 20:46:49.427390 coreos-metadata[914]: Feb 13 20:46:49.427 INFO Fetch successful Feb 13 20:46:49.431050 coreos-metadata[914]: Feb 13 20:46:49.431 INFO wrote hostname ci-4081.3.1-a-faf44fbcb5 to /sysroot/etc/hostname Feb 13 20:46:49.437232 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:46:49.592084 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:46:49.629665 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:46:49.663083 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:46:49.697807 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:46:51.010420 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:46:51.021428 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
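flatcar-metadata-hostname above performs exactly two fetches: a wireserver probe (http://168.63.129.16/?comp=versions) and the instance name from IMDS, after which it writes the name into the pending root, per the "wrote hostname ... to /sysroot/etc/hostname" line. A condensed sketch of the second step, using the endpoint and API version exactly as logged (the /sysroot prefix only exists in the initrd; drop it on a running system):

    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/name"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        name = resp.read().decode().strip()

    # Written into the not-yet-pivoted root, as in the log above.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(name + "\n")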
Feb 13 20:46:51.028359 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:46:51.039198 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:46:51.039964 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:46:51.067547 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:46:51.074127 ignition[1030]: INFO : Ignition 2.19.0 Feb 13 20:46:51.074127 ignition[1030]: INFO : Stage: mount Feb 13 20:46:51.080870 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:46:51.080870 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:46:51.080870 ignition[1030]: INFO : mount: mount passed Feb 13 20:46:51.080870 ignition[1030]: INFO : Ignition finished successfully Feb 13 20:46:51.076147 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:46:51.099280 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:46:51.113344 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:46:51.124198 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1043) Feb 13 20:46:51.128195 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:46:51.128228 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:46:51.132743 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:46:51.138198 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:46:51.139317 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:46:51.162667 ignition[1059]: INFO : Ignition 2.19.0 Feb 13 20:46:51.164959 ignition[1059]: INFO : Stage: files Feb 13 20:46:51.164959 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:46:51.164959 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:46:51.172585 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:46:51.212609 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:46:51.216828 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:46:51.343675 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:46:51.347436 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:46:51.347436 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:46:51.344198 unknown[1059]: wrote ssh authorized keys file for user: core Feb 13 20:46:51.373078 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:46:51.377871 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:46:51.377871 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:46:51.377871 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:46:51.601800 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 20:46:51.775734 ignition[1059]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:46:51.781798 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:46:52.208417 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 20:46:52.561434 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:46:52.561434 ignition[1059]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 20:46:52.572952 
ignition[1059]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:46:52.572952 ignition[1059]: INFO : files: files passed Feb 13 20:46:52.572952 ignition[1059]: INFO : Ignition finished successfully Feb 13 20:46:52.569762 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:46:52.600481 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:46:52.621757 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:46:52.628243 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:46:52.648343 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:46:52.648343 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:46:52.628372 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:46:52.661345 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:46:52.641610 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:46:52.646727 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:46:52.678348 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:46:52.701844 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:46:52.701958 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:46:52.708027 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:46:52.712858 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:46:52.717611 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:46:52.725388 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:46:52.739297 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:46:52.747345 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:46:52.759016 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:46:52.761946 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:46:52.767427 systemd[1]: Stopped target timers.target - Timer Units. 
Feb 13 20:46:52.774223 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:46:52.774398 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:46:52.780010 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:46:52.784883 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:46:52.789405 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:46:52.796369 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:46:52.797435 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:46:52.797881 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:46:52.798310 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:46:52.798714 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:46:52.799133 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:46:52.799506 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:46:52.799870 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:46:52.800006 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:46:52.801097 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:46:52.801933 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:46:52.802296 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:46:52.820737 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:46:52.826437 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:46:52.829023 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:46:52.843124 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:46:52.852902 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:46:52.871175 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:46:52.871322 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:46:52.877967 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:46:52.878106 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:46:52.893430 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:46:52.897267 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:46:52.898304 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:46:52.906442 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:46:52.920505 ignition[1112]: INFO : Ignition 2.19.0 Feb 13 20:46:52.920505 ignition[1112]: INFO : Stage: umount Feb 13 20:46:52.920505 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:46:52.920505 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:46:52.920505 ignition[1112]: INFO : umount: umount passed Feb 13 20:46:52.920505 ignition[1112]: INFO : Ignition finished successfully Feb 13 20:46:52.909738 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:46:52.909948 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Feb 13 20:46:52.913468 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:46:52.914489 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:46:52.922061 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:46:52.922137 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:46:52.939856 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:46:52.939953 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:46:52.945135 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:46:52.945192 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:46:52.957988 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:46:52.958043 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:46:52.962679 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:46:52.962723 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:46:52.967123 systemd[1]: Stopped target network.target - Network. Feb 13 20:46:52.969480 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:46:52.969536 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:46:52.974833 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:46:52.983804 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:46:52.988472 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:46:52.991319 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:46:52.993484 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:46:52.996165 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:46:52.996233 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:46:53.005503 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:46:53.005568 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:46:53.010606 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:46:53.010672 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:46:53.011582 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:46:53.011624 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:46:53.012112 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:46:53.012815 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:46:53.014259 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:46:53.030508 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:46:53.030616 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:46:53.035533 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:46:53.035619 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:46:53.036888 systemd-networkd[872]: eth0: DHCPv6 lease lost Feb 13 20:46:53.041818 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:46:53.041927 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:46:53.048930 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Feb 13 20:46:53.048980 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:46:53.072392 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:46:53.079281 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:46:53.079364 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:46:53.084671 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:46:53.084717 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:46:53.101950 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:46:53.104070 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:46:53.113474 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:46:53.135919 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:46:53.136083 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:46:53.145102 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:46:53.145160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:46:53.150368 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:46:53.150409 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:46:53.155205 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:46:53.155252 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:46:53.174997 kernel: hv_netvsc 7c1e521f-9da2-7c1e-521f-9da27c1e521f eth0: Data path switched from VF: enP1269s1 Feb 13 20:46:53.159852 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:46:53.159894 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:46:53.164643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:46:53.164688 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:46:53.184007 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:46:53.184250 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:46:53.184297 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:46:53.184709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:46:53.184744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:46:53.202860 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:46:53.202988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:46:53.216289 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:46:53.216404 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:46:53.466471 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:46:53.466621 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:46:53.473550 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:46:53.478604 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:46:53.480956 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:46:53.490358 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Feb 13 20:46:53.929236 systemd[1]: Switching root. Feb 13 20:46:53.959877 systemd-journald[176]: Journal stopped Feb 13 20:46:59.431928 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Feb 13 20:46:59.431969 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:46:59.431986 kernel: SELinux: policy capability open_perms=1 Feb 13 20:46:59.432000 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:46:59.432013 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:46:59.432027 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:46:59.432043 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:46:59.432060 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:46:59.432075 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:46:59.432089 kernel: audit: type=1403 audit(1739479615.180:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:46:59.432105 systemd[1]: Successfully loaded SELinux policy in 169.075ms. Feb 13 20:46:59.432122 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.544ms. Feb 13 20:46:59.432139 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:46:59.432155 systemd[1]: Detected virtualization microsoft. Feb 13 20:46:59.432175 systemd[1]: Detected architecture x86-64. Feb 13 20:46:59.448156 systemd[1]: Detected first boot. Feb 13 20:46:59.448205 systemd[1]: Hostname set to . Feb 13 20:46:59.448225 systemd[1]: Initializing machine ID from random generator. Feb 13 20:46:59.448240 zram_generator::config[1171]: No configuration found. Feb 13 20:46:59.448265 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:46:59.448280 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:46:59.448296 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 20:46:59.448313 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:46:59.448333 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:46:59.448348 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:46:59.448367 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:46:59.448386 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:46:59.448404 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:46:59.448421 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:46:59.448436 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:46:59.448452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:46:59.448468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:46:59.448485 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:46:59.448506 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
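The long "systemd 255 running in system mode (...)" line above encodes compile-time features as +NAME/-NAME tokens plus a few key=value settings. A trivial parser for that convention, fed the feature string copied from the log:

    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified")

    enabled  = [t[1:] for t in FEATURES.split() if t.startswith("+")]
    disabled = [t[1:] for t in FEATURES.split() if t.startswith("-")]
    settings = dict(t.split("=", 1) for t in FEATURES.split() if "=" in t)

    print("enabled: ", ", ".join(enabled))
    print("disabled:", ", ".join(disabled))
    print("settings:", settings)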
Feb 13 20:46:59.448522 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:46:59.448538 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:46:59.448555 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:46:59.448570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:46:59.448591 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:46:59.448607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:46:59.448630 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:46:59.448649 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:46:59.448670 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:46:59.448689 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:46:59.448709 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:46:59.448727 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:46:59.448745 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:46:59.448762 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:46:59.448780 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:46:59.448800 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:46:59.448817 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:46:59.448834 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:46:59.448853 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:46:59.448871 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:46:59.448892 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:46:59.448909 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:46:59.448928 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:46:59.448946 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:46:59.448963 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:46:59.448980 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:46:59.449000 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:46:59.449019 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:46:59.449039 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:46:59.449058 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:46:59.449077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:46:59.449096 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:46:59.449114 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:46:59.449133 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 13 20:46:59.449152 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 20:46:59.449171 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 20:46:59.456142 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:46:59.456175 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:46:59.456215 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:46:59.456233 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:46:59.456286 systemd-journald[1274]: Collecting audit messages is disabled. Feb 13 20:46:59.456320 kernel: loop: module loaded Feb 13 20:46:59.456345 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:46:59.456366 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:46:59.456377 kernel: fuse: init (API version 7.39) Feb 13 20:46:59.456400 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:46:59.456424 systemd-journald[1274]: Journal started Feb 13 20:46:59.456458 systemd-journald[1274]: Runtime Journal (/run/log/journal/22690bceb1aa4cad95a72943df26f77b) is 8.0M, max 158.8M, 150.8M free. Feb 13 20:46:59.470154 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:46:59.473372 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:46:59.476455 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:46:59.478853 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:46:59.481738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:46:59.484586 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:46:59.487562 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:46:59.491580 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:46:59.496771 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:46:59.496967 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:46:59.504766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:46:59.504966 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:46:59.508119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:46:59.508320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:46:59.511690 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:46:59.511875 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:46:59.515243 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:46:59.515499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:46:59.518562 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:46:59.523884 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:46:59.528663 kernel: ACPI: bus type drm_connector registered Feb 13 20:46:59.529453 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 20:46:59.533374 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:46:59.536816 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:46:59.557385 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:46:59.567319 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:46:59.576280 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:46:59.580688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:46:59.589326 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:46:59.602344 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:46:59.605825 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:46:59.614007 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:46:59.616881 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:46:59.618334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:46:59.627418 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:46:59.632449 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:46:59.635901 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:46:59.639437 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:46:59.642819 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:46:59.648142 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:46:59.657429 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:46:59.660813 systemd-journald[1274]: Time spent on flushing to /var/log/journal/22690bceb1aa4cad95a72943df26f77b is 18.623ms for 950 entries. Feb 13 20:46:59.660813 systemd-journald[1274]: System Journal (/var/log/journal/22690bceb1aa4cad95a72943df26f77b) is 8.0M, max 2.6G, 2.6G free. Feb 13 20:46:59.765767 systemd-journald[1274]: Received client request to flush runtime journal. Feb 13 20:46:59.681997 udevadm[1339]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:46:59.768035 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:46:59.774894 systemd-tmpfiles[1331]: ACLs are not supported, ignoring. Feb 13 20:46:59.774918 systemd-tmpfiles[1331]: ACLs are not supported, ignoring. Feb 13 20:46:59.782454 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:46:59.791398 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:46:59.794364 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:46:59.992779 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:47:00.001373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
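The journald lines above show the runtime journal (in /run) being flushed to persistent storage under /var/log/journal once the root filesystem is writable, with 950 entries moved in 18.623 ms. Combined runtime and persistent usage can then be queried with journalctl's --disk-usage flag; a thin wrapper (the exact wording of the one-line reply is not guaranteed, so it is printed verbatim rather than parsed):

    import subprocess

    result = subprocess.run(
        ["journalctl", "--disk-usage"],
        capture_output=True, text=True, check=True,
    )
    # e.g. "Archived and active journals take up 56.0M in the file system."
    print(result.stdout.strip())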
Feb 13 20:47:00.019451 systemd-tmpfiles[1352]: ACLs are not supported, ignoring. Feb 13 20:47:00.019478 systemd-tmpfiles[1352]: ACLs are not supported, ignoring. Feb 13 20:47:00.024821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:47:01.109787 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:47:01.122520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:47:01.146113 systemd-udevd[1358]: Using default interface naming scheme 'v255'. Feb 13 20:47:01.460802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:47:01.474348 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:47:01.532447 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Feb 13 20:47:01.634753 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:47:01.645155 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:47:01.660969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:47:01.684262 kernel: hv_vmbus: registering driver hv_balloon Feb 13 20:47:01.684337 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 20:47:01.692259 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 20:47:01.695966 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 20:47:01.701201 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 20:47:01.705062 kernel: Console: switching to colour dummy device 80x25 Feb 13 20:47:01.712946 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:47:01.720983 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:47:01.728719 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:47:01.729036 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:47:01.746486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:47:01.908116 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1369) Feb 13 20:47:01.994757 systemd-networkd[1365]: lo: Link UP Feb 13 20:47:01.994771 systemd-networkd[1365]: lo: Gained carrier Feb 13 20:47:02.008002 systemd-networkd[1365]: Enumeration completed Feb 13 20:47:02.011278 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:47:02.014424 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:47:02.014432 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:47:02.030841 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:47:02.069700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Feb 13 20:47:02.079195 kernel: mlx5_core 04f5:00:02.0 enP1269s1: Link up Feb 13 20:47:02.098306 kernel: hv_netvsc 7c1e521f-9da2-7c1e-521f-9da27c1e521f eth0: Data path switched to VF: enP1269s1 Feb 13 20:47:02.102695 systemd-networkd[1365]: enP1269s1: Link UP Feb 13 20:47:02.103250 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Feb 13 20:47:02.103528 systemd-networkd[1365]: eth0: Link UP Feb 13 20:47:02.103603 systemd-networkd[1365]: eth0: Gained carrier Feb 13 20:47:02.104821 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:47:02.112888 systemd-networkd[1365]: enP1269s1: Gained carrier Feb 13 20:47:02.123226 systemd-networkd[1365]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 20:47:02.301110 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:47:02.307403 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:47:02.438265 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:47:02.448712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:47:02.466349 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:47:02.470997 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:47:02.481341 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:47:02.485829 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:47:02.515984 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:47:02.520947 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:47:02.524527 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:47:02.524557 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:47:02.527373 systemd[1]: Reached target machines.target - Containers. Feb 13 20:47:02.530584 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:47:02.540401 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:47:02.544947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:47:02.550165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:47:02.560361 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:47:02.567351 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:47:02.575493 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:47:02.579632 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:47:02.627821 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:47:02.643198 kernel: loop0: detected capacity change from 0 to 142488 Feb 13 20:47:02.667429 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
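The sequence above is the Azure accelerated-networking pattern: the hv_netvsc synthetic NIC (eth0) switches its data path to the Mellanox VF (enP1269s1), and eth0 then acquires 10.200.8.38/24 over DHCPv4 from 168.63.129.16, the Azure wireserver. A hedged sketch for reading that link state back from userspace; the interface name comes from the log, and networkctl must be installed:

    import subprocess

    # Query systemd-networkd (the manager started in the log above) for a
    # link's operational state, addresses and DHCP lease details.
    def link_status(ifname: str = "eth0") -> str:
        result = subprocess.run(
            ["networkctl", "status", ifname, "--no-pager"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    print(link_status())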
Feb 13 20:47:02.668438 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:47:02.757259 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:47:02.776235 kernel: loop1: detected capacity change from 0 to 210664 Feb 13 20:47:02.809384 kernel: loop2: detected capacity change from 0 to 31056 Feb 13 20:47:02.921207 kernel: loop3: detected capacity change from 0 to 140768 Feb 13 20:47:03.019205 kernel: loop4: detected capacity change from 0 to 142488 Feb 13 20:47:03.074207 kernel: loop5: detected capacity change from 0 to 210664 Feb 13 20:47:03.081206 kernel: loop6: detected capacity change from 0 to 31056 Feb 13 20:47:03.087216 kernel: loop7: detected capacity change from 0 to 140768 Feb 13 20:47:03.098650 (sd-merge)[1476]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 20:47:03.099256 (sd-merge)[1476]: Merged extensions into '/usr'. Feb 13 20:47:03.103122 systemd[1]: Reloading requested from client PID 1463 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:47:03.103139 systemd[1]: Reloading... Feb 13 20:47:03.151216 zram_generator::config[1500]: No configuration found. Feb 13 20:47:03.262400 systemd-networkd[1365]: enP1269s1: Gained IPv6LL Feb 13 20:47:03.262663 systemd-networkd[1365]: eth0: Gained IPv6LL Feb 13 20:47:03.316595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:47:03.387421 systemd[1]: Reloading finished in 283 ms. Feb 13 20:47:03.403071 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:47:03.406900 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:47:03.416373 systemd[1]: Starting ensure-sysext.service... Feb 13 20:47:03.421348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:47:03.433346 systemd[1]: Reloading requested from client PID 1570 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:47:03.433485 systemd[1]: Reloading... Feb 13 20:47:03.447639 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:47:03.448156 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:47:03.449438 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:47:03.449872 systemd-tmpfiles[1571]: ACLs are not supported, ignoring. Feb 13 20:47:03.449968 systemd-tmpfiles[1571]: ACLs are not supported, ignoring. Feb 13 20:47:03.463860 systemd-tmpfiles[1571]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:47:03.463873 systemd-tmpfiles[1571]: Skipping /boot Feb 13 20:47:03.475952 systemd-tmpfiles[1571]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:47:03.476088 systemd-tmpfiles[1571]: Skipping /boot Feb 13 20:47:03.529307 zram_generator::config[1602]: No configuration found. Feb 13 20:47:03.656059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:47:03.728259 systemd[1]: Reloading finished in 294 ms. 
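The (sd-merge) step above overlays four extension images onto /usr, which is why systemd immediately reloads itself: unit files for containerd, docker and the kubelet only become visible after the merge. A sketch, assuming the systemd-sysext tool from the same systemd release is present, to show the merge state:

    import subprocess

    # 'systemd-sysext status' lists each hierarchy (/usr, /opt) together
    # with the extension images currently merged into it -- here
    # containerd-flatcar, docker-flatcar, kubernetes and oem-azure.
    out = subprocess.run(["systemd-sysext", "status"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)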
Feb 13 20:47:03.743831 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:47:03.757373 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:47:03.768371 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:47:03.782403 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:47:03.788341 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:47:03.798320 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:47:03.806035 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:47:03.806793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:47:03.814080 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:47:03.823845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:47:03.847305 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:47:03.849846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:47:03.850014 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:47:03.851228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:47:03.851442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:47:03.863533 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:47:03.863749 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:47:03.882843 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:47:03.888269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:47:03.889089 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:47:03.904978 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:47:03.905406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:47:03.912444 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:47:03.920217 augenrules[1703]: No rules Feb 13 20:47:03.923896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:47:03.928532 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:47:03.928698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:47:03.928826 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:47:03.933106 systemd-resolved[1678]: Positive Trust Anchors: Feb 13 20:47:03.933722 systemd-resolved[1678]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:47:03.933784 systemd-resolved[1678]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:47:03.937299 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:47:03.940980 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:47:03.944593 systemd-resolved[1678]: Using system hostname 'ci-4081.3.1-a-faf44fbcb5'. Feb 13 20:47:03.944808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:47:03.945015 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:47:03.954392 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:47:03.958277 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:47:03.958513 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:47:03.971046 systemd[1]: Reached target network.target - Network. Feb 13 20:47:03.973506 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:47:03.976227 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:47:03.979328 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:47:03.979554 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:47:03.985362 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:47:03.991363 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:47:03.995395 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:47:04.007314 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:47:04.009878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:47:04.009954 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:47:04.012476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:47:04.013396 systemd[1]: Finished ensure-sysext.service. Feb 13 20:47:04.016568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:47:04.016765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:47:04.019894 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:47:04.020059 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:47:04.022918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:47:04.023085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:47:04.026577 systemd[1]: modprobe@loop.service: Deactivated successfully. 
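The positive trust anchor systemd-resolved reports is the root zone's KSK-2017 DS record (key tag 20326, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256); the negative anchors that follow it exempt private and special-use zones from DNSSEC validation. A pure-illustration sketch of the DS record's field layout, copied from the log:

    # Field layout of the trust anchor logged above; nothing here talks
    # to resolved itself.
    ds_record = (". IN DS 20326 8 2 "
                 "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds_record.split()
    assert owner == "."             # the DNS root zone
    assert int(algorithm) == 8      # RSASHA256
    assert int(digest_type) == 2    # SHA-256 digest of the root DNSKEY
    print(f"trust anchor: key tag {key_tag}, digest {digest[:16]}...")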
Feb 13 20:47:04.026807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:47:04.035370 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:47:04.035570 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:47:04.238325 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:47:04.242224 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:47:09.876070 ldconfig[1459]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:47:09.884707 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:47:09.895387 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:47:09.908664 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:47:09.911774 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:47:09.914634 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:47:09.917853 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:47:09.921048 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:47:09.923702 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:47:09.927051 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:47:09.930169 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:47:09.930225 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:47:09.932450 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:47:09.935805 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:47:09.939932 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:47:09.943887 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:47:09.949072 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:47:09.951730 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:47:09.954103 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:47:09.956897 systemd[1]: System is tainted: cgroupsv1 Feb 13 20:47:09.956948 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:47:09.956981 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:47:09.959194 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 20:47:09.964282 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:47:09.971432 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:47:09.981408 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:47:09.988417 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
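The ldconfig complaint above ("/lib/ld.so.conf is not an ELF file") is about the ELF magic bytes: ld.so.conf is a plain-text include directive, not a binary, so it naturally fails that check. A small sketch of the same test, assuming an ordinary Linux filesystem layout:

    # Every ELF object begins with the four magic bytes 0x7f 'E' 'L' 'F'.
    def is_elf(path: str) -> bool:
        with open(path, "rb") as f:
            return f.read(4) == b"\x7fELF"

    print(is_elf("/lib/ld.so.conf"))   # False: text file, not a binary
    print(is_elf("/bin/ls"))           # True on a typical Linux install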
Feb 13 20:47:10.003382 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:47:10.006266 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:47:10.006313 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 20:47:10.010396 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 20:47:10.020175 jq[1746]: false Feb 13 20:47:10.017419 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 20:47:10.025771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:47:10.036339 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:47:10.042921 (chronyd)[1742]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 20:47:10.047378 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:47:10.051816 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:47:10.059300 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:47:10.072344 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:47:10.075345 chronyd[1765]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 20:47:10.083761 KVP[1751]: KVP starting; pid is:1751 Feb 13 20:47:10.088224 chronyd[1765]: Timezone right/UTC failed leap second check, ignoring Feb 13 20:47:10.087381 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:47:10.088385 chronyd[1765]: Loaded seccomp filter (level 2) Feb 13 20:47:10.102663 kernel: hv_utils: KVP IC version 4.0 Feb 13 20:47:10.097340 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:47:10.103016 KVP[1751]: KVP LIC Version: 3.1 Feb 13 20:47:10.108511 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:47:10.115501 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:47:10.124700 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 20:47:10.134555 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:47:10.134851 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:47:10.137558 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:47:10.137856 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:47:10.154388 jq[1777]: true Feb 13 20:47:10.149667 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:47:10.149951 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
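chronyd above starts with NTS support and a level-2 seccomp filter, rejecting the right/UTC timezone's leap-second data and carrying on. Once it is running, its sync state can be read back through the CLI; a sketch assuming the chronyc client is installed:

    import subprocess

    # 'chronyc tracking' reports the reference source, offset and skew of
    # the chronyd instance started in the log above.
    def chrony_tracking() -> dict:
        out = subprocess.run(["chronyc", "tracking"],
                             capture_output=True, text=True, check=True)
        fields = {}
        for line in out.stdout.splitlines():
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
        return fields

    print(chrony_tracking().get("Reference ID"))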
Feb 13 20:47:10.163249 extend-filesystems[1749]: Found loop4 Feb 13 20:47:10.163249 extend-filesystems[1749]: Found loop5 Feb 13 20:47:10.163249 extend-filesystems[1749]: Found loop6 Feb 13 20:47:10.163249 extend-filesystems[1749]: Found loop7 Feb 13 20:47:10.163249 extend-filesystems[1749]: Found sda Feb 13 20:47:10.163249 extend-filesystems[1749]: Found sda1 Feb 13 20:47:10.163249 extend-filesystems[1749]: Found sda2 Feb 13 20:47:10.163249 extend-filesystems[1749]: Found sda3 Feb 13 20:47:10.163249 extend-filesystems[1749]: Found usr Feb 13 20:47:10.163249 extend-filesystems[1749]: Found sda4 Feb 13 20:47:10.163249 extend-filesystems[1749]: Found sda6 Feb 13 20:47:10.163249 extend-filesystems[1749]: Found sda7 Feb 13 20:47:10.163249 extend-filesystems[1749]: Found sda9 Feb 13 20:47:10.163249 extend-filesystems[1749]: Checking size of /dev/sda9 Feb 13 20:47:10.223759 extend-filesystems[1749]: Old size kept for /dev/sda9 Feb 13 20:47:10.223759 extend-filesystems[1749]: Found sr0 Feb 13 20:47:10.163940 dbus-daemon[1745]: [system] SELinux support is enabled Feb 13 20:47:10.179446 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:47:10.238910 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:47:10.241653 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:47:10.265882 (ntainerd)[1789]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:47:10.267126 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:47:10.267171 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:47:10.271664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:47:10.271690 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:47:10.273639 update_engine[1774]: I20250213 20:47:10.273566 1774 main.cc:92] Flatcar Update Engine starting Feb 13 20:47:10.300550 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:47:10.306208 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:47:10.311225 jq[1788]: true Feb 13 20:47:10.324505 update_engine[1774]: I20250213 20:47:10.309870 1774 update_check_scheduler.cc:74] Next update check in 11m24s Feb 13 20:47:10.312344 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:47:10.321784 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:47:10.336323 tar[1782]: linux-amd64/helm Feb 13 20:47:10.352708 systemd-logind[1766]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:47:10.360742 systemd-logind[1766]: New seat seat0. Feb 13 20:47:10.361509 systemd[1]: Started systemd-logind.service - User Login Management. 
Feb 13 20:47:10.395567 coreos-metadata[1744]: Feb 13 20:47:10.395 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 20:47:10.400735 coreos-metadata[1744]: Feb 13 20:47:10.400 INFO Fetch successful Feb 13 20:47:10.402021 coreos-metadata[1744]: Feb 13 20:47:10.401 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 20:47:10.409927 coreos-metadata[1744]: Feb 13 20:47:10.409 INFO Fetch successful Feb 13 20:47:10.412889 coreos-metadata[1744]: Feb 13 20:47:10.410 INFO Fetching http://168.63.129.16/machine/9d8f6440-dfbe-46a5-bce0-67a2eedf0775/8eb6a154%2D5be3%2D44d7%2Dacb6%2D0ac9a23d2fdb.%5Fci%2D4081.3.1%2Da%2Dfaf44fbcb5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 20:47:10.412889 coreos-metadata[1744]: Feb 13 20:47:10.412 INFO Fetch successful Feb 13 20:47:10.414247 coreos-metadata[1744]: Feb 13 20:47:10.413 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 20:47:10.430086 coreos-metadata[1744]: Feb 13 20:47:10.430 INFO Fetch successful Feb 13 20:47:10.484206 bash[1835]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:47:10.486631 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:47:10.529418 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1820) Feb 13 20:47:10.525441 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:47:10.538451 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:47:10.545587 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:47:10.649752 sshd_keygen[1778]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:47:10.700337 locksmithd[1810]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:47:10.722673 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:47:10.741605 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:47:10.759587 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 20:47:10.790846 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:47:10.791172 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:47:10.804495 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:47:10.820348 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 20:47:10.832215 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:47:10.847439 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:47:10.865305 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:47:10.868354 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:47:11.186409 tar[1782]: linux-amd64/LICENSE Feb 13 20:47:11.186570 tar[1782]: linux-amd64/README.md Feb 13 20:47:11.200282 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:47:11.728155 containerd[1789]: time="2025-02-13T20:47:11.728069100Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:47:11.749988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
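coreos-metadata above talks to two distinct endpoints: the wireserver (168.63.129.16) for the goal state, and the instance metadata service at 169.254.169.254 for the VM size. A sketch of that last IMDS call; the required Metadata header is the only non-obvious part, and it only works from inside an Azure VM:

    import urllib.request

    # Same request as the final coreos-metadata fetch in the log above.
    # IMDS rejects requests without the 'Metadata: true' header.
    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        print(resp.read().decode())   # plain-text SKU name of this VM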
Feb 13 20:47:11.764017 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:47:11.770958 containerd[1789]: time="2025-02-13T20:47:11.770416300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.771882900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.771919300Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.771940500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.772104100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.772123900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.772215800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.772233200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.772470800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.772491700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.772510300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:47:11.772782 containerd[1789]: time="2025-02-13T20:47:11.772525000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:47:11.773230 containerd[1789]: time="2025-02-13T20:47:11.772605700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:47:11.773230 containerd[1789]: time="2025-02-13T20:47:11.772854400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:47:11.773230 containerd[1789]: time="2025-02-13T20:47:11.773111900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:47:11.773230 containerd[1789]: time="2025-02-13T20:47:11.773133300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:47:11.773375 containerd[1789]: time="2025-02-13T20:47:11.773269700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:47:11.773375 containerd[1789]: time="2025-02-13T20:47:11.773329700Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:47:11.810293 containerd[1789]: time="2025-02-13T20:47:11.809403800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:47:11.810293 containerd[1789]: time="2025-02-13T20:47:11.809482500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:47:11.810293 containerd[1789]: time="2025-02-13T20:47:11.809510700Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:47:11.810293 containerd[1789]: time="2025-02-13T20:47:11.809532300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:47:11.810293 containerd[1789]: time="2025-02-13T20:47:11.809551800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:47:11.810293 containerd[1789]: time="2025-02-13T20:47:11.809722000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:47:11.810293 containerd[1789]: time="2025-02-13T20:47:11.810164000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810324300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810345400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810361300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810379100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810396600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810412100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810433900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810453800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810473200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810490300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810508300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810536100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810555000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.810624 containerd[1789]: time="2025-02-13T20:47:11.810573700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810591600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810609300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810628600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810645300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810663500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810681400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810701500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810717500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810743400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810763600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810786100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810814400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810830900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 20:47:11.811088 containerd[1789]: time="2025-02-13T20:47:11.810846500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:47:11.811677 containerd[1789]: time="2025-02-13T20:47:11.810911600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:47:11.811677 containerd[1789]: time="2025-02-13T20:47:11.810936600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:47:11.811677 containerd[1789]: time="2025-02-13T20:47:11.810952200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:47:11.811677 containerd[1789]: time="2025-02-13T20:47:11.810970100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:47:11.811677 containerd[1789]: time="2025-02-13T20:47:11.810987800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811677 containerd[1789]: time="2025-02-13T20:47:11.811005500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:47:11.811677 containerd[1789]: time="2025-02-13T20:47:11.811025700Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:47:11.811677 containerd[1789]: time="2025-02-13T20:47:11.811048000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:47:11.811953 containerd[1789]: time="2025-02-13T20:47:11.811597400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:47:11.811953 containerd[1789]: time="2025-02-13T20:47:11.811708000Z" level=info msg="Connect containerd service" Feb 13 20:47:11.811953 containerd[1789]: time="2025-02-13T20:47:11.811756400Z" level=info msg="using legacy CRI server" Feb 13 20:47:11.811953 containerd[1789]: time="2025-02-13T20:47:11.811780000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:47:11.811953 containerd[1789]: time="2025-02-13T20:47:11.811937900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:47:11.815356 containerd[1789]: time="2025-02-13T20:47:11.815235400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:47:11.822299 containerd[1789]: time="2025-02-13T20:47:11.815584800Z" level=info msg="Start subscribing containerd event" Feb 13 20:47:11.822299 containerd[1789]: time="2025-02-13T20:47:11.815644100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:47:11.822299 containerd[1789]: time="2025-02-13T20:47:11.815666300Z" level=info msg="Start recovering state" Feb 13 20:47:11.822299 containerd[1789]: time="2025-02-13T20:47:11.815719400Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:47:11.822299 containerd[1789]: time="2025-02-13T20:47:11.815744300Z" level=info msg="Start event monitor" Feb 13 20:47:11.822299 containerd[1789]: time="2025-02-13T20:47:11.815760300Z" level=info msg="Start snapshots syncer" Feb 13 20:47:11.822299 containerd[1789]: time="2025-02-13T20:47:11.815771300Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:47:11.822299 containerd[1789]: time="2025-02-13T20:47:11.815782000Z" level=info msg="Start streaming server" Feb 13 20:47:11.815978 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:47:11.819644 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:47:11.824805 systemd[1]: Startup finished in 511ms (firmware) + 31.198s (loader) + 14.568s (kernel) + 16.811s (userspace) = 1min 3.091s. Feb 13 20:47:11.827871 containerd[1789]: time="2025-02-13T20:47:11.827845300Z" level=info msg="containerd successfully booted in 0.100898s" Feb 13 20:47:12.065281 login[1905]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:47:12.069234 login[1906]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:47:12.080583 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
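containerd above reports booting in roughly 0.1s and serving on /run/containerd/containerd.sock (both the GRPC and TTRPC listeners). A minimal liveness probe, assuming the default socket path from the config dump, that checks the endpoint is accepting clients without needing a containerd client library:

    import socket

    # Connect-and-close against the unix socket containerd announced in
    # the log above; success means the daemon is accepting connections.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(1.0)
    s.connect("/run/containerd/containerd.sock")
    s.close()
    print("containerd socket is accepting connections")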
Feb 13 20:47:12.090232 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:47:12.097248 systemd-logind[1766]: New session 2 of user core. Feb 13 20:47:12.104327 systemd-logind[1766]: New session 1 of user core. Feb 13 20:47:12.111568 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:47:12.124234 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:47:12.128323 (systemd)[1942]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:47:12.364663 systemd[1942]: Queued start job for default target default.target. Feb 13 20:47:12.365174 systemd[1942]: Created slice app.slice - User Application Slice. Feb 13 20:47:12.365229 systemd[1942]: Reached target paths.target - Paths. Feb 13 20:47:12.365247 systemd[1942]: Reached target timers.target - Timers. Feb 13 20:47:12.373135 systemd[1942]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:47:12.382636 systemd[1942]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:47:12.382703 systemd[1942]: Reached target sockets.target - Sockets. Feb 13 20:47:12.382720 systemd[1942]: Reached target basic.target - Basic System. Feb 13 20:47:12.382762 systemd[1942]: Reached target default.target - Main User Target. Feb 13 20:47:12.382796 systemd[1942]: Startup finished in 247ms. Feb 13 20:47:12.383208 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:47:12.390856 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:47:12.394295 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:47:12.545311 kubelet[1925]: E0213 20:47:12.545219 1925 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:47:12.547678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:47:12.547991 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
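The kubelet exit above is a common first-boot state: it was started before any provisioning step wrote /var/lib/kubelet/config.yaml, so it fails and systemd will retry. An illustrative pre-flight check (not part of Flatcar) that makes the cause explicit:

    from pathlib import Path

    # Mirror the check the kubelet itself failed in the log above.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if not cfg.is_file():
        raise SystemExit(f"kubelet not configured yet: {cfg} is missing")
    print(f"found kubelet config at {cfg}")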
Feb 13 20:47:13.235471 waagent[1901]: 2025-02-13T20:47:13.235365Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 20:47:13.238819 waagent[1901]: 2025-02-13T20:47:13.238753Z INFO Daemon Daemon OS: flatcar 4081.3.1 Feb 13 20:47:13.241202 waagent[1901]: 2025-02-13T20:47:13.241140Z INFO Daemon Daemon Python: 3.11.9 Feb 13 20:47:13.258318 waagent[1901]: 2025-02-13T20:47:13.242491Z INFO Daemon Daemon Run daemon Feb 13 20:47:13.258318 waagent[1901]: 2025-02-13T20:47:13.243325Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.1' Feb 13 20:47:13.258318 waagent[1901]: 2025-02-13T20:47:13.243685Z INFO Daemon Daemon Using waagent for provisioning Feb 13 20:47:13.258318 waagent[1901]: 2025-02-13T20:47:13.244328Z INFO Daemon Daemon Activate resource disk Feb 13 20:47:13.258318 waagent[1901]: 2025-02-13T20:47:13.244664Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 20:47:13.258318 waagent[1901]: 2025-02-13T20:47:13.248648Z INFO Daemon Daemon Found device: None Feb 13 20:47:13.258318 waagent[1901]: 2025-02-13T20:47:13.249225Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 20:47:13.258318 waagent[1901]: 2025-02-13T20:47:13.250005Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 20:47:13.258318 waagent[1901]: 2025-02-13T20:47:13.252301Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 20:47:13.258318 waagent[1901]: 2025-02-13T20:47:13.253163Z INFO Daemon Daemon Running default provisioning handler Feb 13 20:47:13.274830 waagent[1901]: 2025-02-13T20:47:13.274750Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 20:47:13.281582 waagent[1901]: 2025-02-13T20:47:13.281532Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 20:47:13.289811 waagent[1901]: 2025-02-13T20:47:13.282964Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 20:47:13.289811 waagent[1901]: 2025-02-13T20:47:13.284048Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 20:47:13.518106 waagent[1901]: 2025-02-13T20:47:13.515405Z INFO Daemon Daemon Successfully mounted dvd Feb 13 20:47:13.528903 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 20:47:13.531669 waagent[1901]: 2025-02-13T20:47:13.531603Z INFO Daemon Daemon Detect protocol endpoint Feb 13 20:47:13.534352 waagent[1901]: 2025-02-13T20:47:13.534298Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 20:47:13.537149 waagent[1901]: 2025-02-13T20:47:13.537100Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 20:47:13.540112 waagent[1901]: 2025-02-13T20:47:13.540055Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 20:47:13.546695 waagent[1901]: 2025-02-13T20:47:13.541246Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 20:47:13.546695 waagent[1901]: 2025-02-13T20:47:13.541548Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 20:47:13.571259 waagent[1901]: 2025-02-13T20:47:13.571165Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 20:47:13.578829 waagent[1901]: 2025-02-13T20:47:13.573139Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 20:47:13.578829 waagent[1901]: 2025-02-13T20:47:13.573869Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 20:47:13.690993 waagent[1901]: 2025-02-13T20:47:13.690893Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 20:47:13.696873 waagent[1901]: 2025-02-13T20:47:13.692401Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 20:47:13.697113 waagent[1901]: 2025-02-13T20:47:13.697054Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 20:47:13.710111 waagent[1901]: 2025-02-13T20:47:13.710056Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 20:47:13.725685 waagent[1901]: 2025-02-13T20:47:13.711800Z INFO Daemon Feb 13 20:47:13.725685 waagent[1901]: 2025-02-13T20:47:13.713628Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e92557db-1e3e-471d-afe9-8c1ea725570c eTag: 10779326229469556029 source: Fabric] Feb 13 20:47:13.725685 waagent[1901]: 2025-02-13T20:47:13.715027Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 20:47:13.725685 waagent[1901]: 2025-02-13T20:47:13.716128Z INFO Daemon Feb 13 20:47:13.725685 waagent[1901]: 2025-02-13T20:47:13.716924Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 20:47:13.729467 waagent[1901]: 2025-02-13T20:47:13.729425Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 20:47:13.873961 waagent[1901]: 2025-02-13T20:47:13.873818Z INFO Daemon Downloaded certificate {'thumbprint': 'AE0AA989C05F0D864A96BF8BF6817583FBAF8168', 'hasPrivateKey': False} Feb 13 20:47:13.879216 waagent[1901]: 2025-02-13T20:47:13.879118Z INFO Daemon Downloaded certificate {'thumbprint': 'A83297A9AE9B811CB93F2EF434F5B97FAD8D8E66', 'hasPrivateKey': True} Feb 13 20:47:13.884572 waagent[1901]: 2025-02-13T20:47:13.884506Z INFO Daemon Fetch goal state completed Feb 13 20:47:13.894531 waagent[1901]: 2025-02-13T20:47:13.894465Z INFO Daemon Daemon Starting provisioning Feb 13 20:47:13.902147 waagent[1901]: 2025-02-13T20:47:13.895768Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 20:47:13.902147 waagent[1901]: 2025-02-13T20:47:13.896567Z INFO Daemon Daemon Set hostname [ci-4081.3.1-a-faf44fbcb5] Feb 13 20:47:13.905774 waagent[1901]: 2025-02-13T20:47:13.905708Z INFO Daemon Daemon Publish hostname [ci-4081.3.1-a-faf44fbcb5] Feb 13 20:47:13.913574 waagent[1901]: 2025-02-13T20:47:13.906979Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 20:47:13.913574 waagent[1901]: 2025-02-13T20:47:13.907943Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 20:47:13.965139 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:47:13.965153 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
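The daemon's protocol detection above follows the WireServer handshake: probe the supported versions, then fetch the goal state with the negotiated x-ms-version header (2012-11-30 in this log). A sketch of those two HTTP calls; they only succeed from inside the VM, and the response truncation is illustrative:

    import urllib.request

    WIRESERVER = "http://168.63.129.16"

    # 1) Version probe -- the same request logged as '?comp=versions'.
    with urllib.request.urlopen(f"{WIRESERVER}/?comp=versions", timeout=5) as r:
        print(r.read()[:120])   # XML list of supported protocol versions

    # 2) Goal state fetch, which requires the negotiated wire version.
    req = urllib.request.Request(
        f"{WIRESERVER}/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"},
    )
    with urllib.request.urlopen(req, timeout=5) as r:
        print(r.read()[:120])   # XML goal state (incarnation, containers)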
Feb 13 20:47:13.965223 systemd-networkd[1365]: eth0: DHCP lease lost Feb 13 20:47:13.966613 waagent[1901]: 2025-02-13T20:47:13.966498Z INFO Daemon Daemon Create user account if not exists Feb 13 20:47:13.983558 waagent[1901]: 2025-02-13T20:47:13.967876Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 20:47:13.983558 waagent[1901]: 2025-02-13T20:47:13.968858Z INFO Daemon Daemon Configure sudoer Feb 13 20:47:13.983558 waagent[1901]: 2025-02-13T20:47:13.970055Z INFO Daemon Daemon Configure sshd Feb 13 20:47:13.983558 waagent[1901]: 2025-02-13T20:47:13.970961Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 20:47:13.983558 waagent[1901]: 2025-02-13T20:47:13.971741Z INFO Daemon Daemon Deploy ssh public key. Feb 13 20:47:13.984275 systemd-networkd[1365]: eth0: DHCPv6 lease lost Feb 13 20:47:14.014281 systemd-networkd[1365]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 20:47:15.116585 waagent[1901]: 2025-02-13T20:47:15.116514Z INFO Daemon Daemon Provisioning complete Feb 13 20:47:15.130876 waagent[1901]: 2025-02-13T20:47:15.130803Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 20:47:15.137951 waagent[1901]: 2025-02-13T20:47:15.131980Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 13 20:47:15.137951 waagent[1901]: 2025-02-13T20:47:15.132791Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 20:47:15.256706 waagent[2004]: 2025-02-13T20:47:15.256604Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 20:47:15.257221 waagent[2004]: 2025-02-13T20:47:15.256764Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.1 Feb 13 20:47:15.257221 waagent[2004]: 2025-02-13T20:47:15.256844Z INFO ExtHandler ExtHandler Python: 3.11.9 Feb 13 20:47:15.327566 waagent[2004]: 2025-02-13T20:47:15.327461Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 20:47:15.327817 waagent[2004]: 2025-02-13T20:47:15.327764Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:47:15.327927 waagent[2004]: 2025-02-13T20:47:15.327878Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:47:15.336291 waagent[2004]: 2025-02-13T20:47:15.336230Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 20:47:15.341602 waagent[2004]: 2025-02-13T20:47:15.341548Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 20:47:15.342023 waagent[2004]: 2025-02-13T20:47:15.341970Z INFO ExtHandler Feb 13 20:47:15.342104 waagent[2004]: 2025-02-13T20:47:15.342056Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a554c91a-56de-4a25-b6cf-222a039676df eTag: 10779326229469556029 source: Fabric] Feb 13 20:47:15.342431 waagent[2004]: 2025-02-13T20:47:15.342378Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 13 20:47:15.342966 waagent[2004]: 2025-02-13T20:47:15.342909Z INFO ExtHandler Feb 13 20:47:15.343043 waagent[2004]: 2025-02-13T20:47:15.342990Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 20:47:15.346719 waagent[2004]: 2025-02-13T20:47:15.346676Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 20:47:15.413509 waagent[2004]: 2025-02-13T20:47:15.413428Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AE0AA989C05F0D864A96BF8BF6817583FBAF8168', 'hasPrivateKey': False} Feb 13 20:47:15.413914 waagent[2004]: 2025-02-13T20:47:15.413864Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A83297A9AE9B811CB93F2EF434F5B97FAD8D8E66', 'hasPrivateKey': True} Feb 13 20:47:15.414364 waagent[2004]: 2025-02-13T20:47:15.414313Z INFO ExtHandler Fetch goal state completed Feb 13 20:47:15.430134 waagent[2004]: 2025-02-13T20:47:15.430067Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2004 Feb 13 20:47:15.430310 waagent[2004]: 2025-02-13T20:47:15.430258Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 20:47:15.431893 waagent[2004]: 2025-02-13T20:47:15.431834Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 20:47:15.432286 waagent[2004]: 2025-02-13T20:47:15.432232Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 20:47:15.496675 waagent[2004]: 2025-02-13T20:47:15.496617Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 20:47:15.496947 waagent[2004]: 2025-02-13T20:47:15.496888Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 20:47:15.503585 waagent[2004]: 2025-02-13T20:47:15.503535Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 20:47:15.510506 systemd[1]: Reloading requested from client PID 2019 ('systemctl') (unit waagent.service)... Feb 13 20:47:15.510523 systemd[1]: Reloading... Feb 13 20:47:15.579281 zram_generator::config[2049]: No configuration found. Feb 13 20:47:15.717872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:47:15.793272 systemd[1]: Reloading finished in 282 ms. Feb 13 20:47:15.815938 waagent[2004]: 2025-02-13T20:47:15.814544Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 20:47:15.823195 systemd[1]: Reloading requested from client PID 2115 ('systemctl') (unit waagent.service)... Feb 13 20:47:15.823211 systemd[1]: Reloading... Feb 13 20:47:15.884252 zram_generator::config[2145]: No configuration found. Feb 13 20:47:16.022014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:47:16.097816 systemd[1]: Reloading finished in 274 ms. 
Feb 13 20:47:16.124213 waagent[2004]: 2025-02-13T20:47:16.123744Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 20:47:16.124213 waagent[2004]: 2025-02-13T20:47:16.123936Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 20:47:16.804780 waagent[2004]: 2025-02-13T20:47:16.804677Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 20:47:16.807482 waagent[2004]: 2025-02-13T20:47:16.807400Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 20:47:16.809143 waagent[2004]: 2025-02-13T20:47:16.809065Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 20:47:16.809886 waagent[2004]: 2025-02-13T20:47:16.809818Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 20:47:16.809971 waagent[2004]: 2025-02-13T20:47:16.809924Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:47:16.810098 waagent[2004]: 2025-02-13T20:47:16.810043Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:47:16.810208 waagent[2004]: 2025-02-13T20:47:16.810148Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:47:16.810344 waagent[2004]: 2025-02-13T20:47:16.810288Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:47:16.810805 waagent[2004]: 2025-02-13T20:47:16.810737Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 20:47:16.810996 waagent[2004]: 2025-02-13T20:47:16.810942Z INFO EnvHandler ExtHandler Configure routes Feb 13 20:47:16.811173 waagent[2004]: 2025-02-13T20:47:16.811129Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 20:47:16.811382 waagent[2004]: 2025-02-13T20:47:16.811331Z INFO EnvHandler ExtHandler Gateway:None Feb 13 20:47:16.811737 waagent[2004]: 2025-02-13T20:47:16.811688Z INFO EnvHandler ExtHandler Routes:None Feb 13 20:47:16.812000 waagent[2004]: 2025-02-13T20:47:16.811943Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 20:47:16.812875 waagent[2004]: 2025-02-13T20:47:16.812809Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 20:47:16.813097 waagent[2004]: 2025-02-13T20:47:16.813034Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
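The "Goal State Period: 6 sec" message describes a fixed-period polling loop. A minimal sketch of that pattern, with a placeholder check function; this is illustrative only, not waagent's actual implementation:

    import time

    GOAL_STATE_PERIOD = 6.0  # seconds, as reported above

    def check_for_new_goal_state():
        # placeholder for the agent's WireServer/HostGAPlugin incarnation check
        print("checking goal state")

    while True:
        started = time.monotonic()
        check_for_new_goal_state()
        # sleep out whatever remains of the period so checks stay ~6 s apart
        time.sleep(max(0.0, GOAL_STATE_PERIOD - (time.monotonic() - started)))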
Feb 13 20:47:16.813918 waagent[2004]: 2025-02-13T20:47:16.813848Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 20:47:16.813918 waagent[2004]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 20:47:16.813918 waagent[2004]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 20:47:16.813918 waagent[2004]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 20:47:16.813918 waagent[2004]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:47:16.813918 waagent[2004]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:47:16.813918 waagent[2004]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:47:16.814295 waagent[2004]: 2025-02-13T20:47:16.814018Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 20:47:16.823207 waagent[2004]: 2025-02-13T20:47:16.822563Z INFO ExtHandler ExtHandler Feb 13 20:47:16.823207 waagent[2004]: 2025-02-13T20:47:16.822661Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1d72c1f9-aa54-4d4b-9a49-1787c5f085e0 correlation aca507b6-9f7c-4b88-80cc-b0770838baae created: 2025-02-13T20:45:57.622360Z] Feb 13 20:47:16.823207 waagent[2004]: 2025-02-13T20:47:16.823082Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 20:47:16.823904 waagent[2004]: 2025-02-13T20:47:16.823853Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 20:47:16.861619 waagent[2004]: 2025-02-13T20:47:16.861470Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 161BAE41-0397-46F9-82D2-CA2B56A329BA;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 20:47:16.904137 waagent[2004]: 2025-02-13T20:47:16.904056Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 20:47:16.904137 waagent[2004]: Executing ['ip', '-a', '-o', 'link']: Feb 13 20:47:16.904137 waagent[2004]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 20:47:16.904137 waagent[2004]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:9d:a2 brd ff:ff:ff:ff:ff:ff Feb 13 20:47:16.904137 waagent[2004]: 3: enP1269s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:9d:a2 brd ff:ff:ff:ff:ff:ff\ altname enP1269p0s2 Feb 13 20:47:16.904137 waagent[2004]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 20:47:16.904137 waagent[2004]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 20:47:16.904137 waagent[2004]: 2: eth0 inet 10.200.8.38/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 20:47:16.904137 waagent[2004]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 20:47:16.904137 waagent[2004]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 20:47:16.904137 waagent[2004]: 2: eth0 inet6 fe80::7e1e:52ff:fe1f:9da2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 20:47:16.904137 waagent[2004]: 3: enP1269s1 inet6 fe80::7e1e:52ff:fe1f:9da2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 20:47:17.028710 waagent[2004]: 2025-02-13T20:47:17.028629Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 13 20:47:17.028710 waagent[2004]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:47:17.028710 waagent[2004]: pkts bytes target prot opt in out source destination Feb 13 20:47:17.028710 waagent[2004]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:47:17.028710 waagent[2004]: pkts bytes target prot opt in out source destination Feb 13 20:47:17.028710 waagent[2004]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:47:17.028710 waagent[2004]: pkts bytes target prot opt in out source destination Feb 13 20:47:17.028710 waagent[2004]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 20:47:17.028710 waagent[2004]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 20:47:17.028710 waagent[2004]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 20:47:17.032014 waagent[2004]: 2025-02-13T20:47:17.031955Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 20:47:17.032014 waagent[2004]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:47:17.032014 waagent[2004]: pkts bytes target prot opt in out source destination Feb 13 20:47:17.032014 waagent[2004]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:47:17.032014 waagent[2004]: pkts bytes target prot opt in out source destination Feb 13 20:47:17.032014 waagent[2004]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:47:17.032014 waagent[2004]: pkts bytes target prot opt in out source destination Feb 13 20:47:17.032014 waagent[2004]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 20:47:17.032014 waagent[2004]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 20:47:17.032014 waagent[2004]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 20:47:17.032410 waagent[2004]: 2025-02-13T20:47:17.032278Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 20:47:22.655009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:47:22.661428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:47:22.759357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:47:22.763455 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:47:23.390051 kubelet[2254]: E0213 20:47:23.389994 2254 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:47:23.393907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:47:23.394247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:47:33.405145 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:47:33.412399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:47:33.514356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
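The MonitorHandler routing table earlier in this log is printed raw from /proc/net/route, where IPv4 addresses and masks appear as little-endian hex; the firewall rules above likewise reference the WireServer address in dotted form. A short decoder for the hex fields shown:

    import socket, struct

    def ip(h):
        # /proc/net/route prints IPv4 addresses as little-endian hex words
        return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

    print(ip("0108C80A"))  # 10.200.8.1      (default gateway)
    print(ip("10813FA8"))  # 168.63.129.16   (host route to the WireServer)
    print(ip("FEA9FEA9"))  # 169.254.169.254 (instance metadata endpoint)
    print(ip("00FFFFFF"))  # 255.255.255.0   (the /24 mask on eth0)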
Feb 13 20:47:33.525597 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:47:33.877985 chronyd[1765]: Selected source PHC0 Feb 13 20:47:34.084946 kubelet[2275]: E0213 20:47:34.084887 2275 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:47:34.087546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:47:34.087871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:47:44.155173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 20:47:44.162421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:47:44.511353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:47:44.515168 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:47:44.828128 kubelet[2296]: E0213 20:47:44.827970 2296 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:47:44.830520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:47:44.830839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:47:49.781235 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 13 20:47:54.905691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 20:47:54.911427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:47:55.269448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:47:55.270339 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:47:55.307546 kubelet[2316]: E0213 20:47:55.307453 2316 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:47:55.310074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:47:55.310431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:47:55.446434 update_engine[1774]: I20250213 20:47:55.446313 1774 update_attempter.cc:509] Updating boot flags... Feb 13 20:47:55.618248 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2337) Feb 13 20:47:55.729291 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2341) Feb 13 20:48:05.404966 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 20:48:05.410439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
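kubelet.service keeps exiting with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during bootstrap, so the restart loop here is expected until then. Purely as a hedged illustration of what such a file contains (a minimal sketch, not the configuration this node eventually uses):

    from pathlib import Path

    # Minimal KubeletConfiguration sketch; kubeadm normally generates this file.
    CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs          # matches the CgroupDriver seen later in this log
    staticPodPath: /etc/kubernetes/manifests
    """
    Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
    Path("/var/lib/kubelet/config.yaml").write_text(CONFIG)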
Feb 13 20:48:05.760391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:48:05.762466 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:48:05.799766 kubelet[2403]: E0213 20:48:05.799729 2403 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:48:05.802365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:48:05.802674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:48:06.216861 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:48:06.222665 systemd[1]: Started sshd@0-10.200.8.38:22-10.200.16.10:39326.service - OpenSSH per-connection server daemon (10.200.16.10:39326). Feb 13 20:48:06.855469 sshd[2412]: Accepted publickey for core from 10.200.16.10 port 39326 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:06.857244 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:06.861308 systemd-logind[1766]: New session 3 of user core. Feb 13 20:48:06.868424 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:48:07.401533 systemd[1]: Started sshd@1-10.200.8.38:22-10.200.16.10:39336.service - OpenSSH per-connection server daemon (10.200.16.10:39336). Feb 13 20:48:08.017600 sshd[2417]: Accepted publickey for core from 10.200.16.10 port 39336 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:08.019173 sshd[2417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:08.023706 systemd-logind[1766]: New session 4 of user core. Feb 13 20:48:08.034558 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:48:08.462614 sshd[2417]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:08.466275 systemd[1]: sshd@1-10.200.8.38:22-10.200.16.10:39336.service: Deactivated successfully. Feb 13 20:48:08.472102 systemd-logind[1766]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:48:08.472468 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:48:08.473712 systemd-logind[1766]: Removed session 4. Feb 13 20:48:08.571987 systemd[1]: Started sshd@2-10.200.8.38:22-10.200.16.10:39350.service - OpenSSH per-connection server daemon (10.200.16.10:39350). Feb 13 20:48:09.191057 sshd[2425]: Accepted publickey for core from 10.200.16.10 port 39350 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:09.192808 sshd[2425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:09.198383 systemd-logind[1766]: New session 5 of user core. Feb 13 20:48:09.207414 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:48:09.630449 sshd[2425]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:09.633565 systemd[1]: sshd@2-10.200.8.38:22-10.200.16.10:39350.service: Deactivated successfully. Feb 13 20:48:09.637388 systemd-logind[1766]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:48:09.638546 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:48:09.640775 systemd-logind[1766]: Removed session 5. 
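sshd identifies the accepted key by its OpenSSH fingerprint (the "SHA256:h0QK..." string above): the unpadded base64 of the SHA-256 digest of the raw key blob. It can be recomputed from an authorized_keys line:

    import base64, hashlib

    def fingerprint(authorized_key_line: str) -> str:
        # field 2 of an authorized_keys entry is the base64-encoded key blob
        blob = base64.b64decode(authorized_key_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # e.g. fingerprint(open("/home/core/.ssh/authorized_keys").read().splitlines()[0])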
Feb 13 20:48:09.742744 systemd[1]: Started sshd@3-10.200.8.38:22-10.200.16.10:55174.service - OpenSSH per-connection server daemon (10.200.16.10:55174). Feb 13 20:48:10.360365 sshd[2433]: Accepted publickey for core from 10.200.16.10 port 55174 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:10.362054 sshd[2433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:10.367534 systemd-logind[1766]: New session 6 of user core. Feb 13 20:48:10.372490 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:48:10.804124 sshd[2433]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:10.808819 systemd[1]: sshd@3-10.200.8.38:22-10.200.16.10:55174.service: Deactivated successfully. Feb 13 20:48:10.812588 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:48:10.813465 systemd-logind[1766]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:48:10.814374 systemd-logind[1766]: Removed session 6. Feb 13 20:48:10.916675 systemd[1]: Started sshd@4-10.200.8.38:22-10.200.16.10:55184.service - OpenSSH per-connection server daemon (10.200.16.10:55184). Feb 13 20:48:11.534607 sshd[2441]: Accepted publickey for core from 10.200.16.10 port 55184 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:11.536360 sshd[2441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:11.540954 systemd-logind[1766]: New session 7 of user core. Feb 13 20:48:11.547613 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:48:11.901173 sudo[2445]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:48:11.901547 sudo[2445]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:48:11.918553 sudo[2445]: pam_unix(sudo:session): session closed for user root Feb 13 20:48:12.019983 sshd[2441]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:12.025868 systemd[1]: sshd@4-10.200.8.38:22-10.200.16.10:55184.service: Deactivated successfully. Feb 13 20:48:12.030122 systemd-logind[1766]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:48:12.030426 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:48:12.031849 systemd-logind[1766]: Removed session 7. Feb 13 20:48:12.133481 systemd[1]: Started sshd@5-10.200.8.38:22-10.200.16.10:55192.service - OpenSSH per-connection server daemon (10.200.16.10:55192). Feb 13 20:48:12.753034 sshd[2450]: Accepted publickey for core from 10.200.16.10 port 55192 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:12.754817 sshd[2450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:12.760156 systemd-logind[1766]: New session 8 of user core. Feb 13 20:48:12.769492 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 20:48:13.098430 sudo[2455]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:48:13.098785 sudo[2455]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:48:13.102303 sudo[2455]: pam_unix(sudo:session): session closed for user root Feb 13 20:48:13.107126 sudo[2454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:48:13.107501 sudo[2454]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:48:13.126274 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:48:13.127594 auditctl[2458]: No rules Feb 13 20:48:13.128976 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:48:13.129996 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:48:13.133125 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:48:13.157910 augenrules[2477]: No rules Feb 13 20:48:13.158663 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:48:13.160773 sudo[2454]: pam_unix(sudo:session): session closed for user root Feb 13 20:48:13.262256 sshd[2450]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:13.268455 systemd[1]: sshd@5-10.200.8.38:22-10.200.16.10:55192.service: Deactivated successfully. Feb 13 20:48:13.271867 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:48:13.272599 systemd-logind[1766]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:48:13.273472 systemd-logind[1766]: Removed session 8. Feb 13 20:48:13.375661 systemd[1]: Started sshd@6-10.200.8.38:22-10.200.16.10:55202.service - OpenSSH per-connection server daemon (10.200.16.10:55202). Feb 13 20:48:13.993966 sshd[2486]: Accepted publickey for core from 10.200.16.10 port 55202 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:48:13.995690 sshd[2486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:14.001117 systemd-logind[1766]: New session 9 of user core. Feb 13 20:48:14.010418 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:48:14.338391 sudo[2490]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:48:14.338851 sudo[2490]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:48:14.786709 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:48:14.787616 (dockerd)[2505]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:48:15.283495 dockerd[2505]: time="2025-02-13T20:48:15.283432329Z" level=info msg="Starting up" Feb 13 20:48:15.487441 systemd[1]: var-lib-docker-metacopy\x2dcheck2629331531-merged.mount: Deactivated successfully. Feb 13 20:48:15.506441 dockerd[2505]: time="2025-02-13T20:48:15.506395341Z" level=info msg="Loading containers: start." Feb 13 20:48:15.601218 kernel: Initializing XFRM netlink socket Feb 13 20:48:15.669800 systemd-networkd[1365]: docker0: Link UP Feb 13 20:48:15.700328 dockerd[2505]: time="2025-02-13T20:48:15.700289451Z" level=info msg="Loading containers: done." 
Feb 13 20:48:15.730027 dockerd[2505]: time="2025-02-13T20:48:15.729974852Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:48:15.730224 dockerd[2505]: time="2025-02-13T20:48:15.730089852Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:48:15.730281 dockerd[2505]: time="2025-02-13T20:48:15.730231352Z" level=info msg="Daemon has completed initialization" Feb 13 20:48:15.781132 dockerd[2505]: time="2025-02-13T20:48:15.780522855Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:48:15.780804 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:48:15.904979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 20:48:15.914233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:48:16.082357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:48:16.085443 (kubelet)[2652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:48:16.586978 kubelet[2652]: E0213 20:48:16.586832 2652 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:48:16.590045 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:48:16.590377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:48:18.618350 containerd[1789]: time="2025-02-13T20:48:18.618311560Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:48:19.466795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585927482.mount: Deactivated successfully. 
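dockerd reports "API listen on /run/docker.sock"; the Engine API on that unix socket can be exercised with nothing but the standard library. A small sketch using the stable /version endpoint:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough to talk to /run/docker.sock."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(json.load(conn.getresponse())["Version"])  # "26.1.0" per the log above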
Feb 13 20:48:21.051454 containerd[1789]: time="2025-02-13T20:48:21.051400042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:21.054773 containerd[1789]: time="2025-02-13T20:48:21.054711163Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678222" Feb 13 20:48:21.057989 containerd[1789]: time="2025-02-13T20:48:21.057934084Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:21.061885 containerd[1789]: time="2025-02-13T20:48:21.061827610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:21.063099 containerd[1789]: time="2025-02-13T20:48:21.062797116Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 2.444445656s" Feb 13 20:48:21.063099 containerd[1789]: time="2025-02-13T20:48:21.062837316Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 20:48:21.085120 containerd[1789]: time="2025-02-13T20:48:21.085075361Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:48:22.801348 containerd[1789]: time="2025-02-13T20:48:22.801294834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:22.804968 containerd[1789]: time="2025-02-13T20:48:22.804819657Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611553" Feb 13 20:48:22.810020 containerd[1789]: time="2025-02-13T20:48:22.809969989Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:22.815726 containerd[1789]: time="2025-02-13T20:48:22.815534925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:22.817098 containerd[1789]: time="2025-02-13T20:48:22.816917734Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.731740872s" Feb 13 20:48:22.817098 containerd[1789]: time="2025-02-13T20:48:22.816956334Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 
20:48:22.839431 containerd[1789]: time="2025-02-13T20:48:22.839391877Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:48:24.000241 containerd[1789]: time="2025-02-13T20:48:24.000166876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:24.003560 containerd[1789]: time="2025-02-13T20:48:24.003418297Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782138" Feb 13 20:48:24.007298 containerd[1789]: time="2025-02-13T20:48:24.007099920Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:24.012995 containerd[1789]: time="2025-02-13T20:48:24.012945457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:24.014071 containerd[1789]: time="2025-02-13T20:48:24.013910163Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.174480386s" Feb 13 20:48:24.014071 containerd[1789]: time="2025-02-13T20:48:24.013947364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 20:48:24.036376 containerd[1789]: time="2025-02-13T20:48:24.036341806Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:48:25.041436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407113659.mount: Deactivated successfully. 
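The "bytes read" and elapsed-time figures in the pull messages give a rough effective throughput per image. Worked arithmetic for the three pulls completed so far (network-level figures only, ignoring unpack time):

    # bytes read / wall time, taken from the containerd messages above
    pulls = {
        "kube-apiserver:v1.30.10":          (32_678_222, 2.444445656),
        "kube-controller-manager:v1.30.10": (29_611_553, 1.731740872),
        "kube-scheduler:v1.30.10":          (17_782_138, 1.174480386),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")
    # ~13.4, ~17.1 and ~15.1 MB/s respectively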
Feb 13 20:48:25.490056 containerd[1789]: time="2025-02-13T20:48:25.490003072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:25.492797 containerd[1789]: time="2025-02-13T20:48:25.492745790Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057866" Feb 13 20:48:25.495921 containerd[1789]: time="2025-02-13T20:48:25.495868510Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:25.500300 containerd[1789]: time="2025-02-13T20:48:25.500245738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:25.501135 containerd[1789]: time="2025-02-13T20:48:25.500975242Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.464588135s" Feb 13 20:48:25.501135 containerd[1789]: time="2025-02-13T20:48:25.501017743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 20:48:25.524078 containerd[1789]: time="2025-02-13T20:48:25.523898288Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:48:26.113950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444127279.mount: Deactivated successfully. Feb 13 20:48:26.654779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 20:48:26.662412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:48:26.772363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:48:26.776173 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:48:26.813368 kubelet[2796]: E0213 20:48:26.813311 2796 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:48:26.815953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:48:26.816300 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 20:48:27.807281 containerd[1789]: time="2025-02-13T20:48:27.807232043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:27.813885 containerd[1789]: time="2025-02-13T20:48:27.813743984Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Feb 13 20:48:27.818044 containerd[1789]: time="2025-02-13T20:48:27.817994211Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:27.822632 containerd[1789]: time="2025-02-13T20:48:27.822522040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:27.824466 containerd[1789]: time="2025-02-13T20:48:27.824324452Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.300386663s" Feb 13 20:48:27.824466 containerd[1789]: time="2025-02-13T20:48:27.824363552Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:48:27.847773 containerd[1789]: time="2025-02-13T20:48:27.847741101Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:48:28.422738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286201044.mount: Deactivated successfully. 
Feb 13 20:48:28.438641 containerd[1789]: time="2025-02-13T20:48:28.438524467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:28.440732 containerd[1789]: time="2025-02-13T20:48:28.440675580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Feb 13 20:48:28.444115 containerd[1789]: time="2025-02-13T20:48:28.443990802Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:28.448347 containerd[1789]: time="2025-02-13T20:48:28.448297329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:28.449113 containerd[1789]: time="2025-02-13T20:48:28.448963833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 601.080231ms" Feb 13 20:48:28.449113 containerd[1789]: time="2025-02-13T20:48:28.449001233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 20:48:28.470111 containerd[1789]: time="2025-02-13T20:48:28.469894467Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:48:29.060272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810890898.mount: Deactivated successfully. Feb 13 20:48:31.084476 containerd[1789]: time="2025-02-13T20:48:31.084359603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:31.086985 containerd[1789]: time="2025-02-13T20:48:31.086813018Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Feb 13 20:48:31.091810 containerd[1789]: time="2025-02-13T20:48:31.091663748Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:31.096288 containerd[1789]: time="2025-02-13T20:48:31.096238975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:48:31.097294 containerd[1789]: time="2025-02-13T20:48:31.097259482Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.627329415s" Feb 13 20:48:31.097376 containerd[1789]: time="2025-02-13T20:48:31.097300582Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 20:48:34.654266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:48:34.660432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:48:34.688850 systemd[1]: Reloading requested from client PID 2940 ('systemctl') (unit session-9.scope)... Feb 13 20:48:34.688866 systemd[1]: Reloading... Feb 13 20:48:34.772840 zram_generator::config[2980]: No configuration found. Feb 13 20:48:34.924415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:48:35.010231 systemd[1]: Reloading finished in 320 ms. Feb 13 20:48:35.059098 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:48:35.059232 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:48:35.059608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:48:35.066482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:48:35.330353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:48:35.334822 (kubelet)[3062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:48:35.372269 kubelet[3062]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:48:35.372269 kubelet[3062]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:48:35.372269 kubelet[3062]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:48:35.962995 kubelet[3062]: I0213 20:48:35.962106 3062 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:48:36.235830 kubelet[3062]: I0213 20:48:36.235574 3062 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:48:36.235830 kubelet[3062]: I0213 20:48:36.235604 3062 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:48:36.236034 kubelet[3062]: I0213 20:48:36.236011 3062 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:48:36.252379 kubelet[3062]: I0213 20:48:36.252074 3062 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:48:36.253390 kubelet[3062]: E0213 20:48:36.252778 3062 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:36.265669 kubelet[3062]: I0213 20:48:36.265646 3062 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:48:36.266104 kubelet[3062]: I0213 20:48:36.266066 3062 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:48:36.266309 kubelet[3062]: I0213 20:48:36.266100 3062 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-faf44fbcb5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:48:36.266866 kubelet[3062]: I0213 20:48:36.266846 3062 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:48:36.266937 kubelet[3062]: I0213 20:48:36.266872 3062 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:48:36.267051 kubelet[3062]: I0213 20:48:36.267034 3062 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:48:36.267743 kubelet[3062]: I0213 20:48:36.267725 3062 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:48:36.267743 kubelet[3062]: I0213 20:48:36.267746 3062 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:48:36.267859 kubelet[3062]: I0213 20:48:36.267771 3062 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:48:36.267859 kubelet[3062]: I0213 20:48:36.267788 3062 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:48:36.273171 kubelet[3062]: W0213 20:48:36.272385 3062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:36.273171 kubelet[3062]: E0213 20:48:36.272443 3062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:36.273171 kubelet[3062]: W0213 20:48:36.272527 3062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-faf44fbcb5&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:36.273171 kubelet[3062]: E0213 20:48:36.272565 3062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-faf44fbcb5&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:36.273171 kubelet[3062]: I0213 20:48:36.272924 3062 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:48:36.275246 kubelet[3062]: I0213 20:48:36.274431 3062 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:48:36.275246 kubelet[3062]: W0213 20:48:36.274494 3062 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:48:36.275366 kubelet[3062]: I0213 20:48:36.275319 3062 server.go:1264] "Started kubelet" Feb 13 20:48:36.280090 kubelet[3062]: I0213 20:48:36.279997 3062 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:48:36.281994 kubelet[3062]: I0213 20:48:36.281065 3062 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:48:36.281994 kubelet[3062]: I0213 20:48:36.281264 3062 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:48:36.281994 kubelet[3062]: I0213 20:48:36.281668 3062 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:48:36.281994 kubelet[3062]: E0213 20:48:36.281807 3062 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-faf44fbcb5.1823df95f695df67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-faf44fbcb5,UID:ci-4081.3.1-a-faf44fbcb5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-faf44fbcb5,},FirstTimestamp:2025-02-13 20:48:36.275281767 +0000 UTC m=+0.936469495,LastTimestamp:2025-02-13 20:48:36.275281767 +0000 UTC m=+0.936469495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-faf44fbcb5,}" Feb 13 20:48:36.283752 kubelet[3062]: I0213 20:48:36.283728 3062 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:48:36.288029 kubelet[3062]: E0213 20:48:36.288008 3062 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:48:36.288800 kubelet[3062]: E0213 20:48:36.288786 3062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-faf44fbcb5\" not found" Feb 13 20:48:36.288943 kubelet[3062]: I0213 20:48:36.288932 3062 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:48:36.289125 kubelet[3062]: I0213 20:48:36.289113 3062 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:48:36.289668 kubelet[3062]: I0213 20:48:36.289263 3062 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:48:36.289668 kubelet[3062]: W0213 20:48:36.289567 3062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:36.289668 kubelet[3062]: E0213 20:48:36.289616 3062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:36.290359 kubelet[3062]: E0213 20:48:36.290321 3062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-faf44fbcb5?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="200ms" Feb 13 20:48:36.291032 kubelet[3062]: I0213 20:48:36.291015 3062 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:48:36.291223 kubelet[3062]: I0213 20:48:36.291205 3062 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:48:36.292729 kubelet[3062]: I0213 20:48:36.292714 3062 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:48:36.317632 kubelet[3062]: I0213 20:48:36.317588 3062 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:48:36.319354 kubelet[3062]: I0213 20:48:36.318964 3062 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:48:36.319354 kubelet[3062]: I0213 20:48:36.319004 3062 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:48:36.319354 kubelet[3062]: I0213 20:48:36.319021 3062 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:48:36.319354 kubelet[3062]: E0213 20:48:36.319060 3062 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:48:36.326057 kubelet[3062]: W0213 20:48:36.326001 3062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:36.326145 kubelet[3062]: E0213 20:48:36.326071 3062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:36.420489 kubelet[3062]: E0213 20:48:36.420222 3062 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:48:36.482039 kubelet[3062]: I0213 20:48:36.481816 3062 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.482490 kubelet[3062]: E0213 20:48:36.482415 3062 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.482610 kubelet[3062]: I0213 20:48:36.482543 3062 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:48:36.482610 kubelet[3062]: I0213 20:48:36.482553 3062 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:48:36.482610 kubelet[3062]: I0213 20:48:36.482572 3062 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:48:36.487301 kubelet[3062]: I0213 20:48:36.487230 3062 policy_none.go:49] "None policy: Start" Feb 13 20:48:36.488274 kubelet[3062]: I0213 20:48:36.487797 3062 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:48:36.488274 kubelet[3062]: I0213 20:48:36.487882 3062 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:48:36.491309 kubelet[3062]: E0213 20:48:36.491280 3062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-faf44fbcb5?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="400ms" Feb 13 20:48:36.498986 kubelet[3062]: I0213 20:48:36.498965 3062 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:48:36.499282 kubelet[3062]: I0213 20:48:36.499168 3062 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:48:36.499342 kubelet[3062]: I0213 20:48:36.499308 3062 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:48:36.501596 kubelet[3062]: E0213 20:48:36.501574 3062 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-a-faf44fbcb5\" not found" Feb 13 20:48:36.621144 kubelet[3062]: I0213 
20:48:36.621063 3062 topology_manager.go:215] "Topology Admit Handler" podUID="8b9f01879f02fb5e4ae2ff07cea90359" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.622924 kubelet[3062]: I0213 20:48:36.622894 3062 topology_manager.go:215] "Topology Admit Handler" podUID="c40c1527ea00ae616b4ace0381bcc2f4" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.624496 kubelet[3062]: I0213 20:48:36.624310 3062 topology_manager.go:215] "Topology Admit Handler" podUID="f1ca971aca7b1884cec9bb19b84cadf6" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.685233 kubelet[3062]: I0213 20:48:36.685205 3062 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.685590 kubelet[3062]: E0213 20:48:36.685563 3062 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.692263 kubelet[3062]: I0213 20:48:36.692158 3062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b9f01879f02fb5e4ae2ff07cea90359-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-faf44fbcb5\" (UID: \"8b9f01879f02fb5e4ae2ff07cea90359\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.692471 kubelet[3062]: I0213 20:48:36.692358 3062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c40c1527ea00ae616b4ace0381bcc2f4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-faf44fbcb5\" (UID: \"c40c1527ea00ae616b4ace0381bcc2f4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.692471 kubelet[3062]: I0213 20:48:36.692464 3062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c40c1527ea00ae616b4ace0381bcc2f4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-faf44fbcb5\" (UID: \"c40c1527ea00ae616b4ace0381bcc2f4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.692574 kubelet[3062]: I0213 20:48:36.692501 3062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c40c1527ea00ae616b4ace0381bcc2f4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-faf44fbcb5\" (UID: \"c40c1527ea00ae616b4ace0381bcc2f4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.692574 kubelet[3062]: I0213 20:48:36.692536 3062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1ca971aca7b1884cec9bb19b84cadf6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-faf44fbcb5\" (UID: \"f1ca971aca7b1884cec9bb19b84cadf6\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.692574 kubelet[3062]: I0213 20:48:36.692563 3062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b9f01879f02fb5e4ae2ff07cea90359-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-faf44fbcb5\" 
(UID: \"8b9f01879f02fb5e4ae2ff07cea90359\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.692717 kubelet[3062]: I0213 20:48:36.692597 3062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b9f01879f02fb5e4ae2ff07cea90359-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-faf44fbcb5\" (UID: \"8b9f01879f02fb5e4ae2ff07cea90359\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.692717 kubelet[3062]: I0213 20:48:36.692622 3062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c40c1527ea00ae616b4ace0381bcc2f4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-faf44fbcb5\" (UID: \"c40c1527ea00ae616b4ace0381bcc2f4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.692717 kubelet[3062]: I0213 20:48:36.692646 3062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c40c1527ea00ae616b4ace0381bcc2f4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-faf44fbcb5\" (UID: \"c40c1527ea00ae616b4ace0381bcc2f4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:36.821735 kubelet[3062]: E0213 20:48:36.821519 3062 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-faf44fbcb5.1823df95f695df67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-faf44fbcb5,UID:ci-4081.3.1-a-faf44fbcb5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-faf44fbcb5,},FirstTimestamp:2025-02-13 20:48:36.275281767 +0000 UTC m=+0.936469495,LastTimestamp:2025-02-13 20:48:36.275281767 +0000 UTC m=+0.936469495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-faf44fbcb5,}" Feb 13 20:48:36.892458 kubelet[3062]: E0213 20:48:36.892371 3062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-faf44fbcb5?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="800ms" Feb 13 20:48:36.929298 containerd[1789]: time="2025-02-13T20:48:36.928925342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-faf44fbcb5,Uid:8b9f01879f02fb5e4ae2ff07cea90359,Namespace:kube-system,Attempt:0,}" Feb 13 20:48:36.929298 containerd[1789]: time="2025-02-13T20:48:36.929087843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-faf44fbcb5,Uid:c40c1527ea00ae616b4ace0381bcc2f4,Namespace:kube-system,Attempt:0,}" Feb 13 20:48:36.933238 containerd[1789]: time="2025-02-13T20:48:36.933049167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-faf44fbcb5,Uid:f1ca971aca7b1884cec9bb19b84cadf6,Namespace:kube-system,Attempt:0,}" Feb 13 20:48:37.087776 kubelet[3062]: I0213 20:48:37.087673 3062 kubelet_node_status.go:73] 
"Attempting to register node" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:37.088137 kubelet[3062]: E0213 20:48:37.088046 3062 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:37.324086 kubelet[3062]: W0213 20:48:37.324019 3062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:37.324086 kubelet[3062]: E0213 20:48:37.324092 3062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:37.474367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343233986.mount: Deactivated successfully. Feb 13 20:48:37.484231 kubelet[3062]: W0213 20:48:37.484196 3062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:37.484544 kubelet[3062]: E0213 20:48:37.484238 3062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:37.506596 containerd[1789]: time="2025-02-13T20:48:37.506551854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:48:37.509992 containerd[1789]: time="2025-02-13T20:48:37.509941375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 20:48:37.512996 containerd[1789]: time="2025-02-13T20:48:37.512962093Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:48:37.516451 containerd[1789]: time="2025-02-13T20:48:37.516418714Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:48:37.519032 containerd[1789]: time="2025-02-13T20:48:37.518983930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:48:37.522954 containerd[1789]: time="2025-02-13T20:48:37.522916254Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:48:37.526135 containerd[1789]: time="2025-02-13T20:48:37.525870671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:48:37.530678 containerd[1789]: time="2025-02-13T20:48:37.530647401Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:48:37.531418 containerd[1789]: time="2025-02-13T20:48:37.531383805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 602.239662ms" Feb 13 20:48:37.532839 containerd[1789]: time="2025-02-13T20:48:37.532804414Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 599.688647ms" Feb 13 20:48:37.537804 containerd[1789]: time="2025-02-13T20:48:37.537770344Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 608.746602ms" Feb 13 20:48:37.562141 kubelet[3062]: W0213 20:48:37.562087 3062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:37.562141 kubelet[3062]: E0213 20:48:37.562147 3062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:37.693198 kubelet[3062]: E0213 20:48:37.693137 3062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-faf44fbcb5?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="1.6s" Feb 13 20:48:37.796455 containerd[1789]: time="2025-02-13T20:48:37.790126278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:48:37.796455 containerd[1789]: time="2025-02-13T20:48:37.790334380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:48:37.796455 containerd[1789]: time="2025-02-13T20:48:37.790642581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:48:37.796455 containerd[1789]: time="2025-02-13T20:48:37.795842413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:48:37.796455 containerd[1789]: time="2025-02-13T20:48:37.795983614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:48:37.796455 containerd[1789]: time="2025-02-13T20:48:37.796100615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:48:37.796876 containerd[1789]: time="2025-02-13T20:48:37.796298516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:48:37.799380 containerd[1789]: time="2025-02-13T20:48:37.799310234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:48:37.801201 containerd[1789]: time="2025-02-13T20:48:37.800642442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:48:37.801201 containerd[1789]: time="2025-02-13T20:48:37.800683743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:48:37.801201 containerd[1789]: time="2025-02-13T20:48:37.800696543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:48:37.801201 containerd[1789]: time="2025-02-13T20:48:37.800771343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:48:37.859761 kubelet[3062]: W0213 20:48:37.859693 3062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-faf44fbcb5&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:37.859761 kubelet[3062]: E0213 20:48:37.859766 3062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-faf44fbcb5&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 13 20:48:37.893511 kubelet[3062]: I0213 20:48:37.893472 3062 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:37.894005 kubelet[3062]: E0213 20:48:37.893967 3062 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:37.901272 containerd[1789]: time="2025-02-13T20:48:37.900583050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-faf44fbcb5,Uid:f1ca971aca7b1884cec9bb19b84cadf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0439091ed7b70b3df317580f48b0f2c304e84ae037eaaaee2d95447dcc67a11f\"" Feb 13 20:48:37.912978 containerd[1789]: time="2025-02-13T20:48:37.911833218Z" level=info msg="CreateContainer within sandbox \"0439091ed7b70b3df317580f48b0f2c304e84ae037eaaaee2d95447dcc67a11f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:48:37.912978 containerd[1789]: time="2025-02-13T20:48:37.912704624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-faf44fbcb5,Uid:8b9f01879f02fb5e4ae2ff07cea90359,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"ca4dc5d163ab4c6532e4f04bffc5d30ca4f80eaade6ebbca053f4ece7c56fd1a\"" Feb 13 20:48:37.916232 containerd[1789]: time="2025-02-13T20:48:37.916155845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-faf44fbcb5,Uid:c40c1527ea00ae616b4ace0381bcc2f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d532bf46bb43074fee974b6ae1cad3944291d20f976eaf387e86bb0bcf2a2dcc\"" Feb 13 20:48:37.917202 containerd[1789]: time="2025-02-13T20:48:37.917161551Z" level=info msg="CreateContainer within sandbox \"ca4dc5d163ab4c6532e4f04bffc5d30ca4f80eaade6ebbca053f4ece7c56fd1a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:48:37.920813 containerd[1789]: time="2025-02-13T20:48:37.920780673Z" level=info msg="CreateContainer within sandbox \"d532bf46bb43074fee974b6ae1cad3944291d20f976eaf387e86bb0bcf2a2dcc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:48:37.988557 containerd[1789]: time="2025-02-13T20:48:37.988508974Z" level=info msg="CreateContainer within sandbox \"d532bf46bb43074fee974b6ae1cad3944291d20f976eaf387e86bb0bcf2a2dcc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"85a1c93040a70adf0cf4a7b8110af255eeb580cc695f1b829cd73ee5cecfb872\"" Feb 13 20:48:37.991905 containerd[1789]: time="2025-02-13T20:48:37.991868191Z" level=info msg="CreateContainer within sandbox \"0439091ed7b70b3df317580f48b0f2c304e84ae037eaaaee2d95447dcc67a11f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ac1814aa677f4166472d884a7d260d3d4745e67fe175216070e3bd3c9e86e79\"" Feb 13 20:48:37.992116 containerd[1789]: time="2025-02-13T20:48:37.992090792Z" level=info msg="StartContainer for \"85a1c93040a70adf0cf4a7b8110af255eeb580cc695f1b829cd73ee5cecfb872\"" Feb 13 20:48:37.997200 containerd[1789]: time="2025-02-13T20:48:37.995882711Z" level=info msg="CreateContainer within sandbox \"ca4dc5d163ab4c6532e4f04bffc5d30ca4f80eaade6ebbca053f4ece7c56fd1a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bd9489473a56c303e071dbb8f90b7d61cf3ce6a4a17dd7849f694e7468b24d35\"" Feb 13 20:48:37.997200 containerd[1789]: time="2025-02-13T20:48:37.996490714Z" level=info msg="StartContainer for \"9ac1814aa677f4166472d884a7d260d3d4745e67fe175216070e3bd3c9e86e79\"" Feb 13 20:48:38.004169 containerd[1789]: time="2025-02-13T20:48:38.004137053Z" level=info msg="StartContainer for \"bd9489473a56c303e071dbb8f90b7d61cf3ce6a4a17dd7849f694e7468b24d35\"" Feb 13 20:48:38.125788 containerd[1789]: time="2025-02-13T20:48:38.125668368Z" level=info msg="StartContainer for \"85a1c93040a70adf0cf4a7b8110af255eeb580cc695f1b829cd73ee5cecfb872\" returns successfully" Feb 13 20:48:38.162576 containerd[1789]: time="2025-02-13T20:48:38.162534454Z" level=info msg="StartContainer for \"bd9489473a56c303e071dbb8f90b7d61cf3ce6a4a17dd7849f694e7468b24d35\" returns successfully" Feb 13 20:48:38.197204 containerd[1789]: time="2025-02-13T20:48:38.196533526Z" level=info msg="StartContainer for \"9ac1814aa677f4166472d884a7d260d3d4745e67fe175216070e3bd3c9e86e79\" returns successfully" Feb 13 20:48:39.499437 kubelet[3062]: I0213 20:48:39.499400 3062 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:39.887584 kubelet[3062]: E0213 20:48:39.887464 3062 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-a-faf44fbcb5\" not found" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 
20:48:40.052339 kubelet[3062]: I0213 20:48:40.052300 3062 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:40.133212 kubelet[3062]: E0213 20:48:40.132533 3062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-faf44fbcb5\" not found" Feb 13 20:48:40.276302 kubelet[3062]: I0213 20:48:40.276257 3062 apiserver.go:52] "Watching apiserver" Feb 13 20:48:40.290321 kubelet[3062]: I0213 20:48:40.290252 3062 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:48:41.902020 systemd[1]: Reloading requested from client PID 3327 ('systemctl') (unit session-9.scope)... Feb 13 20:48:41.902036 systemd[1]: Reloading... Feb 13 20:48:41.995209 zram_generator::config[3368]: No configuration found. Feb 13 20:48:42.119739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:48:42.200792 systemd[1]: Reloading finished in 298 ms. Feb 13 20:48:42.231920 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:48:42.232679 kubelet[3062]: I0213 20:48:42.232031 3062 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:48:42.248570 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:48:42.249022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:48:42.255598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:48:42.457367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:48:42.467701 (kubelet)[3444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:48:42.519480 kubelet[3444]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:48:42.519480 kubelet[3444]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:48:42.519480 kubelet[3444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:48:42.519989 kubelet[3444]: I0213 20:48:42.519577 3444 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:48:42.523976 kubelet[3444]: I0213 20:48:42.523947 3444 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:48:42.523976 kubelet[3444]: I0213 20:48:42.523969 3444 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:48:42.524292 kubelet[3444]: I0213 20:48:42.524272 3444 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:48:42.529014 kubelet[3444]: I0213 20:48:42.528518 3444 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
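Up to this point every list/watch from kubelet[3062] and each lease check fails with the same dial error against 10.200.8.38:6443, and the "Failed to ensure lease exists, will retry" interval doubles from 400ms to 800ms to 1.6s while the control-plane static pods are still coming up. A minimal Go sketch of that probe-and-backoff pattern, using the endpoint from the log; this is illustrative only, not kubelet source, and the 7s cap is an assumption for the sketch:

```go
// Illustrative sketch (not kubelet source): dial the apiserver endpoint from
// the log and double the retry interval on failure, mirroring the lease
// controller's "will retry" intervals above (400ms -> 800ms -> 1.6s).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.200.8.38:6443" // apiserver address seen in the log
	interval := 400 * time.Millisecond
	for attempt := 1; ; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("apiserver reachable after %d attempt(s)\n", attempt)
			return
		}
		// Matches the repeated "dial tcp ... connect: connection refused" lines.
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2
		if interval > 7*time.Second { // cap is an assumption, not from the log
			interval = 7 * time.Second
		}
	}
}
```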
Feb 13 20:48:42.532746 kubelet[3444]: I0213 20:48:42.532582 3444 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:48:42.540942 kubelet[3444]: I0213 20:48:42.540913 3444 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:48:42.541773 kubelet[3444]: I0213 20:48:42.541730 3444 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:48:42.542105 kubelet[3444]: I0213 20:48:42.541882 3444 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-faf44fbcb5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:48:42.542326 kubelet[3444]: I0213 20:48:42.542311 3444 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:48:42.542458 kubelet[3444]: I0213 20:48:42.542398 3444 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:48:42.542458 kubelet[3444]: I0213 20:48:42.542458 3444 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:48:42.542590 kubelet[3444]: I0213 20:48:42.542580 3444 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:48:42.542648 kubelet[3444]: I0213 20:48:42.542598 3444 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:48:42.542870 kubelet[3444]: I0213 20:48:42.542797 3444 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:48:42.542870 kubelet[3444]: I0213 20:48:42.542824 3444 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:48:42.545297 kubelet[3444]: I0213 20:48:42.545276 3444 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:48:42.545484 kubelet[3444]: I0213 20:48:42.545458 3444 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:48:42.545926 kubelet[3444]: I0213 20:48:42.545900 3444 server.go:1264] "Started kubelet" Feb 13 20:48:42.549516 
kubelet[3444]: I0213 20:48:42.549482 3444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:48:42.556836 kubelet[3444]: I0213 20:48:42.556461 3444 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:48:42.557564 kubelet[3444]: I0213 20:48:42.557545 3444 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:48:42.558676 kubelet[3444]: I0213 20:48:42.558625 3444 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:48:42.558862 kubelet[3444]: I0213 20:48:42.558845 3444 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:48:42.563284 kubelet[3444]: I0213 20:48:42.561640 3444 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:48:42.563614 kubelet[3444]: I0213 20:48:42.563598 3444 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:48:42.563748 kubelet[3444]: I0213 20:48:42.563737 3444 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:48:42.568197 kubelet[3444]: I0213 20:48:42.565749 3444 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:48:42.568197 kubelet[3444]: I0213 20:48:42.566875 3444 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:48:42.568197 kubelet[3444]: I0213 20:48:42.566898 3444 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:48:42.568197 kubelet[3444]: I0213 20:48:42.566917 3444 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:48:42.568197 kubelet[3444]: E0213 20:48:42.566961 3444 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:48:42.584244 kubelet[3444]: I0213 20:48:42.584227 3444 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:48:42.584379 kubelet[3444]: I0213 20:48:42.584365 3444 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:48:42.585054 kubelet[3444]: I0213 20:48:42.584510 3444 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:48:42.585788 kubelet[3444]: E0213 20:48:42.585722 3444 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:48:42.646607 kubelet[3444]: I0213 20:48:42.646147 3444 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:48:42.646607 kubelet[3444]: I0213 20:48:42.646271 3444 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:48:42.646607 kubelet[3444]: I0213 20:48:42.646295 3444 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:48:42.647455 kubelet[3444]: I0213 20:48:42.646701 3444 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:48:42.647455 kubelet[3444]: I0213 20:48:42.646718 3444 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:48:42.647455 kubelet[3444]: I0213 20:48:42.646743 3444 policy_none.go:49] "None policy: Start" Feb 13 20:48:42.648264 kubelet[3444]: I0213 20:48:42.648163 3444 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:48:42.648264 kubelet[3444]: I0213 20:48:42.648197 3444 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:48:42.648854 kubelet[3444]: I0213 20:48:42.648356 3444 state_mem.go:75] "Updated machine memory state" Feb 13 20:48:42.651206 kubelet[3444]: I0213 20:48:42.650220 3444 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:48:42.651206 kubelet[3444]: I0213 20:48:42.650410 3444 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:48:42.651206 kubelet[3444]: I0213 20:48:42.650510 3444 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:48:42.668168 kubelet[3444]: I0213 20:48:42.668084 3444 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.668600 kubelet[3444]: I0213 20:48:42.668450 3444 topology_manager.go:215] "Topology Admit Handler" podUID="8b9f01879f02fb5e4ae2ff07cea90359" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.669917 kubelet[3444]: I0213 20:48:42.669889 3444 topology_manager.go:215] "Topology Admit Handler" podUID="c40c1527ea00ae616b4ace0381bcc2f4" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.671366 kubelet[3444]: I0213 20:48:42.671339 3444 topology_manager.go:215] "Topology Admit Handler" podUID="f1ca971aca7b1884cec9bb19b84cadf6" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.928726 kubelet[3444]: I0213 20:48:42.924323 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c40c1527ea00ae616b4ace0381bcc2f4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-faf44fbcb5\" (UID: \"c40c1527ea00ae616b4ace0381bcc2f4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.928726 kubelet[3444]: I0213 20:48:42.924372 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b9f01879f02fb5e4ae2ff07cea90359-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-faf44fbcb5\" (UID: \"8b9f01879f02fb5e4ae2ff07cea90359\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.928726 kubelet[3444]: I0213 20:48:42.924401 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c40c1527ea00ae616b4ace0381bcc2f4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-faf44fbcb5\" (UID: \"c40c1527ea00ae616b4ace0381bcc2f4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.928726 kubelet[3444]: I0213 20:48:42.924456 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c40c1527ea00ae616b4ace0381bcc2f4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-faf44fbcb5\" (UID: \"c40c1527ea00ae616b4ace0381bcc2f4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.928726 kubelet[3444]: I0213 20:48:42.924488 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c40c1527ea00ae616b4ace0381bcc2f4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-faf44fbcb5\" (UID: \"c40c1527ea00ae616b4ace0381bcc2f4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.929038 kubelet[3444]: I0213 20:48:42.924548 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b9f01879f02fb5e4ae2ff07cea90359-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-faf44fbcb5\" (UID: \"8b9f01879f02fb5e4ae2ff07cea90359\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.929038 kubelet[3444]: I0213 20:48:42.924588 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b9f01879f02fb5e4ae2ff07cea90359-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-faf44fbcb5\" (UID: \"8b9f01879f02fb5e4ae2ff07cea90359\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.929038 kubelet[3444]: I0213 20:48:42.924620 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c40c1527ea00ae616b4ace0381bcc2f4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-faf44fbcb5\" (UID: \"c40c1527ea00ae616b4ace0381bcc2f4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.929038 kubelet[3444]: I0213 20:48:42.924655 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1ca971aca7b1884cec9bb19b84cadf6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-faf44fbcb5\" (UID: \"f1ca971aca7b1884cec9bb19b84cadf6\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.929038 kubelet[3444]: I0213 20:48:42.927576 3444 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.929038 kubelet[3444]: I0213 20:48:42.927644 3444 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:48:42.938763 kubelet[3444]: W0213 20:48:42.938427 3444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:48:42.942106 kubelet[3444]: W0213 20:48:42.941872 3444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS 
label is recommended: [must not contain dots] Feb 13 20:48:42.942106 kubelet[3444]: W0213 20:48:42.942057 3444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:48:43.551801 kubelet[3444]: I0213 20:48:43.551608 3444 apiserver.go:52] "Watching apiserver" Feb 13 20:48:43.564241 kubelet[3444]: I0213 20:48:43.564191 3444 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:48:43.758878 kubelet[3444]: I0213 20:48:43.758795 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-faf44fbcb5" podStartSLOduration=1.758771359 podStartE2EDuration="1.758771359s" podCreationTimestamp="2025-02-13 20:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:48:43.725164389 +0000 UTC m=+1.253817142" watchObservedRunningTime="2025-02-13 20:48:43.758771359 +0000 UTC m=+1.287424212" Feb 13 20:48:43.794604 kubelet[3444]: I0213 20:48:43.793800 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-a-faf44fbcb5" podStartSLOduration=1.793777336 podStartE2EDuration="1.793777336s" podCreationTimestamp="2025-02-13 20:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:48:43.759646564 +0000 UTC m=+1.288299317" watchObservedRunningTime="2025-02-13 20:48:43.793777336 +0000 UTC m=+1.322430089" Feb 13 20:48:47.720746 sudo[2490]: pam_unix(sudo:session): session closed for user root Feb 13 20:48:47.821710 sshd[2486]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:47.825500 systemd[1]: sshd@6-10.200.8.38:22-10.200.16.10:55202.service: Deactivated successfully. Feb 13 20:48:47.831374 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:48:47.832232 systemd-logind[1766]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:48:47.833165 systemd-logind[1766]: Removed session 9. Feb 13 20:48:49.678817 kubelet[3444]: I0213 20:48:49.678710 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-a-faf44fbcb5" podStartSLOduration=7.678670163 podStartE2EDuration="7.678670163s" podCreationTimestamp="2025-02-13 20:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:48:43.795330144 +0000 UTC m=+1.323982897" watchObservedRunningTime="2025-02-13 20:48:49.678670163 +0000 UTC m=+7.207322916" Feb 13 20:48:57.032929 kubelet[3444]: I0213 20:48:57.032870 3444 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:48:57.035556 containerd[1789]: time="2025-02-13T20:48:57.035516669Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
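The pod_startup_latency_tracker lines above carry their own arithmetic: podStartE2EDuration is the gap between podCreationTimestamp and the watch-observed running time, and for static pods the pull timestamps stay at the zero value because no image pull is involved. A small Go check of the kube-controller-manager numbers, with both timestamps copied from the log (the layout string is Go's default time.Time format; which timestamp pair the tracker subtracts is inferred from the values here):

```go
// Verify the logged podStartE2EDuration for kube-controller-manager:
// watchObservedRunningTime minus podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time print format, as it appears in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-02-13 20:48:42 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-02-13 20:48:43.758771359 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // prints 1.758771359s, the logged value
}
```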
Feb 13 20:48:57.035983 kubelet[3444]: I0213 20:48:57.035831 3444 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:48:57.264104 kubelet[3444]: I0213 20:48:57.262195 3444 topology_manager.go:215] "Topology Admit Handler" podUID="6e73c75b-d593-4960-80fc-a70707489719" podNamespace="kube-system" podName="kube-proxy-fx9vs" Feb 13 20:48:57.326808 kubelet[3444]: I0213 20:48:57.326666 3444 topology_manager.go:215] "Topology Admit Handler" podUID="93ad41b3-0dbc-4b88-9a8c-c7f0de491716" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-wfpw5" Feb 13 20:48:57.415283 kubelet[3444]: I0213 20:48:57.415142 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxzsf\" (UniqueName: \"kubernetes.io/projected/93ad41b3-0dbc-4b88-9a8c-c7f0de491716-kube-api-access-lxzsf\") pod \"tigera-operator-7bc55997bb-wfpw5\" (UID: \"93ad41b3-0dbc-4b88-9a8c-c7f0de491716\") " pod="tigera-operator/tigera-operator-7bc55997bb-wfpw5" Feb 13 20:48:57.415540 kubelet[3444]: I0213 20:48:57.415336 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e73c75b-d593-4960-80fc-a70707489719-xtables-lock\") pod \"kube-proxy-fx9vs\" (UID: \"6e73c75b-d593-4960-80fc-a70707489719\") " pod="kube-system/kube-proxy-fx9vs" Feb 13 20:48:57.415540 kubelet[3444]: I0213 20:48:57.415375 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvfr\" (UniqueName: \"kubernetes.io/projected/6e73c75b-d593-4960-80fc-a70707489719-kube-api-access-tmvfr\") pod \"kube-proxy-fx9vs\" (UID: \"6e73c75b-d593-4960-80fc-a70707489719\") " pod="kube-system/kube-proxy-fx9vs" Feb 13 20:48:57.415540 kubelet[3444]: I0213 20:48:57.415469 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93ad41b3-0dbc-4b88-9a8c-c7f0de491716-var-lib-calico\") pod \"tigera-operator-7bc55997bb-wfpw5\" (UID: \"93ad41b3-0dbc-4b88-9a8c-c7f0de491716\") " pod="tigera-operator/tigera-operator-7bc55997bb-wfpw5" Feb 13 20:48:57.415751 kubelet[3444]: I0213 20:48:57.415539 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e73c75b-d593-4960-80fc-a70707489719-lib-modules\") pod \"kube-proxy-fx9vs\" (UID: \"6e73c75b-d593-4960-80fc-a70707489719\") " pod="kube-system/kube-proxy-fx9vs" Feb 13 20:48:57.415751 kubelet[3444]: I0213 20:48:57.415568 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e73c75b-d593-4960-80fc-a70707489719-kube-proxy\") pod \"kube-proxy-fx9vs\" (UID: \"6e73c75b-d593-4960-80fc-a70707489719\") " pod="kube-system/kube-proxy-fx9vs" Feb 13 20:48:57.569372 containerd[1789]: time="2025-02-13T20:48:57.569329877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fx9vs,Uid:6e73c75b-d593-4960-80fc-a70707489719,Namespace:kube-system,Attempt:0,}" Feb 13 20:48:57.608048 containerd[1789]: time="2025-02-13T20:48:57.607828342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:48:57.608048 containerd[1789]: time="2025-02-13T20:48:57.607880242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:48:57.608048 containerd[1789]: time="2025-02-13T20:48:57.607896342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:48:57.608426 containerd[1789]: time="2025-02-13T20:48:57.607983043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:48:57.634326 containerd[1789]: time="2025-02-13T20:48:57.634263187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-wfpw5,Uid:93ad41b3-0dbc-4b88-9a8c-c7f0de491716,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:48:57.647945 containerd[1789]: time="2025-02-13T20:48:57.647900511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fx9vs,Uid:6e73c75b-d593-4960-80fc-a70707489719,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7ce4311bdf2b80dbe5acfb6ce45a3ba94c13e90c9ade36bc457d36b1e027fb3\"" Feb 13 20:48:57.651200 containerd[1789]: time="2025-02-13T20:48:57.651161016Z" level=info msg="CreateContainer within sandbox \"c7ce4311bdf2b80dbe5acfb6ce45a3ba94c13e90c9ade36bc457d36b1e027fb3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:48:57.688403 containerd[1789]: time="2025-02-13T20:48:57.688230379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:48:57.688403 containerd[1789]: time="2025-02-13T20:48:57.688309879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:48:57.688403 containerd[1789]: time="2025-02-13T20:48:57.688332479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:48:57.688645 containerd[1789]: time="2025-02-13T20:48:57.688454880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:48:57.698465 containerd[1789]: time="2025-02-13T20:48:57.698395996Z" level=info msg="CreateContainer within sandbox \"c7ce4311bdf2b80dbe5acfb6ce45a3ba94c13e90c9ade36bc457d36b1e027fb3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d8321c257c774333b300c3f8f41405d802ddd879ce67661aa88ffa21dd72125e\"" Feb 13 20:48:57.700272 containerd[1789]: time="2025-02-13T20:48:57.699367298Z" level=info msg="StartContainer for \"d8321c257c774333b300c3f8f41405d802ddd879ce67661aa88ffa21dd72125e\"" Feb 13 20:48:57.764560 containerd[1789]: time="2025-02-13T20:48:57.764495309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-wfpw5,Uid:93ad41b3-0dbc-4b88-9a8c-c7f0de491716,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"daf89cc60e524d59e18cb5d2be7db24d4a8bfda79ae70102feb0655a2be6af63\"" Feb 13 20:48:57.768614 containerd[1789]: time="2025-02-13T20:48:57.768576016Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:48:57.776291 containerd[1789]: time="2025-02-13T20:48:57.774551726Z" level=info msg="StartContainer for \"d8321c257c774333b300c3f8f41405d802ddd879ce67661aa88ffa21dd72125e\" returns successfully" Feb 13 20:49:02.075764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731846311.mount: Deactivated successfully. Feb 13 20:49:02.722038 containerd[1789]: time="2025-02-13T20:49:02.721964962Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:02.724051 containerd[1789]: time="2025-02-13T20:49:02.723990673Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:49:02.727126 containerd[1789]: time="2025-02-13T20:49:02.727075791Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:02.730468 containerd[1789]: time="2025-02-13T20:49:02.730420610Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:02.731176 containerd[1789]: time="2025-02-13T20:49:02.731136914Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.962521998s" Feb 13 20:49:02.731281 containerd[1789]: time="2025-02-13T20:49:02.731194514Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:49:02.733924 containerd[1789]: time="2025-02-13T20:49:02.733798429Z" level=info msg="CreateContainer within sandbox \"daf89cc60e524d59e18cb5d2be7db24d4a8bfda79ae70102feb0655a2be6af63\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:49:02.767102 containerd[1789]: time="2025-02-13T20:49:02.767057219Z" level=info msg="CreateContainer within sandbox \"daf89cc60e524d59e18cb5d2be7db24d4a8bfda79ae70102feb0655a2be6af63\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"63ac12ee10a3381e7e2209732ee9b3eb1b288d098842e8fffaeb5e871b54ecb0\"" Feb 13 20:49:02.768025 containerd[1789]: time="2025-02-13T20:49:02.767777923Z" level=info msg="StartContainer for \"63ac12ee10a3381e7e2209732ee9b3eb1b288d098842e8fffaeb5e871b54ecb0\"" Feb 13 20:49:02.831220 containerd[1789]: time="2025-02-13T20:49:02.830673983Z" level=info msg="StartContainer for \"63ac12ee10a3381e7e2209732ee9b3eb1b288d098842e8fffaeb5e871b54ecb0\" returns successfully" Feb 13 20:49:03.664499 kubelet[3444]: I0213 20:49:03.664033 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fx9vs" podStartSLOduration=6.664012742 podStartE2EDuration="6.664012742s" podCreationTimestamp="2025-02-13 20:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:48:58.647115911 +0000 UTC m=+16.175768664" watchObservedRunningTime="2025-02-13 20:49:03.664012742 +0000 UTC m=+21.192665495" Feb 13 20:49:03.664499 kubelet[3444]: I0213 20:49:03.664224 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-wfpw5" podStartSLOduration=1.698431936 podStartE2EDuration="6.664212244s" podCreationTimestamp="2025-02-13 20:48:57 +0000 UTC" firstStartedPulling="2025-02-13 20:48:57.766466012 +0000 UTC m=+15.295118765" lastFinishedPulling="2025-02-13 20:49:02.73224632 +0000 UTC m=+20.260899073" observedRunningTime="2025-02-13 20:49:03.663932642 +0000 UTC m=+21.192585495" watchObservedRunningTime="2025-02-13 20:49:03.664212244 +0000 UTC m=+21.192864997" Feb 13 20:49:05.921634 kubelet[3444]: I0213 20:49:05.921588 3444 topology_manager.go:215] "Topology Admit Handler" podUID="4b463ba5-ce08-45b1-9c78-ced07cf57e78" podNamespace="calico-system" podName="calico-typha-65b56bf5c5-bw7sn" Feb 13 20:49:06.074444 kubelet[3444]: I0213 20:49:06.074254 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b463ba5-ce08-45b1-9c78-ced07cf57e78-tigera-ca-bundle\") pod \"calico-typha-65b56bf5c5-bw7sn\" (UID: \"4b463ba5-ce08-45b1-9c78-ced07cf57e78\") " pod="calico-system/calico-typha-65b56bf5c5-bw7sn" Feb 13 20:49:06.074444 kubelet[3444]: I0213 20:49:06.074316 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4b463ba5-ce08-45b1-9c78-ced07cf57e78-typha-certs\") pod \"calico-typha-65b56bf5c5-bw7sn\" (UID: \"4b463ba5-ce08-45b1-9c78-ced07cf57e78\") " pod="calico-system/calico-typha-65b56bf5c5-bw7sn" Feb 13 20:49:06.074444 kubelet[3444]: I0213 20:49:06.074349 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mp5r\" (UniqueName: \"kubernetes.io/projected/4b463ba5-ce08-45b1-9c78-ced07cf57e78-kube-api-access-5mp5r\") pod \"calico-typha-65b56bf5c5-bw7sn\" (UID: \"4b463ba5-ce08-45b1-9c78-ced07cf57e78\") " pod="calico-system/calico-typha-65b56bf5c5-bw7sn" Feb 13 20:49:06.118007 kubelet[3444]: I0213 20:49:06.116777 3444 topology_manager.go:215] "Topology Admit Handler" podUID="7cf7a167-1930-4b49-b972-ccae95985864" podNamespace="calico-system" podName="calico-node-p94ht" Feb 13 20:49:06.242375 containerd[1789]: time="2025-02-13T20:49:06.242243269Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-65b56bf5c5-bw7sn,Uid:4b463ba5-ce08-45b1-9c78-ced07cf57e78,Namespace:calico-system,Attempt:0,}" Feb 13 20:49:06.251413 kubelet[3444]: I0213 20:49:06.250984 3444 topology_manager.go:215] "Topology Admit Handler" podUID="bdbc6e37-5802-45c9-b35d-a25b6e25224b" podNamespace="calico-system" podName="csi-node-driver-rnpcs" Feb 13 20:49:06.251413 kubelet[3444]: E0213 20:49:06.251399 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnpcs" podUID="bdbc6e37-5802-45c9-b35d-a25b6e25224b" Feb 13 20:49:06.277932 kubelet[3444]: I0213 20:49:06.275794 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7cf7a167-1930-4b49-b972-ccae95985864-cni-net-dir\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.277932 kubelet[3444]: I0213 20:49:06.275838 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6rq\" (UniqueName: \"kubernetes.io/projected/7cf7a167-1930-4b49-b972-ccae95985864-kube-api-access-zg6rq\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.277932 kubelet[3444]: I0213 20:49:06.275865 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cf7a167-1930-4b49-b972-ccae95985864-tigera-ca-bundle\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.277932 kubelet[3444]: I0213 20:49:06.275889 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7cf7a167-1930-4b49-b972-ccae95985864-var-lib-calico\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.277932 kubelet[3444]: I0213 20:49:06.275915 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7cf7a167-1930-4b49-b972-ccae95985864-node-certs\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.278287 kubelet[3444]: I0213 20:49:06.275938 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7cf7a167-1930-4b49-b972-ccae95985864-cni-bin-dir\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.278287 kubelet[3444]: I0213 20:49:06.275958 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7cf7a167-1930-4b49-b972-ccae95985864-cni-log-dir\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.278287 kubelet[3444]: I0213 20:49:06.275983 3444 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7cf7a167-1930-4b49-b972-ccae95985864-flexvol-driver-host\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.278287 kubelet[3444]: I0213 20:49:06.276003 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7cf7a167-1930-4b49-b972-ccae95985864-policysync\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.278287 kubelet[3444]: I0213 20:49:06.276043 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7cf7a167-1930-4b49-b972-ccae95985864-var-run-calico\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.278487 kubelet[3444]: I0213 20:49:06.276068 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cf7a167-1930-4b49-b972-ccae95985864-xtables-lock\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.278487 kubelet[3444]: I0213 20:49:06.276090 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cf7a167-1930-4b49-b972-ccae95985864-lib-modules\") pod \"calico-node-p94ht\" (UID: \"7cf7a167-1930-4b49-b972-ccae95985864\") " pod="calico-system/calico-node-p94ht" Feb 13 20:49:06.300821 containerd[1789]: time="2025-02-13T20:49:06.298466490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:49:06.300821 containerd[1789]: time="2025-02-13T20:49:06.298536090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:49:06.300821 containerd[1789]: time="2025-02-13T20:49:06.298550990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:06.300821 containerd[1789]: time="2025-02-13T20:49:06.298645391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:06.377094 kubelet[3444]: I0213 20:49:06.376542 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bdbc6e37-5802-45c9-b35d-a25b6e25224b-registration-dir\") pod \"csi-node-driver-rnpcs\" (UID: \"bdbc6e37-5802-45c9-b35d-a25b6e25224b\") " pod="calico-system/csi-node-driver-rnpcs" Feb 13 20:49:06.377094 kubelet[3444]: I0213 20:49:06.376584 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg669\" (UniqueName: \"kubernetes.io/projected/bdbc6e37-5802-45c9-b35d-a25b6e25224b-kube-api-access-zg669\") pod \"csi-node-driver-rnpcs\" (UID: \"bdbc6e37-5802-45c9-b35d-a25b6e25224b\") " pod="calico-system/csi-node-driver-rnpcs" Feb 13 20:49:06.377094 kubelet[3444]: I0213 20:49:06.376629 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bdbc6e37-5802-45c9-b35d-a25b6e25224b-kubelet-dir\") pod \"csi-node-driver-rnpcs\" (UID: \"bdbc6e37-5802-45c9-b35d-a25b6e25224b\") " pod="calico-system/csi-node-driver-rnpcs" Feb 13 20:49:06.377094 kubelet[3444]: I0213 20:49:06.376737 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bdbc6e37-5802-45c9-b35d-a25b6e25224b-varrun\") pod \"csi-node-driver-rnpcs\" (UID: \"bdbc6e37-5802-45c9-b35d-a25b6e25224b\") " pod="calico-system/csi-node-driver-rnpcs" Feb 13 20:49:06.377094 kubelet[3444]: I0213 20:49:06.376783 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bdbc6e37-5802-45c9-b35d-a25b6e25224b-socket-dir\") pod \"csi-node-driver-rnpcs\" (UID: \"bdbc6e37-5802-45c9-b35d-a25b6e25224b\") " pod="calico-system/csi-node-driver-rnpcs" Feb 13 20:49:06.384157 kubelet[3444]: E0213 20:49:06.382554 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.384157 kubelet[3444]: W0213 20:49:06.383126 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.384157 kubelet[3444]: E0213 20:49:06.383162 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.386406 kubelet[3444]: E0213 20:49:06.386123 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.386406 kubelet[3444]: W0213 20:49:06.386145 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.386406 kubelet[3444]: E0213 20:49:06.386165 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:06.390064 kubelet[3444]: E0213 20:49:06.388066 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.390064 kubelet[3444]: W0213 20:49:06.388082 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.390064 kubelet[3444]: E0213 20:49:06.389218 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.390064 kubelet[3444]: E0213 20:49:06.389837 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.390064 kubelet[3444]: W0213 20:49:06.389850 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.390064 kubelet[3444]: E0213 20:49:06.389864 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.391867 kubelet[3444]: E0213 20:49:06.391740 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.392978 kubelet[3444]: W0213 20:49:06.391760 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.392978 kubelet[3444]: E0213 20:49:06.392696 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.394905 kubelet[3444]: E0213 20:49:06.394860 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.394905 kubelet[3444]: W0213 20:49:06.394878 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.395931 kubelet[3444]: E0213 20:49:06.394896 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.397369 kubelet[3444]: E0213 20:49:06.397216 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.397369 kubelet[3444]: W0213 20:49:06.397235 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.397369 kubelet[3444]: E0213 20:49:06.397249 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:06.398664 kubelet[3444]: E0213 20:49:06.398647 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.398664 kubelet[3444]: W0213 20:49:06.398664 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.398834 kubelet[3444]: E0213 20:49:06.398678 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.401659 kubelet[3444]: E0213 20:49:06.401288 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.401659 kubelet[3444]: W0213 20:49:06.401304 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.401659 kubelet[3444]: E0213 20:49:06.401319 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.403235 containerd[1789]: time="2025-02-13T20:49:06.403091988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65b56bf5c5-bw7sn,Uid:4b463ba5-ce08-45b1-9c78-ced07cf57e78,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4b7f7deb21c9754a0ad967cc0b3a89f52ae868c06b689e9014fe27a8821ad33\"" Feb 13 20:49:06.406736 containerd[1789]: time="2025-02-13T20:49:06.406329206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:49:06.430538 containerd[1789]: time="2025-02-13T20:49:06.429727340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p94ht,Uid:7cf7a167-1930-4b49-b972-ccae95985864,Namespace:calico-system,Attempt:0,}" Feb 13 20:49:06.477831 kubelet[3444]: E0213 20:49:06.477793 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.477831 kubelet[3444]: W0213 20:49:06.477818 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.478020 kubelet[3444]: E0213 20:49:06.477840 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.478424 kubelet[3444]: E0213 20:49:06.478120 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.478424 kubelet[3444]: W0213 20:49:06.478132 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.478424 kubelet[3444]: E0213 20:49:06.478154 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:06.478424 kubelet[3444]: E0213 20:49:06.478414 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.478773 kubelet[3444]: W0213 20:49:06.478452 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.478773 kubelet[3444]: E0213 20:49:06.478468 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.479008 kubelet[3444]: E0213 20:49:06.478915 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.479008 kubelet[3444]: W0213 20:49:06.478928 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.479008 kubelet[3444]: E0213 20:49:06.478956 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.479451 kubelet[3444]: E0213 20:49:06.479358 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.479451 kubelet[3444]: W0213 20:49:06.479370 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.479451 kubelet[3444]: E0213 20:49:06.479394 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.479975 kubelet[3444]: E0213 20:49:06.479881 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.479975 kubelet[3444]: W0213 20:49:06.479895 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.479975 kubelet[3444]: E0213 20:49:06.479921 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.480586 kubelet[3444]: E0213 20:49:06.480438 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.480586 kubelet[3444]: W0213 20:49:06.480453 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.480586 kubelet[3444]: E0213 20:49:06.480477 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:06.481657 kubelet[3444]: E0213 20:49:06.480763 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.481657 kubelet[3444]: W0213 20:49:06.480776 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.481657 kubelet[3444]: E0213 20:49:06.480816 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.481657 kubelet[3444]: E0213 20:49:06.481039 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.481657 kubelet[3444]: W0213 20:49:06.481048 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.481657 kubelet[3444]: E0213 20:49:06.481066 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.481657 kubelet[3444]: E0213 20:49:06.481304 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.481657 kubelet[3444]: W0213 20:49:06.481315 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.481657 kubelet[3444]: E0213 20:49:06.481336 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.481657 kubelet[3444]: E0213 20:49:06.481644 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.482096 kubelet[3444]: W0213 20:49:06.481660 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.482096 kubelet[3444]: E0213 20:49:06.481696 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.482096 kubelet[3444]: E0213 20:49:06.481945 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.482096 kubelet[3444]: W0213 20:49:06.481956 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.482096 kubelet[3444]: E0213 20:49:06.481975 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:06.482526 kubelet[3444]: E0213 20:49:06.482503 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.482526 kubelet[3444]: W0213 20:49:06.482526 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.482675 kubelet[3444]: E0213 20:49:06.482656 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.482861 kubelet[3444]: E0213 20:49:06.482829 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.482914 kubelet[3444]: W0213 20:49:06.482874 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.482914 kubelet[3444]: E0213 20:49:06.482896 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.483294 kubelet[3444]: E0213 20:49:06.483277 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.483294 kubelet[3444]: W0213 20:49:06.483293 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.483408 kubelet[3444]: E0213 20:49:06.483310 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.483560 kubelet[3444]: E0213 20:49:06.483543 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.483560 kubelet[3444]: W0213 20:49:06.483560 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.483747 kubelet[3444]: E0213 20:49:06.483728 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.483997 kubelet[3444]: E0213 20:49:06.483826 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.483997 kubelet[3444]: W0213 20:49:06.483837 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.483997 kubelet[3444]: E0213 20:49:06.483859 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:06.484248 kubelet[3444]: E0213 20:49:06.484230 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.484248 kubelet[3444]: W0213 20:49:06.484247 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.484352 kubelet[3444]: E0213 20:49:06.484278 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.484626 kubelet[3444]: E0213 20:49:06.484609 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.484694 kubelet[3444]: W0213 20:49:06.484642 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.484694 kubelet[3444]: E0213 20:49:06.484667 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.485207 kubelet[3444]: E0213 20:49:06.484873 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.485207 kubelet[3444]: W0213 20:49:06.484903 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.485207 kubelet[3444]: E0213 20:49:06.484925 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.485207 kubelet[3444]: E0213 20:49:06.485122 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.485207 kubelet[3444]: W0213 20:49:06.485131 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.485207 kubelet[3444]: E0213 20:49:06.485205 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.485494 kubelet[3444]: E0213 20:49:06.485368 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.485494 kubelet[3444]: W0213 20:49:06.485377 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.485494 kubelet[3444]: E0213 20:49:06.485475 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:06.485632 kubelet[3444]: E0213 20:49:06.485623 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.485632 kubelet[3444]: W0213 20:49:06.485631 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.485713 kubelet[3444]: E0213 20:49:06.485645 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.486040 kubelet[3444]: E0213 20:49:06.485859 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.486040 kubelet[3444]: W0213 20:49:06.485872 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.486040 kubelet[3444]: E0213 20:49:06.485898 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.486363 containerd[1789]: time="2025-02-13T20:49:06.486276963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:49:06.487410 kubelet[3444]: E0213 20:49:06.486494 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.487410 kubelet[3444]: W0213 20:49:06.486506 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.487410 kubelet[3444]: E0213 20:49:06.486519 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:06.487609 containerd[1789]: time="2025-02-13T20:49:06.487410769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:49:06.487609 containerd[1789]: time="2025-02-13T20:49:06.487452869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:06.487814 containerd[1789]: time="2025-02-13T20:49:06.487698371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:06.496839 kubelet[3444]: E0213 20:49:06.496720 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:06.496839 kubelet[3444]: W0213 20:49:06.496741 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:06.496839 kubelet[3444]: E0213 20:49:06.496757 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 20:49:06.532570 containerd[1789]: time="2025-02-13T20:49:06.532525827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p94ht,Uid:7cf7a167-1930-4b49-b972-ccae95985864,Namespace:calico-system,Attempt:0,} returns sandbox id \"94ff8a60f0d02313cd2c43d6afe82921e37e12d3d8469bbb3cf8d1b1f34866e8\""
Feb 13 20:49:07.567935 kubelet[3444]: E0213 20:49:07.567872 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnpcs" podUID="bdbc6e37-5802-45c9-b35d-a25b6e25224b"
Feb 13 20:49:07.906695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667876901.mount: Deactivated successfully.
Feb 13 20:49:08.653747 containerd[1789]: time="2025-02-13T20:49:08.653699943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:49:08.656430 containerd[1789]: time="2025-02-13T20:49:08.656373658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Feb 13 20:49:08.660779 containerd[1789]: time="2025-02-13T20:49:08.660745783Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:49:08.666478 containerd[1789]: time="2025-02-13T20:49:08.666404415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:49:08.667392 containerd[1789]: time="2025-02-13T20:49:08.667079419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.260708713s"
Feb 13 20:49:08.667392 containerd[1789]: time="2025-02-13T20:49:08.667115319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Feb 13 20:49:08.668976 containerd[1789]: time="2025-02-13T20:49:08.668259026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 20:49:08.686228 containerd[1789]: time="2025-02-13T20:49:08.686172428Z" level=info msg="CreateContainer within sandbox \"c4b7f7deb21c9754a0ad967cc0b3a89f52ae868c06b689e9014fe27a8821ad33\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 20:49:08.722312 containerd[1789]: time="2025-02-13T20:49:08.722277434Z" level=info msg="CreateContainer within sandbox \"c4b7f7deb21c9754a0ad967cc0b3a89f52ae868c06b689e9014fe27a8821ad33\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"182f65e988949f8ff696f9785a02015e6a663977621f4260df9774435c8a4cd8\""
Feb 13 20:49:08.722923 containerd[1789]: time="2025-02-13T20:49:08.722845537Z" level=info msg="StartContainer for \"182f65e988949f8ff696f9785a02015e6a663977621f4260df9774435c8a4cd8\""
Feb 13 20:49:08.794979 containerd[1789]: time="2025-02-13T20:49:08.794910149Z" level=info msg="StartContainer for \"182f65e988949f8ff696f9785a02015e6a663977621f4260df9774435c8a4cd8\" returns successfully"
Feb 13 20:49:09.567991 kubelet[3444]: E0213 20:49:09.567925 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnpcs" podUID="bdbc6e37-5802-45c9-b35d-a25b6e25224b"
Feb 13 20:49:09.678103 kubelet[3444]: I0213 20:49:09.678043 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65b56bf5c5-bw7sn" podStartSLOduration=2.415631871 podStartE2EDuration="4.678005893s" podCreationTimestamp="2025-02-13 20:49:05 +0000 UTC" firstStartedPulling="2025-02-13 20:49:06.405659702 +0000 UTC m=+23.934312555" lastFinishedPulling="2025-02-13 20:49:08.668033824 +0000 UTC m=+26.196686577" observedRunningTime="2025-02-13 20:49:09.677550991 +0000 UTC m=+27.206203944" watchObservedRunningTime="2025-02-13 20:49:09.678005893 +0000 UTC m=+27.206658646"
Feb 13 20:49:09.738688 kubelet[3444]: E0213 20:49:09.738651 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:49:09.738688 kubelet[3444]: W0213 20:49:09.738679 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:49:09.738688 kubelet[3444]: E0213 20:49:09.738705 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:49:09.739118 kubelet[3444]: E0213 20:49:09.738986 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:49:09.739118 kubelet[3444]: W0213 20:49:09.739000 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:49:09.739118 kubelet[3444]: E0213 20:49:09.739017 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:49:09.739389 kubelet[3444]: E0213 20:49:09.739285 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:49:09.739389 kubelet[3444]: W0213 20:49:09.739298 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:49:09.739389 kubelet[3444]: E0213 20:49:09.739316 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:49:09.739633 kubelet[3444]: E0213 20:49:09.739557 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:49:09.739633 kubelet[3444]: W0213 20:49:09.739569 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:49:09.739633 kubelet[3444]: E0213 20:49:09.739585 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:49:09.739877 kubelet[3444]: E0213 20:49:09.739830 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:49:09.739877 kubelet[3444]: W0213 20:49:09.739842 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:49:09.739877 kubelet[3444]: E0213 20:49:09.739857 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:49:09.740238 kubelet[3444]: E0213 20:49:09.740095 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:49:09.740238 kubelet[3444]: W0213 20:49:09.740109 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:49:09.740238 kubelet[3444]: E0213 20:49:09.740128 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:49:09.740377 kubelet[3444]: E0213 20:49:09.740348 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:49:09.740377 kubelet[3444]: W0213 20:49:09.740358 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:49:09.740377 kubelet[3444]: E0213 20:49:09.740372 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:49:09.740571 kubelet[3444]: E0213 20:49:09.740557 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:49:09.740571 kubelet[3444]: W0213 20:49:09.740566 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:49:09.740656 kubelet[3444]: E0213 20:49:09.740577 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 20:49:09.740787 kubelet[3444]: E0213 20:49:09.740771 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.740787 kubelet[3444]: W0213 20:49:09.740784 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.740919 kubelet[3444]: E0213 20:49:09.740796 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.741005 kubelet[3444]: E0213 20:49:09.740987 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.741005 kubelet[3444]: W0213 20:49:09.741001 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.741130 kubelet[3444]: E0213 20:49:09.741013 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.741297 kubelet[3444]: E0213 20:49:09.741209 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.741297 kubelet[3444]: W0213 20:49:09.741220 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.741297 kubelet[3444]: E0213 20:49:09.741233 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.741473 kubelet[3444]: E0213 20:49:09.741426 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.741473 kubelet[3444]: W0213 20:49:09.741436 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.741473 kubelet[3444]: E0213 20:49:09.741448 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.741644 kubelet[3444]: E0213 20:49:09.741635 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.741716 kubelet[3444]: W0213 20:49:09.741644 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.741716 kubelet[3444]: E0213 20:49:09.741655 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:09.741869 kubelet[3444]: E0213 20:49:09.741858 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.741923 kubelet[3444]: W0213 20:49:09.741870 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.741923 kubelet[3444]: E0213 20:49:09.741883 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.742090 kubelet[3444]: E0213 20:49:09.742076 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.742090 kubelet[3444]: W0213 20:49:09.742088 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.742207 kubelet[3444]: E0213 20:49:09.742100 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.813354 kubelet[3444]: E0213 20:49:09.813315 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.813354 kubelet[3444]: W0213 20:49:09.813346 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.813591 kubelet[3444]: E0213 20:49:09.813374 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.813792 kubelet[3444]: E0213 20:49:09.813768 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.813792 kubelet[3444]: W0213 20:49:09.813787 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.813999 kubelet[3444]: E0213 20:49:09.813822 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.814201 kubelet[3444]: E0213 20:49:09.814167 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.814356 kubelet[3444]: W0213 20:49:09.814201 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.814356 kubelet[3444]: E0213 20:49:09.814228 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:09.814562 kubelet[3444]: E0213 20:49:09.814543 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.814562 kubelet[3444]: W0213 20:49:09.814558 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.814747 kubelet[3444]: E0213 20:49:09.814580 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.814906 kubelet[3444]: E0213 20:49:09.814885 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.814906 kubelet[3444]: W0213 20:49:09.814902 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.815109 kubelet[3444]: E0213 20:49:09.814926 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.815278 kubelet[3444]: E0213 20:49:09.815255 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.815369 kubelet[3444]: W0213 20:49:09.815305 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.815369 kubelet[3444]: E0213 20:49:09.815332 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.815633 kubelet[3444]: E0213 20:49:09.815609 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.815633 kubelet[3444]: W0213 20:49:09.815626 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.815835 kubelet[3444]: E0213 20:49:09.815666 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.815970 kubelet[3444]: E0213 20:49:09.815937 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.815970 kubelet[3444]: W0213 20:49:09.815953 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.815970 kubelet[3444]: E0213 20:49:09.815983 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:09.816332 kubelet[3444]: E0213 20:49:09.816269 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.816332 kubelet[3444]: W0213 20:49:09.816283 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.816554 kubelet[3444]: E0213 20:49:09.816512 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.816750 kubelet[3444]: E0213 20:49:09.816597 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.816750 kubelet[3444]: W0213 20:49:09.816609 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.816750 kubelet[3444]: E0213 20:49:09.816632 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.816999 kubelet[3444]: E0213 20:49:09.816931 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.816999 kubelet[3444]: W0213 20:49:09.816961 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.816999 kubelet[3444]: E0213 20:49:09.816984 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.817332 kubelet[3444]: E0213 20:49:09.817311 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.817332 kubelet[3444]: W0213 20:49:09.817327 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.817713 kubelet[3444]: E0213 20:49:09.817349 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.817827 kubelet[3444]: E0213 20:49:09.817805 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.817827 kubelet[3444]: W0213 20:49:09.817823 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.817980 kubelet[3444]: E0213 20:49:09.817845 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:49:09.818616 kubelet[3444]: E0213 20:49:09.818137 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.818616 kubelet[3444]: W0213 20:49:09.818153 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.818616 kubelet[3444]: E0213 20:49:09.818274 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.818895 kubelet[3444]: E0213 20:49:09.818685 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.818895 kubelet[3444]: W0213 20:49:09.818698 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.819028 kubelet[3444]: E0213 20:49:09.818912 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.819028 kubelet[3444]: W0213 20:49:09.818923 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.819661 kubelet[3444]: E0213 20:49:09.819097 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.819661 kubelet[3444]: E0213 20:49:09.819544 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.819661 kubelet[3444]: E0213 20:49:09.819642 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.819661 kubelet[3444]: W0213 20:49:09.819656 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.820138 kubelet[3444]: E0213 20:49:09.819678 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:49:09.820138 kubelet[3444]: E0213 20:49:09.819907 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:49:09.820138 kubelet[3444]: W0213 20:49:09.819919 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:49:09.820138 kubelet[3444]: E0213 20:49:09.819933 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 20:49:10.059467 containerd[1789]: time="2025-02-13T20:49:10.059409864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:49:10.062208 containerd[1789]: time="2025-02-13T20:49:10.062134879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Feb 13 20:49:10.065517 containerd[1789]: time="2025-02-13T20:49:10.065453298Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:49:10.069770 containerd[1789]: time="2025-02-13T20:49:10.069649121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:49:10.070817 containerd[1789]: time="2025-02-13T20:49:10.070338125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.402042899s"
Feb 13 20:49:10.070817 containerd[1789]: time="2025-02-13T20:49:10.070380125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 20:49:10.073067 containerd[1789]: time="2025-02-13T20:49:10.073036040Z" level=info msg="CreateContainer within sandbox \"94ff8a60f0d02313cd2c43d6afe82921e37e12d3d8469bbb3cf8d1b1f34866e8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 20:49:10.119812 containerd[1789]: time="2025-02-13T20:49:10.119748098Z" level=info msg="CreateContainer within sandbox \"94ff8a60f0d02313cd2c43d6afe82921e37e12d3d8469bbb3cf8d1b1f34866e8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ceb250a80a7b91bab4ba151ae70cbad42e02b8567b92dcbaed6179a5b88be003\""
Feb 13 20:49:10.121370 containerd[1789]: time="2025-02-13T20:49:10.121075505Z" level=info msg="StartContainer for \"ceb250a80a7b91bab4ba151ae70cbad42e02b8567b92dcbaed6179a5b88be003\""
Feb 13 20:49:10.214437 containerd[1789]: time="2025-02-13T20:49:10.214394722Z" level=info msg="StartContainer for \"ceb250a80a7b91bab4ba151ae70cbad42e02b8567b92dcbaed6179a5b88be003\" returns successfully"
Feb 13 20:49:10.262627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ceb250a80a7b91bab4ba151ae70cbad42e02b8567b92dcbaed6179a5b88be003-rootfs.mount: Deactivated successfully.
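
The driver-call.go/plugins.go triplets that recur throughout this log come from kubelet's FlexVolume prober: it executes each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the argument init and parses a JSON status object from the driver's stdout. Here the nodeagent~uds directory exists but the uds binary does not yet, so the exec fails, stdout is empty, and unmarshalling an empty string yields "unexpected end of JSON input". The flexvol-driver init container started just above (from the pod2daemon-flexvol image) is what installs Calico's uds driver into that directory, after which the prober can register the plugin instead of logging these errors. A hypothetical stand-in for such a driver binary, sketching only the init handshake the prober expects (illustrative; not Calico's actual implementation):

    // Illustrative FlexVolume-style driver binary, e.g. installed as
    // .../volume/exec/nodeagent~uds/uds. kubelet invokes it with a subcommand
    // ("init", "mount", ...) and reads a JSON status object from stdout; an
    // empty stdout is exactly what produces "unexpected end of JSON input".
    package main

    import (
        "encoding/json"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // reported on "init"
    }

    func main() {
        cmd := ""
        if len(os.Args) > 1 {
            cmd = os.Args[1]
        }
        out := json.NewEncoder(os.Stdout)
        switch cmd {
        case "init":
            // The handshake kubelet's prober is waiting for.
            out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            out.Encode(driverStatus{Status: "Not supported", Message: "unimplemented: " + cmd})
            os.Exit(1)
        }
    }
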
Feb 13 20:49:10.665945 kubelet[3444]: I0213 20:49:10.665914 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:49:11.621637 kubelet[3444]: E0213 20:49:11.567384 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnpcs" podUID="bdbc6e37-5802-45c9-b35d-a25b6e25224b"
Feb 13 20:49:11.640557 containerd[1789]: time="2025-02-13T20:49:11.640467911Z" level=info msg="shim disconnected" id=ceb250a80a7b91bab4ba151ae70cbad42e02b8567b92dcbaed6179a5b88be003 namespace=k8s.io
Feb 13 20:49:11.640557 containerd[1789]: time="2025-02-13T20:49:11.640532511Z" level=warning msg="cleaning up after shim disconnected" id=ceb250a80a7b91bab4ba151ae70cbad42e02b8567b92dcbaed6179a5b88be003 namespace=k8s.io
Feb 13 20:49:11.640557 containerd[1789]: time="2025-02-13T20:49:11.640543911Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:49:11.652561 containerd[1789]: time="2025-02-13T20:49:11.652504977Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:49:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 20:49:11.670882 containerd[1789]: time="2025-02-13T20:49:11.670640778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 20:49:13.567935 kubelet[3444]: E0213 20:49:13.567875 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnpcs" podUID="bdbc6e37-5802-45c9-b35d-a25b6e25224b"
Feb 13 20:49:15.567265 kubelet[3444]: E0213 20:49:15.567217 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnpcs" podUID="bdbc6e37-5802-45c9-b35d-a25b6e25224b"
Feb 13 20:49:15.836195 containerd[1789]: time="2025-02-13T20:49:15.836055722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:49:15.838455 containerd[1789]: time="2025-02-13T20:49:15.838403135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 20:49:15.842514 containerd[1789]: time="2025-02-13T20:49:15.842471857Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:49:15.846559 containerd[1789]: time="2025-02-13T20:49:15.846510779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:49:15.847209 containerd[1789]: time="2025-02-13T20:49:15.847161283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.176470505s"
Feb 13 20:49:15.847302 containerd[1789]: time="2025-02-13T20:49:15.847214883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 20:49:15.849947 containerd[1789]: time="2025-02-13T20:49:15.849748797Z" level=info msg="CreateContainer within sandbox \"94ff8a60f0d02313cd2c43d6afe82921e37e12d3d8469bbb3cf8d1b1f34866e8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 20:49:15.878947 containerd[1789]: time="2025-02-13T20:49:15.878906759Z" level=info msg="CreateContainer within sandbox \"94ff8a60f0d02313cd2c43d6afe82921e37e12d3d8469bbb3cf8d1b1f34866e8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"14447b0c07f702506287abfbedf9ec315e98fe188ad623380e8786c573931060\""
Feb 13 20:49:15.880701 containerd[1789]: time="2025-02-13T20:49:15.879619663Z" level=info msg="StartContainer for \"14447b0c07f702506287abfbedf9ec315e98fe188ad623380e8786c573931060\""
Feb 13 20:49:15.942051 containerd[1789]: time="2025-02-13T20:49:15.942011008Z" level=info msg="StartContainer for \"14447b0c07f702506287abfbedf9ec315e98fe188ad623380e8786c573931060\" returns successfully"
Feb 13 20:49:17.408455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14447b0c07f702506287abfbedf9ec315e98fe188ad623380e8786c573931060-rootfs.mount: Deactivated successfully.
Feb 13 20:49:17.466600 kubelet[3444]: I0213 20:49:17.465977 3444 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 20:49:17.507045 kubelet[3444]: I0213 20:49:17.504748 3444 topology_manager.go:215] "Topology Admit Handler" podUID="f67249f5-a448-4596-816c-cb1a3d8e3628" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gsw2t"
Feb 13 20:49:17.508833 kubelet[3444]: I0213 20:49:17.508799 3444 topology_manager.go:215] "Topology Admit Handler" podUID="d9bf7df7-0f43-4399-a9cf-00811b424924" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8dm2w"
Feb 13 20:49:17.518134 kubelet[3444]: I0213 20:49:17.517388 3444 topology_manager.go:215] "Topology Admit Handler" podUID="29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88" podNamespace="calico-apiserver" podName="calico-apiserver-55b4cbcf97-vthz8"
Feb 13 20:49:17.525784 kubelet[3444]: I0213 20:49:17.525655 3444 topology_manager.go:215] "Topology Admit Handler" podUID="0c72f7b9-06eb-4b56-8496-0b119694b5cc" podNamespace="calico-system" podName="calico-kube-controllers-5d5ff6cc45-sfbhx"
Feb 13 20:49:17.526434 kubelet[3444]: I0213 20:49:17.526382 3444 topology_manager.go:215] "Topology Admit Handler" podUID="61879ee1-72fa-4d45-9726-2a5c594597b2" podNamespace="calico-apiserver" podName="calico-apiserver-55b4cbcf97-tcxq9"
Feb 13 20:49:17.570981 containerd[1789]: time="2025-02-13T20:49:17.570555017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnpcs,Uid:bdbc6e37-5802-45c9-b35d-a25b6e25224b,Namespace:calico-system,Attempt:0,}"
Feb 13 20:49:17.700117 kubelet[3444]: I0213 20:49:17.699967 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/61879ee1-72fa-4d45-9726-2a5c594597b2-calico-apiserver-certs\") pod \"calico-apiserver-55b4cbcf97-tcxq9\" (UID: \"61879ee1-72fa-4d45-9726-2a5c594597b2\") " pod="calico-apiserver/calico-apiserver-55b4cbcf97-tcxq9"
20:49:17.700117 kubelet[3444]: I0213 20:49:17.700029 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9bf7df7-0f43-4399-a9cf-00811b424924-config-volume\") pod \"coredns-7db6d8ff4d-8dm2w\" (UID: \"d9bf7df7-0f43-4399-a9cf-00811b424924\") " pod="kube-system/coredns-7db6d8ff4d-8dm2w" Feb 13 20:49:17.700117 kubelet[3444]: I0213 20:49:17.700107 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scm6j\" (UniqueName: \"kubernetes.io/projected/0c72f7b9-06eb-4b56-8496-0b119694b5cc-kube-api-access-scm6j\") pod \"calico-kube-controllers-5d5ff6cc45-sfbhx\" (UID: \"0c72f7b9-06eb-4b56-8496-0b119694b5cc\") " pod="calico-system/calico-kube-controllers-5d5ff6cc45-sfbhx" Feb 13 20:49:17.700848 kubelet[3444]: I0213 20:49:17.700139 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp42p\" (UniqueName: \"kubernetes.io/projected/29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88-kube-api-access-mp42p\") pod \"calico-apiserver-55b4cbcf97-vthz8\" (UID: \"29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88\") " pod="calico-apiserver/calico-apiserver-55b4cbcf97-vthz8" Feb 13 20:49:17.700848 kubelet[3444]: I0213 20:49:17.700164 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vw47\" (UniqueName: \"kubernetes.io/projected/61879ee1-72fa-4d45-9726-2a5c594597b2-kube-api-access-6vw47\") pod \"calico-apiserver-55b4cbcf97-tcxq9\" (UID: \"61879ee1-72fa-4d45-9726-2a5c594597b2\") " pod="calico-apiserver/calico-apiserver-55b4cbcf97-tcxq9" Feb 13 20:49:17.700848 kubelet[3444]: I0213 20:49:17.700210 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c72f7b9-06eb-4b56-8496-0b119694b5cc-tigera-ca-bundle\") pod \"calico-kube-controllers-5d5ff6cc45-sfbhx\" (UID: \"0c72f7b9-06eb-4b56-8496-0b119694b5cc\") " pod="calico-system/calico-kube-controllers-5d5ff6cc45-sfbhx" Feb 13 20:49:17.700848 kubelet[3444]: I0213 20:49:17.700255 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f67249f5-a448-4596-816c-cb1a3d8e3628-config-volume\") pod \"coredns-7db6d8ff4d-gsw2t\" (UID: \"f67249f5-a448-4596-816c-cb1a3d8e3628\") " pod="kube-system/coredns-7db6d8ff4d-gsw2t" Feb 13 20:49:17.700848 kubelet[3444]: I0213 20:49:17.700279 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6gzq\" (UniqueName: \"kubernetes.io/projected/f67249f5-a448-4596-816c-cb1a3d8e3628-kube-api-access-s6gzq\") pod \"coredns-7db6d8ff4d-gsw2t\" (UID: \"f67249f5-a448-4596-816c-cb1a3d8e3628\") " pod="kube-system/coredns-7db6d8ff4d-gsw2t" Feb 13 20:49:17.701046 kubelet[3444]: I0213 20:49:17.700304 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjkhb\" (UniqueName: \"kubernetes.io/projected/d9bf7df7-0f43-4399-a9cf-00811b424924-kube-api-access-rjkhb\") pod \"coredns-7db6d8ff4d-8dm2w\" (UID: \"d9bf7df7-0f43-4399-a9cf-00811b424924\") " pod="kube-system/coredns-7db6d8ff4d-8dm2w" Feb 13 20:49:17.701046 kubelet[3444]: I0213 20:49:17.700331 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88-calico-apiserver-certs\") pod \"calico-apiserver-55b4cbcf97-vthz8\" (UID: \"29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88\") " pod="calico-apiserver/calico-apiserver-55b4cbcf97-vthz8" Feb 13 20:49:19.016868 containerd[1789]: time="2025-02-13T20:49:19.016778961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gsw2t,Uid:f67249f5-a448-4596-816c-cb1a3d8e3628,Namespace:kube-system,Attempt:0,}" Feb 13 20:49:19.018958 containerd[1789]: time="2025-02-13T20:49:19.018012567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4cbcf97-vthz8,Uid:29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:49:19.018958 containerd[1789]: time="2025-02-13T20:49:19.018265269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5ff6cc45-sfbhx,Uid:0c72f7b9-06eb-4b56-8496-0b119694b5cc,Namespace:calico-system,Attempt:0,}" Feb 13 20:49:19.020707 containerd[1789]: time="2025-02-13T20:49:19.020454781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4cbcf97-tcxq9,Uid:61879ee1-72fa-4d45-9726-2a5c594597b2,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:49:19.021243 containerd[1789]: time="2025-02-13T20:49:19.021080284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dm2w,Uid:d9bf7df7-0f43-4399-a9cf-00811b424924,Namespace:kube-system,Attempt:0,}" Feb 13 20:49:19.032448 containerd[1789]: time="2025-02-13T20:49:19.032381646Z" level=info msg="shim disconnected" id=14447b0c07f702506287abfbedf9ec315e98fe188ad623380e8786c573931060 namespace=k8s.io Feb 13 20:49:19.032571 containerd[1789]: time="2025-02-13T20:49:19.032452246Z" level=warning msg="cleaning up after shim disconnected" id=14447b0c07f702506287abfbedf9ec315e98fe188ad623380e8786c573931060 namespace=k8s.io Feb 13 20:49:19.032571 containerd[1789]: time="2025-02-13T20:49:19.032464746Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:49:19.357050 containerd[1789]: time="2025-02-13T20:49:19.356027118Z" level=error msg="Failed to destroy network for sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.357426 containerd[1789]: time="2025-02-13T20:49:19.357386925Z" level=error msg="encountered an error cleaning up failed sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.357947 containerd[1789]: time="2025-02-13T20:49:19.357706327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4cbcf97-vthz8,Uid:29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.358929 kubelet[3444]: E0213 20:49:19.358435 3444 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.358929 kubelet[3444]: E0213 20:49:19.358521 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b4cbcf97-vthz8" Feb 13 20:49:19.358929 kubelet[3444]: E0213 20:49:19.358548 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b4cbcf97-vthz8" Feb 13 20:49:19.359785 kubelet[3444]: E0213 20:49:19.358606 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55b4cbcf97-vthz8_calico-apiserver(29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55b4cbcf97-vthz8_calico-apiserver(29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b4cbcf97-vthz8" podUID="29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88" Feb 13 20:49:19.377465 containerd[1789]: time="2025-02-13T20:49:19.377151633Z" level=error msg="Failed to destroy network for sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.377932 containerd[1789]: time="2025-02-13T20:49:19.377736936Z" level=error msg="encountered an error cleaning up failed sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.378066 containerd[1789]: time="2025-02-13T20:49:19.377815037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnpcs,Uid:bdbc6e37-5802-45c9-b35d-a25b6e25224b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 20:49:19.379695 kubelet[3444]: E0213 20:49:19.379248 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.379695 kubelet[3444]: E0213 20:49:19.379317 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rnpcs" Feb 13 20:49:19.379695 kubelet[3444]: E0213 20:49:19.379342 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rnpcs" Feb 13 20:49:19.379888 kubelet[3444]: E0213 20:49:19.379411 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rnpcs_calico-system(bdbc6e37-5802-45c9-b35d-a25b6e25224b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rnpcs_calico-system(bdbc6e37-5802-45c9-b35d-a25b6e25224b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rnpcs" podUID="bdbc6e37-5802-45c9-b35d-a25b6e25224b" Feb 13 20:49:19.386894 containerd[1789]: time="2025-02-13T20:49:19.386856886Z" level=error msg="Failed to destroy network for sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.387282 containerd[1789]: time="2025-02-13T20:49:19.387245089Z" level=error msg="encountered an error cleaning up failed sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.387374 containerd[1789]: time="2025-02-13T20:49:19.387307889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dm2w,Uid:d9bf7df7-0f43-4399-a9cf-00811b424924,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.387923 kubelet[3444]: E0213 20:49:19.387561 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.387923 kubelet[3444]: E0213 20:49:19.387623 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8dm2w" Feb 13 20:49:19.387923 kubelet[3444]: E0213 20:49:19.387652 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8dm2w" Feb 13 20:49:19.388103 kubelet[3444]: E0213 20:49:19.387698 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8dm2w_kube-system(d9bf7df7-0f43-4399-a9cf-00811b424924)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8dm2w_kube-system(d9bf7df7-0f43-4399-a9cf-00811b424924)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8dm2w" podUID="d9bf7df7-0f43-4399-a9cf-00811b424924" Feb 13 20:49:19.412910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522-shm.mount: Deactivated successfully. Feb 13 20:49:19.413148 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077-shm.mount: Deactivated successfully. Feb 13 20:49:19.413344 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a-shm.mount: Deactivated successfully. 
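Every one of these RunPodSandbox failures bottoms out in the same stat call: the Calico CNI binary refuses to set up (or tear down) pod networking until calico/node has written /var/lib/calico/nodename, which only happens once the calico-node container is actually running with /var/lib/calico/ mounted from the host. A minimal Go sketch of that readiness check, with the path taken verbatim from the errors above:

package main

import (
	"fmt"
	"os"
)

func main() {
	// calico/node writes this file at startup; its absence is exactly the
	// "stat /var/lib/calico/nodename: no such file or directory" error above.
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Printf("CNI not ready: %v (is the calico-node pod running?)\n", err)
		return
	}
	fmt.Printf("calico/node ready, nodename=%q\n", string(data))
}

Until that file exists, kubelet keeps retrying CreatePodSandbox with backoff, which is the loop visible in the entries that follow.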
Feb 13 20:49:19.416372 containerd[1789]: time="2025-02-13T20:49:19.414481838Z" level=error msg="Failed to destroy network for sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.416372 containerd[1789]: time="2025-02-13T20:49:19.414485138Z" level=error msg="Failed to destroy network for sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.419341 containerd[1789]: time="2025-02-13T20:49:19.419305464Z" level=error msg="Failed to destroy network for sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.422201 containerd[1789]: time="2025-02-13T20:49:19.419638766Z" level=error msg="encountered an error cleaning up failed sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.422201 containerd[1789]: time="2025-02-13T20:49:19.419670866Z" level=error msg="encountered an error cleaning up failed sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.422201 containerd[1789]: time="2025-02-13T20:49:19.419696766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5ff6cc45-sfbhx,Uid:0c72f7b9-06eb-4b56-8496-0b119694b5cc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.422201 containerd[1789]: time="2025-02-13T20:49:19.419704866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4cbcf97-tcxq9,Uid:61879ee1-72fa-4d45-9726-2a5c594597b2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.422201 containerd[1789]: time="2025-02-13T20:49:19.419867367Z" level=error msg="encountered an error cleaning up failed sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.422201 containerd[1789]: time="2025-02-13T20:49:19.419934367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gsw2t,Uid:f67249f5-a448-4596-816c-cb1a3d8e3628,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.422570 kubelet[3444]: E0213 20:49:19.419956 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.422570 kubelet[3444]: E0213 20:49:19.420017 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b4cbcf97-tcxq9" Feb 13 20:49:19.422570 kubelet[3444]: E0213 20:49:19.420041 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b4cbcf97-tcxq9" Feb 13 20:49:19.422423 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e-shm.mount: Deactivated successfully. Feb 13 20:49:19.422828 kubelet[3444]: E0213 20:49:19.420090 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55b4cbcf97-tcxq9_calico-apiserver(61879ee1-72fa-4d45-9726-2a5c594597b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55b4cbcf97-tcxq9_calico-apiserver(61879ee1-72fa-4d45-9726-2a5c594597b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b4cbcf97-tcxq9" podUID="61879ee1-72fa-4d45-9726-2a5c594597b2" Feb 13 20:49:19.422631 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76-shm.mount: Deactivated successfully. 
Feb 13 20:49:19.425218 kubelet[3444]: E0213 20:49:19.423152 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.425218 kubelet[3444]: E0213 20:49:19.424138 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gsw2t" Feb 13 20:49:19.425218 kubelet[3444]: E0213 20:49:19.424168 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gsw2t" Feb 13 20:49:19.425413 kubelet[3444]: E0213 20:49:19.424219 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gsw2t_kube-system(f67249f5-a448-4596-816c-cb1a3d8e3628)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gsw2t_kube-system(f67249f5-a448-4596-816c-cb1a3d8e3628)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gsw2t" podUID="f67249f5-a448-4596-816c-cb1a3d8e3628" Feb 13 20:49:19.425413 kubelet[3444]: E0213 20:49:19.424276 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.425413 kubelet[3444]: E0213 20:49:19.424305 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d5ff6cc45-sfbhx" Feb 13 20:49:19.425588 kubelet[3444]: E0213 20:49:19.424329 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d5ff6cc45-sfbhx" Feb 13 20:49:19.425588 kubelet[3444]: E0213 20:49:19.424363 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d5ff6cc45-sfbhx_calico-system(0c72f7b9-06eb-4b56-8496-0b119694b5cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d5ff6cc45-sfbhx_calico-system(0c72f7b9-06eb-4b56-8496-0b119694b5cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d5ff6cc45-sfbhx" podUID="0c72f7b9-06eb-4b56-8496-0b119694b5cc" Feb 13 20:49:19.427876 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2-shm.mount: Deactivated successfully. Feb 13 20:49:19.688593 kubelet[3444]: I0213 20:49:19.688267 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:19.690082 containerd[1789]: time="2025-02-13T20:49:19.689539343Z" level=info msg="StopPodSandbox for \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\"" Feb 13 20:49:19.690082 containerd[1789]: time="2025-02-13T20:49:19.689766545Z" level=info msg="Ensure that sandbox 1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522 in task-service has been cleanup successfully" Feb 13 20:49:19.690904 kubelet[3444]: I0213 20:49:19.690868 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:19.691927 containerd[1789]: time="2025-02-13T20:49:19.691885656Z" level=info msg="StopPodSandbox for \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\"" Feb 13 20:49:19.692237 containerd[1789]: time="2025-02-13T20:49:19.692205058Z" level=info msg="Ensure that sandbox 94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76 in task-service has been cleanup successfully" Feb 13 20:49:19.696212 kubelet[3444]: I0213 20:49:19.695018 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:19.696301 containerd[1789]: time="2025-02-13T20:49:19.696012979Z" level=info msg="StopPodSandbox for \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\"" Feb 13 20:49:19.697203 containerd[1789]: time="2025-02-13T20:49:19.696170780Z" level=info msg="Ensure that sandbox 6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077 in task-service has been cleanup successfully" Feb 13 20:49:19.702109 kubelet[3444]: I0213 20:49:19.702088 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:19.702730 containerd[1789]: time="2025-02-13T20:49:19.702699415Z" level=info msg="StopPodSandbox for \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\"" Feb 13 20:49:19.703023 containerd[1789]: time="2025-02-13T20:49:19.702993217Z" 
level=info msg="Ensure that sandbox 7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2 in task-service has been cleanup successfully" Feb 13 20:49:19.707803 kubelet[3444]: I0213 20:49:19.707773 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:19.708451 containerd[1789]: time="2025-02-13T20:49:19.708327546Z" level=info msg="StopPodSandbox for \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\"" Feb 13 20:49:19.708633 containerd[1789]: time="2025-02-13T20:49:19.708498147Z" level=info msg="Ensure that sandbox 0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e in task-service has been cleanup successfully" Feb 13 20:49:19.714391 kubelet[3444]: I0213 20:49:19.714357 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:19.715847 containerd[1789]: time="2025-02-13T20:49:19.715809187Z" level=info msg="StopPodSandbox for \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\"" Feb 13 20:49:19.718909 containerd[1789]: time="2025-02-13T20:49:19.717365696Z" level=info msg="Ensure that sandbox 420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a in task-service has been cleanup successfully" Feb 13 20:49:19.729783 containerd[1789]: time="2025-02-13T20:49:19.729299261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:49:19.782561 containerd[1789]: time="2025-02-13T20:49:19.782506652Z" level=error msg="StopPodSandbox for \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\" failed" error="failed to destroy network for sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.782976 kubelet[3444]: E0213 20:49:19.782752 3444 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:19.782976 kubelet[3444]: E0213 20:49:19.782817 3444 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2"} Feb 13 20:49:19.782976 kubelet[3444]: E0213 20:49:19.782905 3444 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f67249f5-a448-4596-816c-cb1a3d8e3628\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:49:19.782976 kubelet[3444]: E0213 20:49:19.782934 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"f67249f5-a448-4596-816c-cb1a3d8e3628\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gsw2t" podUID="f67249f5-a448-4596-816c-cb1a3d8e3628" Feb 13 20:49:19.807139 containerd[1789]: time="2025-02-13T20:49:19.807073287Z" level=error msg="StopPodSandbox for \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\" failed" error="failed to destroy network for sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.809093 kubelet[3444]: E0213 20:49:19.809047 3444 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:19.809249 kubelet[3444]: E0213 20:49:19.809220 3444 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522"} Feb 13 20:49:19.809316 kubelet[3444]: E0213 20:49:19.809271 3444 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9bf7df7-0f43-4399-a9cf-00811b424924\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:49:19.809745 kubelet[3444]: E0213 20:49:19.809310 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9bf7df7-0f43-4399-a9cf-00811b424924\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8dm2w" podUID="d9bf7df7-0f43-4399-a9cf-00811b424924" Feb 13 20:49:19.823176 containerd[1789]: time="2025-02-13T20:49:19.823117475Z" level=error msg="StopPodSandbox for \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\" failed" error="failed to destroy network for sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.823454 kubelet[3444]: E0213 20:49:19.823415 3444 remote_runtime.go:222] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:19.823734 kubelet[3444]: E0213 20:49:19.823468 3444 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077"} Feb 13 20:49:19.823823 kubelet[3444]: E0213 20:49:19.823762 3444 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:49:19.823823 kubelet[3444]: E0213 20:49:19.823795 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b4cbcf97-vthz8" podUID="29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88" Feb 13 20:49:19.835799 containerd[1789]: time="2025-02-13T20:49:19.835328341Z" level=error msg="StopPodSandbox for \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\" failed" error="failed to destroy network for sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.835917 kubelet[3444]: E0213 20:49:19.835562 3444 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:19.835917 kubelet[3444]: E0213 20:49:19.835628 3444 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76"} Feb 13 20:49:19.835917 kubelet[3444]: E0213 20:49:19.835664 3444 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c72f7b9-06eb-4b56-8496-0b119694b5cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:49:19.835917 kubelet[3444]: E0213 20:49:19.835692 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c72f7b9-06eb-4b56-8496-0b119694b5cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d5ff6cc45-sfbhx" podUID="0c72f7b9-06eb-4b56-8496-0b119694b5cc" Feb 13 20:49:19.837553 containerd[1789]: time="2025-02-13T20:49:19.837496153Z" level=error msg="StopPodSandbox for \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\" failed" error="failed to destroy network for sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.837924 kubelet[3444]: E0213 20:49:19.837879 3444 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:19.838049 kubelet[3444]: E0213 20:49:19.838031 3444 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e"} Feb 13 20:49:19.838238 kubelet[3444]: E0213 20:49:19.838133 3444 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61879ee1-72fa-4d45-9726-2a5c594597b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:49:19.838502 kubelet[3444]: E0213 20:49:19.838167 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61879ee1-72fa-4d45-9726-2a5c594597b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b4cbcf97-tcxq9" podUID="61879ee1-72fa-4d45-9726-2a5c594597b2" Feb 13 20:49:19.839282 containerd[1789]: time="2025-02-13T20:49:19.839246163Z" level=error msg="StopPodSandbox for \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\" failed" error="failed to destroy network for sandbox 
\"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:49:19.839411 kubelet[3444]: E0213 20:49:19.839384 3444 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:19.839474 kubelet[3444]: E0213 20:49:19.839420 3444 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a"} Feb 13 20:49:19.839474 kubelet[3444]: E0213 20:49:19.839451 3444 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdbc6e37-5802-45c9-b35d-a25b6e25224b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:49:19.839661 kubelet[3444]: E0213 20:49:19.839494 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdbc6e37-5802-45c9-b35d-a25b6e25224b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rnpcs" podUID="bdbc6e37-5802-45c9-b35d-a25b6e25224b" Feb 13 20:49:25.640469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485931051.mount: Deactivated successfully. 
Feb 13 20:49:25.687105 containerd[1789]: time="2025-02-13T20:49:25.687056875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:25.688972 containerd[1789]: time="2025-02-13T20:49:25.688894285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:49:25.692839 containerd[1789]: time="2025-02-13T20:49:25.692787606Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:25.696237 containerd[1789]: time="2025-02-13T20:49:25.696188325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:25.696909 containerd[1789]: time="2025-02-13T20:49:25.696746728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.967405967s" Feb 13 20:49:25.696909 containerd[1789]: time="2025-02-13T20:49:25.696785128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:49:25.715651 containerd[1789]: time="2025-02-13T20:49:25.715618431Z" level=info msg="CreateContainer within sandbox \"94ff8a60f0d02313cd2c43d6afe82921e37e12d3d8469bbb3cf8d1b1f34866e8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:49:25.757363 containerd[1789]: time="2025-02-13T20:49:25.757324160Z" level=info msg="CreateContainer within sandbox \"94ff8a60f0d02313cd2c43d6afe82921e37e12d3d8469bbb3cf8d1b1f34866e8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1d22313c154a3c09352f9f5b85a6ef150c1a74f895f1a0e5ce2e096c88c1e771\"" Feb 13 20:49:25.757852 containerd[1789]: time="2025-02-13T20:49:25.757755962Z" level=info msg="StartContainer for \"1d22313c154a3c09352f9f5b85a6ef150c1a74f895f1a0e5ce2e096c88c1e771\"" Feb 13 20:49:25.820207 containerd[1789]: time="2025-02-13T20:49:25.817562089Z" level=info msg="StartContainer for \"1d22313c154a3c09352f9f5b85a6ef150c1a74f895f1a0e5ce2e096c88c1e771\" returns successfully" Feb 13 20:49:25.917426 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:49:25.917555 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
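The wireguard module loading immediately after calico-node starts is expected rather than a sign that traffic encryption is on: calico-node probes the kernel for WireGuard support at startup so it can offer node-to-node encryption, and that probe pulls the module in. A quick Go check for whether the module is present, scanning /proc/modules (the same data lsmod shows):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// /proc/modules lists one loaded module per line, module name first.
	f, err := os.Open("/proc/modules")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "wireguard ") {
			fmt.Println("wireguard module loaded:", sc.Text())
			return
		}
	}
	fmt.Println("wireguard module not loaded")
}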
Feb 13 20:49:26.769088 kubelet[3444]: I0213 20:49:26.769030 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p94ht" podStartSLOduration=1.605142222 podStartE2EDuration="20.769010021s" podCreationTimestamp="2025-02-13 20:49:06 +0000 UTC" firstStartedPulling="2025-02-13 20:49:06.533878035 +0000 UTC m=+24.062530788" lastFinishedPulling="2025-02-13 20:49:25.697745634 +0000 UTC m=+43.226398587" observedRunningTime="2025-02-13 20:49:26.767755314 +0000 UTC m=+44.296408067" watchObservedRunningTime="2025-02-13 20:49:26.769010021 +0000 UTC m=+44.297662874" Feb 13 20:49:30.571205 containerd[1789]: time="2025-02-13T20:49:30.569325924Z" level=info msg="StopPodSandbox for \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\"" Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.635 [INFO][4706] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.635 [INFO][4706] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" iface="eth0" netns="/var/run/netns/cni-e629c17a-469d-7f4d-31c2-4b57573aa1d8" Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.635 [INFO][4706] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" iface="eth0" netns="/var/run/netns/cni-e629c17a-469d-7f4d-31c2-4b57573aa1d8" Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.636 [INFO][4706] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" iface="eth0" netns="/var/run/netns/cni-e629c17a-469d-7f4d-31c2-4b57573aa1d8" Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.636 [INFO][4706] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.636 [INFO][4706] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.655 [INFO][4713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" HandleID="k8s-pod-network.420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.655 [INFO][4713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.655 [INFO][4713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.660 [WARNING][4713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" HandleID="k8s-pod-network.420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.661 [INFO][4713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" HandleID="k8s-pod-network.420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.662 [INFO][4713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:30.665530 containerd[1789]: 2025-02-13 20:49:30.664 [INFO][4706] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:30.669102 containerd[1789]: time="2025-02-13T20:49:30.665894742Z" level=info msg="TearDown network for sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\" successfully" Feb 13 20:49:30.669102 containerd[1789]: time="2025-02-13T20:49:30.665928243Z" level=info msg="StopPodSandbox for \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\" returns successfully" Feb 13 20:49:30.669102 containerd[1789]: time="2025-02-13T20:49:30.668504656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnpcs,Uid:bdbc6e37-5802-45c9-b35d-a25b6e25224b,Namespace:calico-system,Attempt:1,}" Feb 13 20:49:30.671377 systemd[1]: run-netns-cni\x2de629c17a\x2d469d\x2d7f4d\x2d31c2\x2d4b57573aa1d8.mount: Deactivated successfully. Feb 13 20:49:30.801516 systemd-networkd[1365]: cali854ddb0d59c: Link UP Feb 13 20:49:30.802865 systemd-networkd[1365]: cali854ddb0d59c: Gained carrier Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.724 [INFO][4720] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.734 [INFO][4720] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0 csi-node-driver- calico-system bdbc6e37-5802-45c9-b35d-a25b6e25224b 736 0 2025-02-13 20:49:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-a-faf44fbcb5 csi-node-driver-rnpcs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali854ddb0d59c [] []}} ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Namespace="calico-system" Pod="csi-node-driver-rnpcs" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.734 [INFO][4720] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Namespace="calico-system" Pod="csi-node-driver-rnpcs" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.757 [INFO][4730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" HandleID="k8s-pod-network.4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.767 [INFO][4730] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" HandleID="k8s-pod-network.4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310b60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-faf44fbcb5", "pod":"csi-node-driver-rnpcs", "timestamp":"2025-02-13 20:49:30.757891936 +0000 UTC"}, Hostname:"ci-4081.3.1-a-faf44fbcb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.767 [INFO][4730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.767 [INFO][4730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.767 [INFO][4730] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-faf44fbcb5' Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.769 [INFO][4730] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.771 [INFO][4730] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.774 [INFO][4730] ipam/ipam.go 489: Trying affinity for 192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.776 [INFO][4730] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.777 [INFO][4730] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.777 [INFO][4730] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.778 [INFO][4730] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.783 [INFO][4730] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.790 [INFO][4730] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.129/26] block=192.168.9.128/26 handle="k8s-pod-network.4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" 
host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.790 [INFO][4730] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.129/26] handle="k8s-pod-network.4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.790 [INFO][4730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:30.820438 containerd[1789]: 2025-02-13 20:49:30.790 [INFO][4730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.129/26] IPv6=[] ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" HandleID="k8s-pod-network.4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:30.822480 containerd[1789]: 2025-02-13 20:49:30.792 [INFO][4720] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Namespace="calico-system" Pod="csi-node-driver-rnpcs" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdbc6e37-5802-45c9-b35d-a25b6e25224b", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"", Pod:"csi-node-driver-rnpcs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali854ddb0d59c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:30.822480 containerd[1789]: 2025-02-13 20:49:30.792 [INFO][4720] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.129/32] ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Namespace="calico-system" Pod="csi-node-driver-rnpcs" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:30.822480 containerd[1789]: 2025-02-13 20:49:30.792 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali854ddb0d59c ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Namespace="calico-system" Pod="csi-node-driver-rnpcs" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:30.822480 containerd[1789]: 2025-02-13 20:49:30.802 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Namespace="calico-system" Pod="csi-node-driver-rnpcs" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:30.822480 containerd[1789]: 2025-02-13 20:49:30.803 [INFO][4720] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Namespace="calico-system" Pod="csi-node-driver-rnpcs" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdbc6e37-5802-45c9-b35d-a25b6e25224b", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb", Pod:"csi-node-driver-rnpcs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali854ddb0d59c", MAC:"6a:4e:b2:1c:cd:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:30.822480 containerd[1789]: 2025-02-13 20:49:30.817 [INFO][4720] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb" Namespace="calico-system" Pod="csi-node-driver-rnpcs" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:30.842974 containerd[1789]: time="2025-02-13T20:49:30.842718092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:49:30.842974 containerd[1789]: time="2025-02-13T20:49:30.842791792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:49:30.842974 containerd[1789]: time="2025-02-13T20:49:30.842812192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:30.843300 containerd[1789]: time="2025-02-13T20:49:30.843089694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:30.889763 containerd[1789]: time="2025-02-13T20:49:30.889722344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnpcs,Uid:bdbc6e37-5802-45c9-b35d-a25b6e25224b,Namespace:calico-system,Attempt:1,} returns sandbox id \"4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb\"" Feb 13 20:49:30.891167 containerd[1789]: time="2025-02-13T20:49:30.891139752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:49:31.200035 kubelet[3444]: I0213 20:49:31.199904 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:49:31.568761 containerd[1789]: time="2025-02-13T20:49:31.568103486Z" level=info msg="StopPodSandbox for \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\"" Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.616 [INFO][4856] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.616 [INFO][4856] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" iface="eth0" netns="/var/run/netns/cni-579e3719-9c59-f6fc-4f3d-9acf53bcf83e" Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.616 [INFO][4856] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" iface="eth0" netns="/var/run/netns/cni-579e3719-9c59-f6fc-4f3d-9acf53bcf83e" Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.619 [INFO][4856] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" iface="eth0" netns="/var/run/netns/cni-579e3719-9c59-f6fc-4f3d-9acf53bcf83e" Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.619 [INFO][4856] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.619 [INFO][4856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.683 [INFO][4868] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" HandleID="k8s-pod-network.94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.684 [INFO][4868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.684 [INFO][4868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.694 [WARNING][4868] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" HandleID="k8s-pod-network.94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.694 [INFO][4868] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" HandleID="k8s-pod-network.94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.696 [INFO][4868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:31.699310 containerd[1789]: 2025-02-13 20:49:31.697 [INFO][4856] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:31.701777 containerd[1789]: time="2025-02-13T20:49:31.699442991Z" level=info msg="TearDown network for sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\" successfully" Feb 13 20:49:31.701777 containerd[1789]: time="2025-02-13T20:49:31.699649092Z" level=info msg="StopPodSandbox for \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\" returns successfully" Feb 13 20:49:31.701777 containerd[1789]: time="2025-02-13T20:49:31.700348296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5ff6cc45-sfbhx,Uid:0c72f7b9-06eb-4b56-8496-0b119694b5cc,Namespace:calico-system,Attempt:1,}" Feb 13 20:49:31.705230 systemd[1]: run-netns-cni\x2d579e3719\x2d9c59\x2df6fc\x2d4f3d\x2d9acf53bcf83e.mount: Deactivated successfully. 
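
[Editor's note] The systemd "run-netns-cni-....mount: Deactivated successfully" entries that follow each sandbox teardown reflect how CNI network namespaces are stored: each netns under /var/run/netns is an nsfs file kept alive by a bind mount, so releasing it means unmounting the path and removing the mount point. Below is a minimal Go sketch of that teardown under those assumptions; the helper name is made up, and the path is copied from the log entry above.

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// removeNetns detaches the nsfs bind mount backing a CNI netns path and
// deletes the mount point. The unmount is what systemd observes as the
// corresponding .mount unit deactivating.
func removeNetns(path string) error {
	// MNT_DETACH lazily unmounts in case the namespace is still winding down;
	// EINVAL is tolerated for a path that is no longer mounted.
	if err := unix.Unmount(path, unix.MNT_DETACH); err != nil && err != unix.EINVAL {
		return fmt.Errorf("unmount %s: %w", path, err)
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	if err := removeNetns("/var/run/netns/cni-579e3719-9c59-f6fc-4f3d-9acf53bcf83e"); err != nil {
		fmt.Println(err)
	}
}
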
Feb 13 20:49:31.841175 systemd-networkd[1365]: calicb2da6400b7: Link UP Feb 13 20:49:31.842750 systemd-networkd[1365]: calicb2da6400b7: Gained carrier Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.765 [INFO][4881] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.773 [INFO][4881] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0 calico-kube-controllers-5d5ff6cc45- calico-system 0c72f7b9-06eb-4b56-8496-0b119694b5cc 748 0 2025-02-13 20:49:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d5ff6cc45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-a-faf44fbcb5 calico-kube-controllers-5d5ff6cc45-sfbhx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicb2da6400b7 [] []}} ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Namespace="calico-system" Pod="calico-kube-controllers-5d5ff6cc45-sfbhx" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.773 [INFO][4881] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Namespace="calico-system" Pod="calico-kube-controllers-5d5ff6cc45-sfbhx" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.800 [INFO][4892] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" HandleID="k8s-pod-network.0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.807 [INFO][4892] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" HandleID="k8s-pod-network.0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319430), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-faf44fbcb5", "pod":"calico-kube-controllers-5d5ff6cc45-sfbhx", "timestamp":"2025-02-13 20:49:31.800489934 +0000 UTC"}, Hostname:"ci-4081.3.1-a-faf44fbcb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.807 [INFO][4892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.807 [INFO][4892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.807 [INFO][4892] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-faf44fbcb5' Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.808 [INFO][4892] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.812 [INFO][4892] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.815 [INFO][4892] ipam/ipam.go 489: Trying affinity for 192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.816 [INFO][4892] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.820 [INFO][4892] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.820 [INFO][4892] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.823 [INFO][4892] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886 Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.831 [INFO][4892] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.836 [INFO][4892] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.130/26] block=192.168.9.128/26 handle="k8s-pod-network.0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.836 [INFO][4892] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.130/26] handle="k8s-pod-network.0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.836 [INFO][4892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
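
[Editor's note] The ipam.go entries above walk a fixed sequence each time: acquire the host-wide IPAM lock, look up the host's block affinities, load the affine block 192.168.9.128/26, claim the first free ordinal against a handle, and write the block back ("Writing block in order to claim IPs") before releasing the lock. The following is a toy Go sketch of just the per-block claim step, with hypothetical types standing in for Calico's actual ipam.go structures; the CIDR and the claimed handle ID are taken from the log, while the pre-claimed ordinals are illustrative state chosen so the result matches the logged 192.168.9.130.

package main

import (
	"errors"
	"fmt"
	"net/netip"
)

// block is a toy stand-in for a Calico IPAM allocation block.
type block struct {
	cidr      netip.Prefix   // the host-affine block, e.g. 192.168.9.128/26
	claimedBy map[int]string // ordinal within the block -> handle ID
}

// assign claims the first free ordinal for handle and returns its address,
// mirroring "Attempting to assign 1 addresses from block".
func (b *block) assign(handle string) (netip.Addr, error) {
	size := 1 << (32 - b.cidr.Bits()) // a /26 holds 64 addresses
	addr := b.cidr.Addr()
	for ord := 0; ord < size; ord++ {
		if _, taken := b.claimedBy[ord]; !taken {
			b.claimedBy[ord] = handle // persisted by the "Writing block" step
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, errors.New("block is full")
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.9.128/26"),
		// Ordinals 0 and 1 (.128 and .129) already claimed in this toy state,
		// so the next assignment yields .130 as in the log above.
		claimedBy: map[int]string{0: "handle-a", 1: "handle-b"},
	}
	ip, err := b.assign("k8s-pod-network.0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.9.130
}
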
Feb 13 20:49:31.859633 containerd[1789]: 2025-02-13 20:49:31.836 [INFO][4892] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.130/26] IPv6=[] ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" HandleID="k8s-pod-network.0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:31.860949 containerd[1789]: 2025-02-13 20:49:31.838 [INFO][4881] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Namespace="calico-system" Pod="calico-kube-controllers-5d5ff6cc45-sfbhx" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0", GenerateName:"calico-kube-controllers-5d5ff6cc45-", Namespace:"calico-system", SelfLink:"", UID:"0c72f7b9-06eb-4b56-8496-0b119694b5cc", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d5ff6cc45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"", Pod:"calico-kube-controllers-5d5ff6cc45-sfbhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb2da6400b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:31.860949 containerd[1789]: 2025-02-13 20:49:31.838 [INFO][4881] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.130/32] ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Namespace="calico-system" Pod="calico-kube-controllers-5d5ff6cc45-sfbhx" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:31.860949 containerd[1789]: 2025-02-13 20:49:31.838 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb2da6400b7 ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Namespace="calico-system" Pod="calico-kube-controllers-5d5ff6cc45-sfbhx" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:31.860949 containerd[1789]: 2025-02-13 20:49:31.842 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Namespace="calico-system" Pod="calico-kube-controllers-5d5ff6cc45-sfbhx" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:31.860949 
containerd[1789]: 2025-02-13 20:49:31.842 [INFO][4881] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Namespace="calico-system" Pod="calico-kube-controllers-5d5ff6cc45-sfbhx" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0", GenerateName:"calico-kube-controllers-5d5ff6cc45-", Namespace:"calico-system", SelfLink:"", UID:"0c72f7b9-06eb-4b56-8496-0b119694b5cc", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d5ff6cc45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886", Pod:"calico-kube-controllers-5d5ff6cc45-sfbhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb2da6400b7", MAC:"f6:02:2d:94:f3:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:31.860949 containerd[1789]: 2025-02-13 20:49:31.858 [INFO][4881] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886" Namespace="calico-system" Pod="calico-kube-controllers-5d5ff6cc45-sfbhx" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:31.881548 containerd[1789]: time="2025-02-13T20:49:31.881251967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:49:31.881548 containerd[1789]: time="2025-02-13T20:49:31.881332568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:49:31.881548 containerd[1789]: time="2025-02-13T20:49:31.881369468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:31.881548 containerd[1789]: time="2025-02-13T20:49:31.881469169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:31.934431 systemd-networkd[1365]: cali854ddb0d59c: Gained IPv6LL Feb 13 20:49:31.936361 containerd[1789]: time="2025-02-13T20:49:31.936335463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5ff6cc45-sfbhx,Uid:0c72f7b9-06eb-4b56-8496-0b119694b5cc,Namespace:calico-system,Attempt:1,} returns sandbox id \"0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886\"" Feb 13 20:49:31.947938 kubelet[3444]: I0213 20:49:31.947712 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:49:32.314475 containerd[1789]: time="2025-02-13T20:49:32.314419993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:32.317550 containerd[1789]: time="2025-02-13T20:49:32.317420509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:49:32.322375 containerd[1789]: time="2025-02-13T20:49:32.322321536Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:32.326901 containerd[1789]: time="2025-02-13T20:49:32.326871660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:32.327961 containerd[1789]: time="2025-02-13T20:49:32.327554364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.436113111s" Feb 13 20:49:32.327961 containerd[1789]: time="2025-02-13T20:49:32.327590764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:49:32.328683 containerd[1789]: time="2025-02-13T20:49:32.328660070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:49:32.330280 containerd[1789]: time="2025-02-13T20:49:32.330140277Z" level=info msg="CreateContainer within sandbox \"4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:49:32.367771 containerd[1789]: time="2025-02-13T20:49:32.367726479Z" level=info msg="CreateContainer within sandbox \"4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"42bf235a940aa7c00a7ee51eff244c9d19271bd3feaf32c9046798d8d58ea96a\"" Feb 13 20:49:32.368331 containerd[1789]: time="2025-02-13T20:49:32.368291982Z" level=info msg="StartContainer for \"42bf235a940aa7c00a7ee51eff244c9d19271bd3feaf32c9046798d8d58ea96a\"" Feb 13 20:49:32.419801 containerd[1789]: time="2025-02-13T20:49:32.419708558Z" level=info msg="StartContainer for \"42bf235a940aa7c00a7ee51eff244c9d19271bd3feaf32c9046798d8d58ea96a\" returns successfully" Feb 13 20:49:32.577723 containerd[1789]: time="2025-02-13T20:49:32.575127893Z" level=info msg="StopPodSandbox for 
\"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\"" Feb 13 20:49:32.584173 containerd[1789]: time="2025-02-13T20:49:32.580801823Z" level=info msg="StopPodSandbox for \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\"" Feb 13 20:49:32.665483 kernel: bpftool[5046]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.708 [INFO][5029] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.708 [INFO][5029] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" iface="eth0" netns="/var/run/netns/cni-c8021f93-ad79-5b4d-8efb-5c21c970bb17" Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.708 [INFO][5029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" iface="eth0" netns="/var/run/netns/cni-c8021f93-ad79-5b4d-8efb-5c21c970bb17" Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.713 [INFO][5029] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" iface="eth0" netns="/var/run/netns/cni-c8021f93-ad79-5b4d-8efb-5c21c970bb17" Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.713 [INFO][5029] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.713 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.794 [INFO][5057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" HandleID="k8s-pod-network.0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.794 [INFO][5057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.794 [INFO][5057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.812 [WARNING][5057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" HandleID="k8s-pod-network.0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.812 [INFO][5057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" HandleID="k8s-pod-network.0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.815 [INFO][5057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:49:32.825357 containerd[1789]: 2025-02-13 20:49:32.820 [INFO][5029] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:32.831669 containerd[1789]: time="2025-02-13T20:49:32.831170267Z" level=info msg="TearDown network for sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\" successfully" Feb 13 20:49:32.831669 containerd[1789]: time="2025-02-13T20:49:32.831488969Z" level=info msg="StopPodSandbox for \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\" returns successfully" Feb 13 20:49:32.837223 containerd[1789]: time="2025-02-13T20:49:32.836798998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4cbcf97-tcxq9,Uid:61879ee1-72fa-4d45-9726-2a5c594597b2,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:49:32.841996 systemd[1]: run-netns-cni\x2dc8021f93\x2dad79\x2d5b4d\x2d8efb\x2d5c21c970bb17.mount: Deactivated successfully. Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.749 [INFO][5028] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.750 [INFO][5028] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" iface="eth0" netns="/var/run/netns/cni-bba5d7e8-df23-609e-d471-f6ed91582954" Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.750 [INFO][5028] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" iface="eth0" netns="/var/run/netns/cni-bba5d7e8-df23-609e-d471-f6ed91582954" Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.750 [INFO][5028] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" iface="eth0" netns="/var/run/netns/cni-bba5d7e8-df23-609e-d471-f6ed91582954" Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.751 [INFO][5028] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.751 [INFO][5028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.860 [INFO][5066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" HandleID="k8s-pod-network.6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.860 [INFO][5066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.861 [INFO][5066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.877 [WARNING][5066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" HandleID="k8s-pod-network.6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.877 [INFO][5066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" HandleID="k8s-pod-network.6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.878 [INFO][5066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:32.902301 containerd[1789]: 2025-02-13 20:49:32.892 [INFO][5028] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:32.910079 containerd[1789]: time="2025-02-13T20:49:32.903353055Z" level=info msg="TearDown network for sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\" successfully" Feb 13 20:49:32.910079 containerd[1789]: time="2025-02-13T20:49:32.903972058Z" level=info msg="StopPodSandbox for \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\" returns successfully" Feb 13 20:49:32.912855 systemd[1]: run-netns-cni\x2dbba5d7e8\x2ddf23\x2d609e\x2dd471\x2df6ed91582954.mount: Deactivated successfully. Feb 13 20:49:32.916139 containerd[1789]: time="2025-02-13T20:49:32.914835717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4cbcf97-vthz8,Uid:29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:49:33.175132 systemd-networkd[1365]: cali384388b40af: Link UP Feb 13 20:49:33.177437 systemd-networkd[1365]: cali384388b40af: Gained carrier Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.002 [INFO][5077] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0 calico-apiserver-55b4cbcf97- calico-apiserver 61879ee1-72fa-4d45-9726-2a5c594597b2 766 0 2025-02-13 20:49:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55b4cbcf97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-faf44fbcb5 calico-apiserver-55b4cbcf97-tcxq9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali384388b40af [] []}} ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-tcxq9" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.002 [INFO][5077] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-tcxq9" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.109 [INFO][5117] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" HandleID="k8s-pod-network.5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.124 [INFO][5117] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" HandleID="k8s-pod-network.5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003193c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-faf44fbcb5", "pod":"calico-apiserver-55b4cbcf97-tcxq9", "timestamp":"2025-02-13 20:49:33.109937264 +0000 UTC"}, Hostname:"ci-4081.3.1-a-faf44fbcb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.125 [INFO][5117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.125 [INFO][5117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.125 [INFO][5117] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-faf44fbcb5' Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.129 [INFO][5117] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.133 [INFO][5117] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.137 [INFO][5117] ipam/ipam.go 489: Trying affinity for 192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.140 [INFO][5117] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.143 [INFO][5117] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.143 [INFO][5117] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.145 [INFO][5117] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307 Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.152 [INFO][5117] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.167 [INFO][5117] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.131/26] block=192.168.9.128/26 
handle="k8s-pod-network.5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.167 [INFO][5117] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.131/26] handle="k8s-pod-network.5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.168 [INFO][5117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:33.202023 containerd[1789]: 2025-02-13 20:49:33.168 [INFO][5117] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.131/26] IPv6=[] ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" HandleID="k8s-pod-network.5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:33.203862 containerd[1789]: 2025-02-13 20:49:33.170 [INFO][5077] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-tcxq9" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0", GenerateName:"calico-apiserver-55b4cbcf97-", Namespace:"calico-apiserver", SelfLink:"", UID:"61879ee1-72fa-4d45-9726-2a5c594597b2", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4cbcf97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"", Pod:"calico-apiserver-55b4cbcf97-tcxq9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali384388b40af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:33.203862 containerd[1789]: 2025-02-13 20:49:33.170 [INFO][5077] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.131/32] ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-tcxq9" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:33.203862 containerd[1789]: 2025-02-13 20:49:33.170 [INFO][5077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali384388b40af ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-tcxq9" 
WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:33.203862 containerd[1789]: 2025-02-13 20:49:33.175 [INFO][5077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-tcxq9" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:33.203862 containerd[1789]: 2025-02-13 20:49:33.176 [INFO][5077] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-tcxq9" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0", GenerateName:"calico-apiserver-55b4cbcf97-", Namespace:"calico-apiserver", SelfLink:"", UID:"61879ee1-72fa-4d45-9726-2a5c594597b2", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4cbcf97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307", Pod:"calico-apiserver-55b4cbcf97-tcxq9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali384388b40af", MAC:"ee:a0:cc:06:0b:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:33.203862 containerd[1789]: 2025-02-13 20:49:33.198 [INFO][5077] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-tcxq9" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:33.262108 containerd[1789]: time="2025-02-13T20:49:33.261587078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:49:33.262108 containerd[1789]: time="2025-02-13T20:49:33.261661679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:49:33.262108 containerd[1789]: time="2025-02-13T20:49:33.261683379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:33.268749 containerd[1789]: time="2025-02-13T20:49:33.267049008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:33.277363 systemd-networkd[1365]: cali77159f7d935: Link UP Feb 13 20:49:33.277607 systemd-networkd[1365]: cali77159f7d935: Gained carrier Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.070 [INFO][5105] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0 calico-apiserver-55b4cbcf97- calico-apiserver 29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88 767 0 2025-02-13 20:49:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55b4cbcf97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-faf44fbcb5 calico-apiserver-55b4cbcf97-vthz8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali77159f7d935 [] []}} ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-vthz8" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.072 [INFO][5105] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-vthz8" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.170 [INFO][5122] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" HandleID="k8s-pod-network.028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.199 [INFO][5122] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" HandleID="k8s-pod-network.028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-faf44fbcb5", "pod":"calico-apiserver-55b4cbcf97-vthz8", "timestamp":"2025-02-13 20:49:33.170089487 +0000 UTC"}, Hostname:"ci-4081.3.1-a-faf44fbcb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.201 [INFO][5122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.201 [INFO][5122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.201 [INFO][5122] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-faf44fbcb5' Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.206 [INFO][5122] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.217 [INFO][5122] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.223 [INFO][5122] ipam/ipam.go 489: Trying affinity for 192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.229 [INFO][5122] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.233 [INFO][5122] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.233 [INFO][5122] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.235 [INFO][5122] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.242 [INFO][5122] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.259 [INFO][5122] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.132/26] block=192.168.9.128/26 handle="k8s-pod-network.028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.259 [INFO][5122] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.132/26] handle="k8s-pod-network.028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.259 [INFO][5122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
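The walk above is Calico's block-affinity path: the node ci-4081.3.1-a-faf44fbcb5 already holds an affinity for the /26 block 192.168.9.128/26, so the plugin loads that block and hands out the next free address, which is why the pods in this log receive 192.168.9.131 through .134 in sequence. A self-contained check of the block's range, standard library only:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The node's affine IPAM block from the log.
        block := netip.MustParsePrefix("192.168.9.128/26")
        // A /26 spans 64 addresses: 192.168.9.128 through 192.168.9.191.
        fmt.Println("block:", block, "first:", block.Addr())

        // Sequential assignment within the block; .131-.134 are the
        // pod addresses claimed in this log.
        a := block.Addr()
        for i := 0; i < 8; i++ {
            fmt.Println(a) // 192.168.9.128, .129, ..., .135
            a = a.Next()
        }
    }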
Feb 13 20:49:33.297474 containerd[1789]: 2025-02-13 20:49:33.259 [INFO][5122] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.132/26] IPv6=[] ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" HandleID="k8s-pod-network.028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:33.298449 containerd[1789]: 2025-02-13 20:49:33.268 [INFO][5105] cni-plugin/k8s.go 386: Populated endpoint ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-vthz8" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0", GenerateName:"calico-apiserver-55b4cbcf97-", Namespace:"calico-apiserver", SelfLink:"", UID:"29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4cbcf97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"", Pod:"calico-apiserver-55b4cbcf97-vthz8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77159f7d935", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:33.298449 containerd[1789]: 2025-02-13 20:49:33.270 [INFO][5105] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.132/32] ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-vthz8" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:33.298449 containerd[1789]: 2025-02-13 20:49:33.270 [INFO][5105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77159f7d935 ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-vthz8" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:33.298449 containerd[1789]: 2025-02-13 20:49:33.273 [INFO][5105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-vthz8" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:33.298449 containerd[1789]: 2025-02-13 20:49:33.274 [INFO][5105] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-vthz8" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0", GenerateName:"calico-apiserver-55b4cbcf97-", Namespace:"calico-apiserver", SelfLink:"", UID:"29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4cbcf97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef", Pod:"calico-apiserver-55b4cbcf97-vthz8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77159f7d935", MAC:"52:88:1f:89:b6:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:33.298449 containerd[1789]: 2025-02-13 20:49:33.293 [INFO][5105] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef" Namespace="calico-apiserver" Pod="calico-apiserver-55b4cbcf97-vthz8" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:33.317087 systemd-networkd[1365]: vxlan.calico: Link UP Feb 13 20:49:33.317096 systemd-networkd[1365]: vxlan.calico: Gained carrier Feb 13 20:49:33.357044 containerd[1789]: time="2025-02-13T20:49:33.356071286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:49:33.357044 containerd[1789]: time="2025-02-13T20:49:33.356136186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:49:33.357044 containerd[1789]: time="2025-02-13T20:49:33.356158786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:33.357044 containerd[1789]: time="2025-02-13T20:49:33.356282987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:33.450128 containerd[1789]: time="2025-02-13T20:49:33.449993690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4cbcf97-tcxq9,Uid:61879ee1-72fa-4d45-9726-2a5c594597b2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307\"" Feb 13 20:49:33.474650 containerd[1789]: time="2025-02-13T20:49:33.474379221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4cbcf97-vthz8,Uid:29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef\"" Feb 13 20:49:33.570233 containerd[1789]: time="2025-02-13T20:49:33.569711533Z" level=info msg="StopPodSandbox for \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\"" Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.652 [INFO][5278] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.652 [INFO][5278] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" iface="eth0" netns="/var/run/netns/cni-1a1f930e-a95d-3ec4-e0fa-26d87f942c6d" Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.652 [INFO][5278] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" iface="eth0" netns="/var/run/netns/cni-1a1f930e-a95d-3ec4-e0fa-26d87f942c6d" Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.653 [INFO][5278] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" iface="eth0" netns="/var/run/netns/cni-1a1f930e-a95d-3ec4-e0fa-26d87f942c6d" Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.653 [INFO][5278] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.653 [INFO][5278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.690 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" HandleID="k8s-pod-network.7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.690 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.691 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.697 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" HandleID="k8s-pod-network.7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.697 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" HandleID="k8s-pod-network.7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.698 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:33.700778 containerd[1789]: 2025-02-13 20:49:33.699 [INFO][5278] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:33.704190 containerd[1789]: time="2025-02-13T20:49:33.702515146Z" level=info msg="TearDown network for sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\" successfully" Feb 13 20:49:33.704190 containerd[1789]: time="2025-02-13T20:49:33.702563646Z" level=info msg="StopPodSandbox for \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\" returns successfully" Feb 13 20:49:33.704466 containerd[1789]: time="2025-02-13T20:49:33.704436356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gsw2t,Uid:f67249f5-a448-4596-816c-cb1a3d8e3628,Namespace:kube-system,Attempt:1,}" Feb 13 20:49:33.707058 systemd[1]: run-netns-cni\x2d1a1f930e\x2da95d\x2d3ec4\x2de0fa\x2d26d87f942c6d.mount: Deactivated successfully. Feb 13 20:49:33.855337 systemd-networkd[1365]: calicb2da6400b7: Gained IPv6LL Feb 13 20:49:33.942813 systemd-networkd[1365]: cali5f93c128864: Link UP Feb 13 20:49:33.944473 systemd-networkd[1365]: cali5f93c128864: Gained carrier Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.830 [INFO][5308] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0 coredns-7db6d8ff4d- kube-system f67249f5-a448-4596-816c-cb1a3d8e3628 780 0 2025-02-13 20:48:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-faf44fbcb5 coredns-7db6d8ff4d-gsw2t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f93c128864 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsw2t" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.831 [INFO][5308] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsw2t" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.890 [INFO][5337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" 
HandleID="k8s-pod-network.3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.902 [INFO][5337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" HandleID="k8s-pod-network.3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc6a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-faf44fbcb5", "pod":"coredns-7db6d8ff4d-gsw2t", "timestamp":"2025-02-13 20:49:33.890128753 +0000 UTC"}, Hostname:"ci-4081.3.1-a-faf44fbcb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.902 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.902 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.903 [INFO][5337] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-faf44fbcb5' Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.904 [INFO][5337] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.908 [INFO][5337] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.913 [INFO][5337] ipam/ipam.go 489: Trying affinity for 192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.915 [INFO][5337] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.918 [INFO][5337] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.918 [INFO][5337] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.919 [INFO][5337] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.925 [INFO][5337] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.935 [INFO][5337] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.133/26] block=192.168.9.128/26 handle="k8s-pod-network.3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.935 
[INFO][5337] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.133/26] handle="k8s-pod-network.3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.935 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:33.984220 containerd[1789]: 2025-02-13 20:49:33.935 [INFO][5337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.133/26] IPv6=[] ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" HandleID="k8s-pod-network.3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:33.988477 containerd[1789]: 2025-02-13 20:49:33.937 [INFO][5308] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsw2t" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f67249f5-a448-4596-816c-cb1a3d8e3628", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"", Pod:"coredns-7db6d8ff4d-gsw2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f93c128864", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:33.988477 containerd[1789]: 2025-02-13 20:49:33.938 [INFO][5308] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.133/32] ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsw2t" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:33.988477 containerd[1789]: 2025-02-13 20:49:33.938 [INFO][5308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f93c128864 ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsw2t" 
WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:33.988477 containerd[1789]: 2025-02-13 20:49:33.945 [INFO][5308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsw2t" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:33.988477 containerd[1789]: 2025-02-13 20:49:33.947 [INFO][5308] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsw2t" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f67249f5-a448-4596-816c-cb1a3d8e3628", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e", Pod:"coredns-7db6d8ff4d-gsw2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f93c128864", MAC:"8e:6c:c3:84:8e:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:33.988477 containerd[1789]: 2025-02-13 20:49:33.980 [INFO][5308] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsw2t" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:34.073356 containerd[1789]: time="2025-02-13T20:49:34.072411724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:49:34.076229 containerd[1789]: time="2025-02-13T20:49:34.074346033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:49:34.076229 containerd[1789]: time="2025-02-13T20:49:34.075020237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:34.076229 containerd[1789]: time="2025-02-13T20:49:34.075118537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:34.166162 containerd[1789]: time="2025-02-13T20:49:34.166048596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gsw2t,Uid:f67249f5-a448-4596-816c-cb1a3d8e3628,Namespace:kube-system,Attempt:1,} returns sandbox id \"3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e\"" Feb 13 20:49:34.171368 containerd[1789]: time="2025-02-13T20:49:34.171127221Z" level=info msg="CreateContainer within sandbox \"3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:49:34.450601 containerd[1789]: time="2025-02-13T20:49:34.450551730Z" level=info msg="CreateContainer within sandbox \"3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1342970a3ae2a76e1d19ec2739dff0686454b7190945950f7dfb75a76676186e\"" Feb 13 20:49:34.453116 containerd[1789]: time="2025-02-13T20:49:34.452679641Z" level=info msg="StartContainer for \"1342970a3ae2a76e1d19ec2739dff0686454b7190945950f7dfb75a76676186e\"" Feb 13 20:49:34.533550 containerd[1789]: time="2025-02-13T20:49:34.533457548Z" level=info msg="StartContainer for \"1342970a3ae2a76e1d19ec2739dff0686454b7190945950f7dfb75a76676186e\" returns successfully" Feb 13 20:49:34.571896 containerd[1789]: time="2025-02-13T20:49:34.570630336Z" level=info msg="StopPodSandbox for \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\"" Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.652 [INFO][5461] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.652 [INFO][5461] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" iface="eth0" netns="/var/run/netns/cni-57edfee9-cded-3ed8-a085-f2a57542bde7" Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.654 [INFO][5461] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" iface="eth0" netns="/var/run/netns/cni-57edfee9-cded-3ed8-a085-f2a57542bde7" Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.654 [INFO][5461] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" iface="eth0" netns="/var/run/netns/cni-57edfee9-cded-3ed8-a085-f2a57542bde7" Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.655 [INFO][5461] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.655 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.699 [INFO][5467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" HandleID="k8s-pod-network.1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.699 [INFO][5467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.700 [INFO][5467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.708 [WARNING][5467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" HandleID="k8s-pod-network.1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.708 [INFO][5467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" HandleID="k8s-pod-network.1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.709 [INFO][5467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:34.712982 containerd[1789]: 2025-02-13 20:49:34.711 [INFO][5461] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:34.717068 containerd[1789]: time="2025-02-13T20:49:34.716135769Z" level=info msg="TearDown network for sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\" successfully" Feb 13 20:49:34.717068 containerd[1789]: time="2025-02-13T20:49:34.716225470Z" level=info msg="StopPodSandbox for \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\" returns successfully" Feb 13 20:49:34.718402 containerd[1789]: time="2025-02-13T20:49:34.718372781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dm2w,Uid:d9bf7df7-0f43-4399-a9cf-00811b424924,Namespace:kube-system,Attempt:1,}" Feb 13 20:49:34.721116 systemd[1]: run-netns-cni\x2d57edfee9\x2dcded\x2d3ed8\x2da085\x2df2a57542bde7.mount: Deactivated successfully. 
Feb 13 20:49:34.814850 systemd-networkd[1365]: cali77159f7d935: Gained IPv6LL Feb 13 20:49:34.835374 kubelet[3444]: I0213 20:49:34.835312 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gsw2t" podStartSLOduration=37.83528937 podStartE2EDuration="37.83528937s" podCreationTimestamp="2025-02-13 20:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:49:34.833489061 +0000 UTC m=+52.362141914" watchObservedRunningTime="2025-02-13 20:49:34.83528937 +0000 UTC m=+52.363942223" Feb 13 20:49:34.976910 systemd-networkd[1365]: cali3f3705ace69: Link UP Feb 13 20:49:34.979256 systemd-networkd[1365]: cali3f3705ace69: Gained carrier Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.832 [INFO][5475] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0 coredns-7db6d8ff4d- kube-system d9bf7df7-0f43-4399-a9cf-00811b424924 789 0 2025-02-13 20:48:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-faf44fbcb5 coredns-7db6d8ff4d-8dm2w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f3705ace69 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dm2w" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.832 [INFO][5475] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dm2w" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.920 [INFO][5487] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" HandleID="k8s-pod-network.07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.932 [INFO][5487] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" HandleID="k8s-pod-network.07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-faf44fbcb5", "pod":"coredns-7db6d8ff4d-8dm2w", "timestamp":"2025-02-13 20:49:34.919825197 +0000 UTC"}, Hostname:"ci-4081.3.1-a-faf44fbcb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.932 [INFO][5487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
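The kubelet line above is the pod startup SLO tracker: coredns-7db6d8ff4d-gsw2t was created at 20:48:57 and observed running at 20:49:34.835, giving podStartSLOduration=37.83528937s. Both pull timestamps are the zero time (0001-01-01) because the image never had to be pulled, so podStartE2EDuration equals the SLO duration. The arithmetic, standard library only:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-02-13T20:48:57Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-02-13T20:49:34.83528937Z")
        // Matches podStartSLOduration in the kubelet line: image pulls
        // contributed nothing, so SLO and end-to-end durations agree.
        fmt.Println(running.Sub(created)) // 37.83528937s
    }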
Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.932 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.932 [INFO][5487] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-faf44fbcb5' Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.934 [INFO][5487] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.939 [INFO][5487] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.945 [INFO][5487] ipam/ipam.go 489: Trying affinity for 192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.947 [INFO][5487] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.950 [INFO][5487] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.950 [INFO][5487] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.952 [INFO][5487] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1 Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.961 [INFO][5487] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.970 [INFO][5487] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.134/26] block=192.168.9.128/26 handle="k8s-pod-network.07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.970 [INFO][5487] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.134/26] handle="k8s-pod-network.07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" host="ci-4081.3.1-a-faf44fbcb5" Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.970 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
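In the WorkloadEndpoint dumps for the coredns pods, ports print as Go hex literals: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the coredns Prometheus metrics port), matching the [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] summary earlier in the same entries. A one-liner to confirm:

    package main

    import "fmt"

    func main() {
        // Hex ports from the WorkloadEndpointPort dumps above.
        fmt.Println(0x35, 0x23c1) // 53 9153
    }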
Feb 13 20:49:35.001612 containerd[1789]: 2025-02-13 20:49:34.970 [INFO][5487] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.134/26] IPv6=[] ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" HandleID="k8s-pod-network.07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:35.004627 containerd[1789]: 2025-02-13 20:49:34.973 [INFO][5475] cni-plugin/k8s.go 386: Populated endpoint ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dm2w" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d9bf7df7-0f43-4399-a9cf-00811b424924", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"", Pod:"coredns-7db6d8ff4d-8dm2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f3705ace69", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:35.004627 containerd[1789]: 2025-02-13 20:49:34.973 [INFO][5475] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.134/32] ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dm2w" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:35.004627 containerd[1789]: 2025-02-13 20:49:34.974 [INFO][5475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f3705ace69 ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dm2w" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:35.004627 containerd[1789]: 2025-02-13 20:49:34.976 [INFO][5475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dm2w" 
WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:35.004627 containerd[1789]: 2025-02-13 20:49:34.976 [INFO][5475] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dm2w" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d9bf7df7-0f43-4399-a9cf-00811b424924", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1", Pod:"coredns-7db6d8ff4d-8dm2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f3705ace69", MAC:"26:5e:1f:6d:90:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:35.004627 containerd[1789]: 2025-02-13 20:49:34.998 [INFO][5475] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dm2w" WorkloadEndpoint="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:35.047199 containerd[1789]: time="2025-02-13T20:49:35.046953138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:49:35.047199 containerd[1789]: time="2025-02-13T20:49:35.047019038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:49:35.047199 containerd[1789]: time="2025-02-13T20:49:35.047046338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:35.048361 containerd[1789]: time="2025-02-13T20:49:35.047322439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:49:35.070582 systemd-networkd[1365]: cali384388b40af: Gained IPv6LL Feb 13 20:49:35.073859 systemd-networkd[1365]: cali5f93c128864: Gained IPv6LL Feb 13 20:49:35.125482 containerd[1789]: time="2025-02-13T20:49:35.125441933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dm2w,Uid:d9bf7df7-0f43-4399-a9cf-00811b424924,Namespace:kube-system,Attempt:1,} returns sandbox id \"07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1\"" Feb 13 20:49:35.132623 containerd[1789]: time="2025-02-13T20:49:35.132479069Z" level=info msg="CreateContainer within sandbox \"07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:49:35.170300 containerd[1789]: time="2025-02-13T20:49:35.170195759Z" level=info msg="CreateContainer within sandbox \"07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"661e17b501d896fc7f82c353c4067d2d2cfbab32a11399df5e834f33f1f404cf\"" Feb 13 20:49:35.172069 containerd[1789]: time="2025-02-13T20:49:35.170957663Z" level=info msg="StartContainer for \"661e17b501d896fc7f82c353c4067d2d2cfbab32a11399df5e834f33f1f404cf\"" Feb 13 20:49:35.199256 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Feb 13 20:49:35.255313 containerd[1789]: time="2025-02-13T20:49:35.254437584Z" level=info msg="StartContainer for \"661e17b501d896fc7f82c353c4067d2d2cfbab32a11399df5e834f33f1f404cf\" returns successfully" Feb 13 20:49:35.476214 containerd[1789]: time="2025-02-13T20:49:35.476149802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:35.478041 containerd[1789]: time="2025-02-13T20:49:35.477990311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:49:35.481769 containerd[1789]: time="2025-02-13T20:49:35.481716630Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:35.486495 containerd[1789]: time="2025-02-13T20:49:35.486400953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:35.487046 containerd[1789]: time="2025-02-13T20:49:35.487011356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.158080085s" Feb 13 20:49:35.487129 containerd[1789]: time="2025-02-13T20:49:35.487054557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:49:35.489568 containerd[1789]: time="2025-02-13T20:49:35.488495664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:49:35.495670 containerd[1789]: time="2025-02-13T20:49:35.495642400Z" 
level=info msg="CreateContainer within sandbox \"0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:49:35.528637 containerd[1789]: time="2025-02-13T20:49:35.528387665Z" level=info msg="CreateContainer within sandbox \"0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"12ef42c470a3da6fe4024191c728f3197aeaf117af3eae45009cb726ecbd3a48\"" Feb 13 20:49:35.529803 containerd[1789]: time="2025-02-13T20:49:35.529774972Z" level=info msg="StartContainer for \"12ef42c470a3da6fe4024191c728f3197aeaf117af3eae45009cb726ecbd3a48\"" Feb 13 20:49:35.597158 containerd[1789]: time="2025-02-13T20:49:35.597034711Z" level=info msg="StartContainer for \"12ef42c470a3da6fe4024191c728f3197aeaf117af3eae45009cb726ecbd3a48\" returns successfully" Feb 13 20:49:35.837822 kubelet[3444]: I0213 20:49:35.837381 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d5ff6cc45-sfbhx" podStartSLOduration=26.286945432 podStartE2EDuration="29.837355223s" podCreationTimestamp="2025-02-13 20:49:06 +0000 UTC" firstStartedPulling="2025-02-13 20:49:31.937787071 +0000 UTC m=+49.466439824" lastFinishedPulling="2025-02-13 20:49:35.488196862 +0000 UTC m=+53.016849615" observedRunningTime="2025-02-13 20:49:35.832559799 +0000 UTC m=+53.361212552" watchObservedRunningTime="2025-02-13 20:49:35.837355223 +0000 UTC m=+53.366007976" Feb 13 20:49:35.904113 kubelet[3444]: I0213 20:49:35.904040 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8dm2w" podStartSLOduration=38.904018359 podStartE2EDuration="38.904018359s" podCreationTimestamp="2025-02-13 20:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:49:35.860702841 +0000 UTC m=+53.389355694" watchObservedRunningTime="2025-02-13 20:49:35.904018359 +0000 UTC m=+53.432671112" Feb 13 20:49:36.094445 systemd-networkd[1365]: cali3f3705ace69: Gained IPv6LL Feb 13 20:49:37.094115 containerd[1789]: time="2025-02-13T20:49:37.094064260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:37.096676 containerd[1789]: time="2025-02-13T20:49:37.096609773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:49:37.104215 containerd[1789]: time="2025-02-13T20:49:37.103982610Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:37.109673 containerd[1789]: time="2025-02-13T20:49:37.109219536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:37.110900 containerd[1789]: time="2025-02-13T20:49:37.110741844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.62221108s" Feb 13 20:49:37.110900 containerd[1789]: time="2025-02-13T20:49:37.110786444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:49:37.113848 containerd[1789]: time="2025-02-13T20:49:37.113301357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:49:37.114404 containerd[1789]: time="2025-02-13T20:49:37.114369462Z" level=info msg="CreateContainer within sandbox \"4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:49:37.146714 containerd[1789]: time="2025-02-13T20:49:37.146670725Z" level=info msg="CreateContainer within sandbox \"4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"11db00bdd9bb67e51c9eea84f6a884def0e3ba4137c8003962fd9bf6fb81e951\"" Feb 13 20:49:37.147307 containerd[1789]: time="2025-02-13T20:49:37.147280828Z" level=info msg="StartContainer for \"11db00bdd9bb67e51c9eea84f6a884def0e3ba4137c8003962fd9bf6fb81e951\"" Feb 13 20:49:37.218593 containerd[1789]: time="2025-02-13T20:49:37.218544588Z" level=info msg="StartContainer for \"11db00bdd9bb67e51c9eea84f6a884def0e3ba4137c8003962fd9bf6fb81e951\" returns successfully" Feb 13 20:49:37.686034 kubelet[3444]: I0213 20:49:37.685574 3444 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:49:37.686034 kubelet[3444]: I0213 20:49:37.685615 3444 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:49:37.837210 kubelet[3444]: I0213 20:49:37.836917 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rnpcs" podStartSLOduration=25.615365504 podStartE2EDuration="31.836893406s" podCreationTimestamp="2025-02-13 20:49:06 +0000 UTC" firstStartedPulling="2025-02-13 20:49:30.89087965 +0000 UTC m=+48.419532403" lastFinishedPulling="2025-02-13 20:49:37.112407552 +0000 UTC m=+54.641060305" observedRunningTime="2025-02-13 20:49:37.836525804 +0000 UTC m=+55.365178557" watchObservedRunningTime="2025-02-13 20:49:37.836893406 +0000 UTC m=+55.365546159" Feb 13 20:49:39.573386 containerd[1789]: time="2025-02-13T20:49:39.573332361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:39.575226 containerd[1789]: time="2025-02-13T20:49:39.575146170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:49:39.577467 containerd[1789]: time="2025-02-13T20:49:39.577429582Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:39.582578 containerd[1789]: time="2025-02-13T20:49:39.582509508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:39.583385 containerd[1789]: time="2025-02-13T20:49:39.583255711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.469910854s" Feb 13 20:49:39.583385 containerd[1789]: time="2025-02-13T20:49:39.583292612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:49:39.585009 containerd[1789]: time="2025-02-13T20:49:39.584630618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:49:39.586438 containerd[1789]: time="2025-02-13T20:49:39.586281227Z" level=info msg="CreateContainer within sandbox \"5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:49:39.618109 containerd[1789]: time="2025-02-13T20:49:39.618079787Z" level=info msg="CreateContainer within sandbox \"5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1704a6dd9c7308c0c4652b29b3eccde54685906a8086a689d268b4210de7b29b\"" Feb 13 20:49:39.619916 containerd[1789]: time="2025-02-13T20:49:39.618562789Z" level=info msg="StartContainer for \"1704a6dd9c7308c0c4652b29b3eccde54685906a8086a689d268b4210de7b29b\"" Feb 13 20:49:39.697457 containerd[1789]: time="2025-02-13T20:49:39.697315786Z" level=info msg="StartContainer for \"1704a6dd9c7308c0c4652b29b3eccde54685906a8086a689d268b4210de7b29b\" returns successfully" Feb 13 20:49:39.942244 containerd[1789]: time="2025-02-13T20:49:39.941334717Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:49:39.945891 containerd[1789]: time="2025-02-13T20:49:39.945837040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:49:39.947943 containerd[1789]: time="2025-02-13T20:49:39.947901850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 363.236632ms" Feb 13 20:49:39.948040 containerd[1789]: time="2025-02-13T20:49:39.947948050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:49:39.951796 containerd[1789]: time="2025-02-13T20:49:39.951767170Z" level=info msg="CreateContainer within sandbox \"028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:49:39.982573 containerd[1789]: time="2025-02-13T20:49:39.982524725Z" level=info msg="CreateContainer within sandbox \"028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"7c5d006c5df21a8e607dde41c5e7362a7169e2886043ef9897fc3b5120fafbb0\"" Feb 13 20:49:39.983933 containerd[1789]: time="2025-02-13T20:49:39.983897232Z" level=info msg="StartContainer for \"7c5d006c5df21a8e607dde41c5e7362a7169e2886043ef9897fc3b5120fafbb0\"" Feb 13 20:49:40.110823 containerd[1789]: time="2025-02-13T20:49:40.110774571Z" level=info msg="StartContainer for \"7c5d006c5df21a8e607dde41c5e7362a7169e2886043ef9897fc3b5120fafbb0\" returns successfully" Feb 13 20:49:40.857147 kubelet[3444]: I0213 20:49:40.857083 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55b4cbcf97-tcxq9" podStartSLOduration=29.72510512 podStartE2EDuration="35.857060234s" podCreationTimestamp="2025-02-13 20:49:05 +0000 UTC" firstStartedPulling="2025-02-13 20:49:33.452499003 +0000 UTC m=+50.981151856" lastFinishedPulling="2025-02-13 20:49:39.584454117 +0000 UTC m=+57.113106970" observedRunningTime="2025-02-13 20:49:39.851536264 +0000 UTC m=+57.380189017" watchObservedRunningTime="2025-02-13 20:49:40.857060234 +0000 UTC m=+58.385713087" Feb 13 20:49:40.918540 kubelet[3444]: I0213 20:49:40.918478 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55b4cbcf97-vthz8" podStartSLOduration=29.445405318 podStartE2EDuration="35.918455744s" podCreationTimestamp="2025-02-13 20:49:05 +0000 UTC" firstStartedPulling="2025-02-13 20:49:33.475684528 +0000 UTC m=+51.004337281" lastFinishedPulling="2025-02-13 20:49:39.948734954 +0000 UTC m=+57.477387707" observedRunningTime="2025-02-13 20:49:40.858584042 +0000 UTC m=+58.387236795" watchObservedRunningTime="2025-02-13 20:49:40.918455744 +0000 UTC m=+58.447108597" Feb 13 20:49:41.843506 kubelet[3444]: I0213 20:49:41.843451 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:49:42.585216 containerd[1789]: time="2025-02-13T20:49:42.585159062Z" level=info msg="StopPodSandbox for \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\"" Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.616 [WARNING][5791] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d9bf7df7-0f43-4399-a9cf-00811b424924", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1", Pod:"coredns-7db6d8ff4d-8dm2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f3705ace69", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.616 [INFO][5791] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.616 [INFO][5791] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" iface="eth0" netns="" Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.616 [INFO][5791] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.616 [INFO][5791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.635 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" HandleID="k8s-pod-network.1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.636 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.636 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.642 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" HandleID="k8s-pod-network.1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.642 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" HandleID="k8s-pod-network.1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.643 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:42.646976 containerd[1789]: 2025-02-13 20:49:42.645 [INFO][5791] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:42.647627 containerd[1789]: time="2025-02-13T20:49:42.647000675Z" level=info msg="TearDown network for sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\" successfully" Feb 13 20:49:42.647627 containerd[1789]: time="2025-02-13T20:49:42.647028475Z" level=info msg="StopPodSandbox for \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\" returns successfully" Feb 13 20:49:42.649469 containerd[1789]: time="2025-02-13T20:49:42.648461683Z" level=info msg="RemovePodSandbox for \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\"" Feb 13 20:49:42.649469 containerd[1789]: time="2025-02-13T20:49:42.648502483Z" level=info msg="Forcibly stopping sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\"" Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.695 [WARNING][5816] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d9bf7df7-0f43-4399-a9cf-00811b424924", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"07389ea5922cd869df5565075e49a2816cd8851b3e1b48f423ddeffc439a72d1", Pod:"coredns-7db6d8ff4d-8dm2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f3705ace69", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.696 [INFO][5816] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.696 [INFO][5816] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" iface="eth0" netns="" Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.696 [INFO][5816] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.696 [INFO][5816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.715 [INFO][5822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" HandleID="k8s-pod-network.1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.715 [INFO][5822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.715 [INFO][5822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.720 [WARNING][5822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" HandleID="k8s-pod-network.1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.720 [INFO][5822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" HandleID="k8s-pod-network.1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--8dm2w-eth0" Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.721 [INFO][5822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:42.723627 containerd[1789]: 2025-02-13 20:49:42.722 [INFO][5816] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522" Feb 13 20:49:42.724439 containerd[1789]: time="2025-02-13T20:49:42.723667964Z" level=info msg="TearDown network for sandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\" successfully" Feb 13 20:49:42.730336 containerd[1789]: time="2025-02-13T20:49:42.730285597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:49:42.730452 containerd[1789]: time="2025-02-13T20:49:42.730398798Z" level=info msg="RemovePodSandbox \"1edcba475dd4b62c574e1866410f97e67b9f47878b1a231b74f6ef8055f99522\" returns successfully" Feb 13 20:49:42.731031 containerd[1789]: time="2025-02-13T20:49:42.730998101Z" level=info msg="StopPodSandbox for \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\"" Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.762 [WARNING][5840] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f67249f5-a448-4596-816c-cb1a3d8e3628", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e", Pod:"coredns-7db6d8ff4d-gsw2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f93c128864", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.762 [INFO][5840] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.762 [INFO][5840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" iface="eth0" netns="" Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.762 [INFO][5840] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.762 [INFO][5840] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.780 [INFO][5847] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" HandleID="k8s-pod-network.7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.781 [INFO][5847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.781 [INFO][5847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.786 [WARNING][5847] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" HandleID="k8s-pod-network.7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.786 [INFO][5847] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" HandleID="k8s-pod-network.7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.788 [INFO][5847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:42.790033 containerd[1789]: 2025-02-13 20:49:42.789 [INFO][5840] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:42.790717 containerd[1789]: time="2025-02-13T20:49:42.790039200Z" level=info msg="TearDown network for sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\" successfully" Feb 13 20:49:42.790717 containerd[1789]: time="2025-02-13T20:49:42.790069500Z" level=info msg="StopPodSandbox for \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\" returns successfully" Feb 13 20:49:42.790717 containerd[1789]: time="2025-02-13T20:49:42.790682503Z" level=info msg="RemovePodSandbox for \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\"" Feb 13 20:49:42.790717 containerd[1789]: time="2025-02-13T20:49:42.790713404Z" level=info msg="Forcibly stopping sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\"" Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.825 [WARNING][5865] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f67249f5-a448-4596-816c-cb1a3d8e3628", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"3261ceaf54415fa7bcf298204e0e53765da1ab69f5249e0b35f155b5bb39335e", Pod:"coredns-7db6d8ff4d-gsw2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f93c128864", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.825 [INFO][5865] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.825 [INFO][5865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" iface="eth0" netns="" Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.825 [INFO][5865] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.825 [INFO][5865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.843 [INFO][5872] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" HandleID="k8s-pod-network.7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.843 [INFO][5872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.843 [INFO][5872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.850 [WARNING][5872] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" HandleID="k8s-pod-network.7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.850 [INFO][5872] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" HandleID="k8s-pod-network.7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-coredns--7db6d8ff4d--gsw2t-eth0" Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.852 [INFO][5872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:42.854223 containerd[1789]: 2025-02-13 20:49:42.852 [INFO][5865] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2" Feb 13 20:49:42.854223 containerd[1789]: time="2025-02-13T20:49:42.853877224Z" level=info msg="TearDown network for sandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\" successfully" Feb 13 20:49:42.861235 containerd[1789]: time="2025-02-13T20:49:42.861196761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:49:42.861475 containerd[1789]: time="2025-02-13T20:49:42.861264361Z" level=info msg="RemovePodSandbox \"7a84cc0be8fbdd8b95d8404f170155221859b3c4dbe3d22adb70f0316567efd2\" returns successfully" Feb 13 20:49:42.861622 containerd[1789]: time="2025-02-13T20:49:42.861596563Z" level=info msg="StopPodSandbox for \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\"" Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.892 [WARNING][5890] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0", GenerateName:"calico-apiserver-55b4cbcf97-", Namespace:"calico-apiserver", SelfLink:"", UID:"29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4cbcf97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef", Pod:"calico-apiserver-55b4cbcf97-vthz8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77159f7d935", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.893 [INFO][5890] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.893 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" iface="eth0" netns="" Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.893 [INFO][5890] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.893 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.912 [INFO][5896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" HandleID="k8s-pod-network.6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.912 [INFO][5896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.912 [INFO][5896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.917 [WARNING][5896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" HandleID="k8s-pod-network.6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.917 [INFO][5896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" HandleID="k8s-pod-network.6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.919 [INFO][5896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:42.921687 containerd[1789]: 2025-02-13 20:49:42.920 [INFO][5890] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:42.922293 containerd[1789]: time="2025-02-13T20:49:42.921750468Z" level=info msg="TearDown network for sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\" successfully" Feb 13 20:49:42.922293 containerd[1789]: time="2025-02-13T20:49:42.921792668Z" level=info msg="StopPodSandbox for \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\" returns successfully" Feb 13 20:49:42.922393 containerd[1789]: time="2025-02-13T20:49:42.922335571Z" level=info msg="RemovePodSandbox for \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\"" Feb 13 20:49:42.922393 containerd[1789]: time="2025-02-13T20:49:42.922382171Z" level=info msg="Forcibly stopping sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\"" Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.954 [WARNING][5914] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0", GenerateName:"calico-apiserver-55b4cbcf97-", Namespace:"calico-apiserver", SelfLink:"", UID:"29f5fcd4-fb6c-471a-836d-3fe2b4dc8a88", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4cbcf97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"028240ea28652a136034bf0738fd4d6b046b7f2dbeb9f9247c423e34c48889ef", Pod:"calico-apiserver-55b4cbcf97-vthz8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77159f7d935", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.954 [INFO][5914] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.954 [INFO][5914] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" iface="eth0" netns="" Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.954 [INFO][5914] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.954 [INFO][5914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.971 [INFO][5920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" HandleID="k8s-pod-network.6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.972 [INFO][5920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.972 [INFO][5920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.976 [WARNING][5920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" HandleID="k8s-pod-network.6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.976 [INFO][5920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" HandleID="k8s-pod-network.6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--vthz8-eth0" Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.979 [INFO][5920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:42.981077 containerd[1789]: 2025-02-13 20:49:42.980 [INFO][5914] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077" Feb 13 20:49:42.981752 containerd[1789]: time="2025-02-13T20:49:42.981130869Z" level=info msg="TearDown network for sandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\" successfully" Feb 13 20:49:42.990593 containerd[1789]: time="2025-02-13T20:49:42.990537716Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:49:42.990721 containerd[1789]: time="2025-02-13T20:49:42.990610917Z" level=info msg="RemovePodSandbox \"6e5456f3de5a24d3356a03490c5768d013c5c745d2e2957e98ac58c824db7077\" returns successfully" Feb 13 20:49:42.991128 containerd[1789]: time="2025-02-13T20:49:42.991098719Z" level=info msg="StopPodSandbox for \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\"" Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.021 [WARNING][5938] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0", GenerateName:"calico-kube-controllers-5d5ff6cc45-", Namespace:"calico-system", SelfLink:"", UID:"0c72f7b9-06eb-4b56-8496-0b119694b5cc", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d5ff6cc45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886", Pod:"calico-kube-controllers-5d5ff6cc45-sfbhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb2da6400b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.021 [INFO][5938] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.021 [INFO][5938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" iface="eth0" netns="" Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.021 [INFO][5938] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.022 [INFO][5938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.039 [INFO][5944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" HandleID="k8s-pod-network.94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.039 [INFO][5944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.039 [INFO][5944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.045 [WARNING][5944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" HandleID="k8s-pod-network.94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.045 [INFO][5944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" HandleID="k8s-pod-network.94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.047 [INFO][5944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:43.049335 containerd[1789]: 2025-02-13 20:49:43.048 [INFO][5938] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:43.050224 containerd[1789]: time="2025-02-13T20:49:43.049377914Z" level=info msg="TearDown network for sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\" successfully" Feb 13 20:49:43.050224 containerd[1789]: time="2025-02-13T20:49:43.049407415Z" level=info msg="StopPodSandbox for \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\" returns successfully" Feb 13 20:49:43.050224 containerd[1789]: time="2025-02-13T20:49:43.049975817Z" level=info msg="RemovePodSandbox for \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\"" Feb 13 20:49:43.050224 containerd[1789]: time="2025-02-13T20:49:43.050008918Z" level=info msg="Forcibly stopping sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\"" Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.085 [WARNING][5962] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0", GenerateName:"calico-kube-controllers-5d5ff6cc45-", Namespace:"calico-system", SelfLink:"", UID:"0c72f7b9-06eb-4b56-8496-0b119694b5cc", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d5ff6cc45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"0ef6ea0cf27a9f8c9e1bae6c8d5a5809092a14fa5ebfd6ee9fec5379d430b886", Pod:"calico-kube-controllers-5d5ff6cc45-sfbhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb2da6400b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.085 [INFO][5962] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.085 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" iface="eth0" netns="" Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.085 [INFO][5962] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.085 [INFO][5962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.106 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" HandleID="k8s-pod-network.94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.106 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.106 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.114 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" HandleID="k8s-pod-network.94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.114 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" HandleID="k8s-pod-network.94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--kube--controllers--5d5ff6cc45--sfbhx-eth0" Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.115 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:43.122018 containerd[1789]: 2025-02-13 20:49:43.118 [INFO][5962] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76" Feb 13 20:49:43.122018 containerd[1789]: time="2025-02-13T20:49:43.119863072Z" level=info msg="TearDown network for sandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\" successfully" Feb 13 20:49:43.130234 containerd[1789]: time="2025-02-13T20:49:43.129981923Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:49:43.130234 containerd[1789]: time="2025-02-13T20:49:43.130087824Z" level=info msg="RemovePodSandbox \"94f854914bd7d7655810bb3b2fdbd9b31cc62d0666b88244b8c8db6ba2b82e76\" returns successfully" Feb 13 20:49:43.131394 containerd[1789]: time="2025-02-13T20:49:43.131365730Z" level=info msg="StopPodSandbox for \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\"" Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.173 [WARNING][5986] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdbc6e37-5802-45c9-b35d-a25b6e25224b", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb", Pod:"csi-node-driver-rnpcs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali854ddb0d59c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.173 [INFO][5986] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.173 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" iface="eth0" netns="" Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.173 [INFO][5986] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.173 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.193 [INFO][5992] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" HandleID="k8s-pod-network.420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.193 [INFO][5992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.193 [INFO][5992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.199 [WARNING][5992] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" HandleID="k8s-pod-network.420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.199 [INFO][5992] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" HandleID="k8s-pod-network.420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.201 [INFO][5992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:43.203132 containerd[1789]: 2025-02-13 20:49:43.202 [INFO][5986] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:43.203802 containerd[1789]: time="2025-02-13T20:49:43.203142294Z" level=info msg="TearDown network for sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\" successfully" Feb 13 20:49:43.203802 containerd[1789]: time="2025-02-13T20:49:43.203171994Z" level=info msg="StopPodSandbox for \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\" returns successfully" Feb 13 20:49:43.203802 containerd[1789]: time="2025-02-13T20:49:43.203714097Z" level=info msg="RemovePodSandbox for \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\"" Feb 13 20:49:43.203802 containerd[1789]: time="2025-02-13T20:49:43.203749297Z" level=info msg="Forcibly stopping sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\"" Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.235 [WARNING][6010] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdbc6e37-5802-45c9-b35d-a25b6e25224b", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"4d48ab13f53c5588c7f6510c0cbf05b78b284e4671f16aa18c787388dae44bbb", Pod:"csi-node-driver-rnpcs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali854ddb0d59c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.236 [INFO][6010] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.236 [INFO][6010] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" iface="eth0" netns="" Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.236 [INFO][6010] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.236 [INFO][6010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.253 [INFO][6016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" HandleID="k8s-pod-network.420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.254 [INFO][6016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.254 [INFO][6016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.259 [WARNING][6016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" HandleID="k8s-pod-network.420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.259 [INFO][6016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" HandleID="k8s-pod-network.420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-csi--node--driver--rnpcs-eth0" Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.260 [INFO][6016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:43.262467 containerd[1789]: 2025-02-13 20:49:43.261 [INFO][6010] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a" Feb 13 20:49:43.262467 containerd[1789]: time="2025-02-13T20:49:43.262425994Z" level=info msg="TearDown network for sandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\" successfully" Feb 13 20:49:43.270142 containerd[1789]: time="2025-02-13T20:49:43.269838932Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:49:43.270142 containerd[1789]: time="2025-02-13T20:49:43.269919732Z" level=info msg="RemovePodSandbox \"420091d3d5ccc55e4b15bd5ec140f00bd923edc77e5984931fc4e16e445b4d4a\" returns successfully" Feb 13 20:49:43.270673 containerd[1789]: time="2025-02-13T20:49:43.270634036Z" level=info msg="StopPodSandbox for \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\"" Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.302 [WARNING][6034] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0", GenerateName:"calico-apiserver-55b4cbcf97-", Namespace:"calico-apiserver", SelfLink:"", UID:"61879ee1-72fa-4d45-9726-2a5c594597b2", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4cbcf97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307", Pod:"calico-apiserver-55b4cbcf97-tcxq9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali384388b40af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.302 [INFO][6034] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.302 [INFO][6034] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" iface="eth0" netns="" Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.302 [INFO][6034] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.302 [INFO][6034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.322 [INFO][6040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" HandleID="k8s-pod-network.0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.322 [INFO][6040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.322 [INFO][6040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.327 [WARNING][6040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" HandleID="k8s-pod-network.0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.327 [INFO][6040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" HandleID="k8s-pod-network.0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.328 [INFO][6040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:43.330227 containerd[1789]: 2025-02-13 20:49:43.329 [INFO][6034] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:43.330227 containerd[1789]: time="2025-02-13T20:49:43.330193738Z" level=info msg="TearDown network for sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\" successfully" Feb 13 20:49:43.330227 containerd[1789]: time="2025-02-13T20:49:43.330227638Z" level=info msg="StopPodSandbox for \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\" returns successfully" Feb 13 20:49:43.331123 containerd[1789]: time="2025-02-13T20:49:43.330938441Z" level=info msg="RemovePodSandbox for \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\"" Feb 13 20:49:43.331123 containerd[1789]: time="2025-02-13T20:49:43.330973542Z" level=info msg="Forcibly stopping sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\"" Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.363 [WARNING][6059] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0", GenerateName:"calico-apiserver-55b4cbcf97-", Namespace:"calico-apiserver", SelfLink:"", UID:"61879ee1-72fa-4d45-9726-2a5c594597b2", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4cbcf97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-faf44fbcb5", ContainerID:"5b7e8a112267c6bf903adc5be8dacb7e792cc9cbb0783c68b4d404a69cb37307", Pod:"calico-apiserver-55b4cbcf97-tcxq9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali384388b40af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.363 [INFO][6059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.363 [INFO][6059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" iface="eth0" netns="" Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.363 [INFO][6059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.363 [INFO][6059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.383 [INFO][6066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" HandleID="k8s-pod-network.0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.383 [INFO][6066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.383 [INFO][6066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.389 [WARNING][6066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" HandleID="k8s-pod-network.0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.389 [INFO][6066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" HandleID="k8s-pod-network.0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Workload="ci--4081.3.1--a--faf44fbcb5-k8s-calico--apiserver--55b4cbcf97--tcxq9-eth0" Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.391 [INFO][6066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:49:43.392889 containerd[1789]: 2025-02-13 20:49:43.391 [INFO][6059] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e" Feb 13 20:49:43.392889 containerd[1789]: time="2025-02-13T20:49:43.392834855Z" level=info msg="TearDown network for sandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\" successfully" Feb 13 20:49:43.401240 containerd[1789]: time="2025-02-13T20:49:43.401196598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:49:43.401360 containerd[1789]: time="2025-02-13T20:49:43.401271198Z" level=info msg="RemovePodSandbox \"0e5dc9b51be5f8724579db8847d3fa7f83261766be46b371faf1405351893a0e\" returns successfully" Feb 13 20:50:07.987105 kubelet[3444]: I0213 20:50:07.986853 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:50:43.218696 systemd[1]: Started sshd@7-10.200.8.38:22-10.200.16.10:60008.service - OpenSSH per-connection server daemon (10.200.16.10:60008). Feb 13 20:50:43.840595 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 60008 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:50:43.842125 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:43.846379 systemd-logind[1766]: New session 10 of user core. Feb 13 20:50:43.852467 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:50:44.358047 sshd[6207]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:44.361378 systemd[1]: sshd@7-10.200.8.38:22-10.200.16.10:60008.service: Deactivated successfully. Feb 13 20:50:44.366838 systemd-logind[1766]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:50:44.368312 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:50:44.369470 systemd-logind[1766]: Removed session 10. Feb 13 20:50:49.466533 systemd[1]: Started sshd@8-10.200.8.38:22-10.200.16.10:53482.service - OpenSSH per-connection server daemon (10.200.16.10:53482). Feb 13 20:50:50.093197 sshd[6240]: Accepted publickey for core from 10.200.16.10 port 53482 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0 Feb 13 20:50:50.094650 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:50.099280 systemd-logind[1766]: New session 11 of user core. Feb 13 20:50:50.102537 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 20:50:50.591332 sshd[6240]: pam_unix(sshd:session): session closed for user core
Feb 13 20:50:50.594207 systemd[1]: sshd@8-10.200.8.38:22-10.200.16.10:53482.service: Deactivated successfully.
Feb 13 20:50:50.599861 systemd-logind[1766]: Session 11 logged out. Waiting for processes to exit.
Feb 13 20:50:50.600508 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 20:50:50.601690 systemd-logind[1766]: Removed session 11.
Feb 13 20:50:55.699856 systemd[1]: Started sshd@9-10.200.8.38:22-10.200.16.10:53492.service - OpenSSH per-connection server daemon (10.200.16.10:53492).
Feb 13 20:50:56.320162 sshd[6263]: Accepted publickey for core from 10.200.16.10 port 53492 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:50:56.321883 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:50:56.326028 systemd-logind[1766]: New session 12 of user core.
Feb 13 20:50:56.331447 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:50:56.817281 sshd[6263]: pam_unix(sshd:session): session closed for user core
Feb 13 20:50:56.820780 systemd[1]: sshd@9-10.200.8.38:22-10.200.16.10:53492.service: Deactivated successfully.
Feb 13 20:50:56.826258 systemd-logind[1766]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:50:56.827299 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:50:56.829165 systemd-logind[1766]: Removed session 12.
Feb 13 20:51:01.228795 systemd[1]: run-containerd-runc-k8s.io-1d22313c154a3c09352f9f5b85a6ef150c1a74f895f1a0e5ce2e096c88c1e771-runc.aSn9fW.mount: Deactivated successfully.
Feb 13 20:51:01.924826 systemd[1]: Started sshd@10-10.200.8.38:22-10.200.16.10:39408.service - OpenSSH per-connection server daemon (10.200.16.10:39408).
Feb 13 20:51:02.547221 sshd[6301]: Accepted publickey for core from 10.200.16.10 port 39408 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:02.550759 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:02.559291 systemd-logind[1766]: New session 13 of user core.
Feb 13 20:51:02.563423 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:51:03.042154 sshd[6301]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:03.045363 systemd[1]: sshd@10-10.200.8.38:22-10.200.16.10:39408.service: Deactivated successfully.
Feb 13 20:51:03.051087 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 20:51:03.051986 systemd-logind[1766]: Session 13 logged out. Waiting for processes to exit.
Feb 13 20:51:03.053079 systemd-logind[1766]: Removed session 13.
Feb 13 20:51:03.149789 systemd[1]: Started sshd@11-10.200.8.38:22-10.200.16.10:39416.service - OpenSSH per-connection server daemon (10.200.16.10:39416).
Feb 13 20:51:03.771764 sshd[6315]: Accepted publickey for core from 10.200.16.10 port 39416 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:03.773603 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:03.778490 systemd-logind[1766]: New session 14 of user core.
Feb 13 20:51:03.783887 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 20:51:04.306913 sshd[6315]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:04.310241 systemd[1]: sshd@11-10.200.8.38:22-10.200.16.10:39416.service: Deactivated successfully.
Feb 13 20:51:04.316054 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 20:51:04.316378 systemd-logind[1766]: Session 14 logged out. Waiting for processes to exit.
Feb 13 20:51:04.317792 systemd-logind[1766]: Removed session 14.
Feb 13 20:51:04.413774 systemd[1]: Started sshd@12-10.200.8.38:22-10.200.16.10:39424.service - OpenSSH per-connection server daemon (10.200.16.10:39424).
Feb 13 20:51:05.039300 sshd[6344]: Accepted publickey for core from 10.200.16.10 port 39424 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:05.040818 sshd[6344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:05.046110 systemd-logind[1766]: New session 15 of user core.
Feb 13 20:51:05.051519 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 20:51:05.538086 sshd[6344]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:05.541509 systemd[1]: sshd@12-10.200.8.38:22-10.200.16.10:39424.service: Deactivated successfully.
Feb 13 20:51:05.546013 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 20:51:05.547905 systemd-logind[1766]: Session 15 logged out. Waiting for processes to exit.
Feb 13 20:51:05.548998 systemd-logind[1766]: Removed session 15.
Feb 13 20:51:10.646835 systemd[1]: Started sshd@13-10.200.8.38:22-10.200.16.10:37204.service - OpenSSH per-connection server daemon (10.200.16.10:37204).
Feb 13 20:51:11.268089 sshd[6378]: Accepted publickey for core from 10.200.16.10 port 37204 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:11.270004 sshd[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:11.274251 systemd-logind[1766]: New session 16 of user core.
Feb 13 20:51:11.278428 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 20:51:11.763937 sshd[6378]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:11.768783 systemd[1]: sshd@13-10.200.8.38:22-10.200.16.10:37204.service: Deactivated successfully.
Feb 13 20:51:11.774155 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 20:51:11.775346 systemd-logind[1766]: Session 16 logged out. Waiting for processes to exit.
Feb 13 20:51:11.776324 systemd-logind[1766]: Removed session 16.
Feb 13 20:51:16.872925 systemd[1]: Started sshd@14-10.200.8.38:22-10.200.16.10:37212.service - OpenSSH per-connection server daemon (10.200.16.10:37212).
Feb 13 20:51:17.493141 sshd[6395]: Accepted publickey for core from 10.200.16.10 port 37212 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:17.494988 sshd[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:17.500063 systemd-logind[1766]: New session 17 of user core.
Feb 13 20:51:17.505448 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 20:51:17.990745 sshd[6395]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:17.994313 systemd[1]: sshd@14-10.200.8.38:22-10.200.16.10:37212.service: Deactivated successfully.
Feb 13 20:51:18.000812 systemd-logind[1766]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:51:18.001675 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:51:18.002836 systemd-logind[1766]: Removed session 17.
Feb 13 20:51:23.096683 systemd[1]: Started sshd@15-10.200.8.38:22-10.200.16.10:53460.service - OpenSSH per-connection server daemon (10.200.16.10:53460).
Feb 13 20:51:23.716203 sshd[6427]: Accepted publickey for core from 10.200.16.10 port 53460 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:23.718047 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:23.725006 systemd-logind[1766]: New session 18 of user core.
Feb 13 20:51:23.730786 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:51:24.213363 sshd[6427]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:24.217021 systemd[1]: sshd@15-10.200.8.38:22-10.200.16.10:53460.service: Deactivated successfully.
Feb 13 20:51:24.223836 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:51:24.224977 systemd-logind[1766]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:51:24.226089 systemd-logind[1766]: Removed session 18.
Feb 13 20:51:24.319694 systemd[1]: Started sshd@16-10.200.8.38:22-10.200.16.10:53468.service - OpenSSH per-connection server daemon (10.200.16.10:53468).
Feb 13 20:51:24.940783 sshd[6441]: Accepted publickey for core from 10.200.16.10 port 53468 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:24.942526 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:24.947948 systemd-logind[1766]: New session 19 of user core.
Feb 13 20:51:24.951581 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:51:25.504018 sshd[6441]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:25.507162 systemd[1]: sshd@16-10.200.8.38:22-10.200.16.10:53468.service: Deactivated successfully.
Feb 13 20:51:25.512838 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:51:25.513763 systemd-logind[1766]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:51:25.514772 systemd-logind[1766]: Removed session 19.
Feb 13 20:51:25.611478 systemd[1]: Started sshd@17-10.200.8.38:22-10.200.16.10:53484.service - OpenSSH per-connection server daemon (10.200.16.10:53484).
Feb 13 20:51:26.231097 sshd[6452]: Accepted publickey for core from 10.200.16.10 port 53484 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:26.232736 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:26.236788 systemd-logind[1766]: New session 20 of user core.
Feb 13 20:51:26.241510 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:51:28.445958 sshd[6452]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:28.449162 systemd[1]: sshd@17-10.200.8.38:22-10.200.16.10:53484.service: Deactivated successfully.
Feb 13 20:51:28.454813 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:51:28.455831 systemd-logind[1766]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:51:28.456831 systemd-logind[1766]: Removed session 20.
Feb 13 20:51:28.553759 systemd[1]: Started sshd@18-10.200.8.38:22-10.200.16.10:53486.service - OpenSSH per-connection server daemon (10.200.16.10:53486).
Feb 13 20:51:29.172710 sshd[6473]: Accepted publickey for core from 10.200.16.10 port 53486 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:29.174280 sshd[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:29.178285 systemd-logind[1766]: New session 21 of user core.
Feb 13 20:51:29.183478 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:51:29.795092 sshd[6473]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:29.798134 systemd[1]: sshd@18-10.200.8.38:22-10.200.16.10:53486.service: Deactivated successfully.
Feb 13 20:51:29.803060 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:51:29.804875 systemd-logind[1766]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:51:29.805891 systemd-logind[1766]: Removed session 21.
Feb 13 20:51:29.902839 systemd[1]: Started sshd@19-10.200.8.38:22-10.200.16.10:43276.service - OpenSSH per-connection server daemon (10.200.16.10:43276).
Feb 13 20:51:30.525888 sshd[6485]: Accepted publickey for core from 10.200.16.10 port 43276 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:30.527506 sshd[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:30.531855 systemd-logind[1766]: New session 22 of user core.
Feb 13 20:51:30.538686 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:51:31.019141 sshd[6485]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:31.022817 systemd[1]: sshd@19-10.200.8.38:22-10.200.16.10:43276.service: Deactivated successfully.
Feb 13 20:51:31.029384 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:51:31.030227 systemd-logind[1766]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:51:31.031174 systemd-logind[1766]: Removed session 22.
Feb 13 20:51:36.127465 systemd[1]: Started sshd@20-10.200.8.38:22-10.200.16.10:43286.service - OpenSSH per-connection server daemon (10.200.16.10:43286).
Feb 13 20:51:36.746175 sshd[6521]: Accepted publickey for core from 10.200.16.10 port 43286 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:36.747888 sshd[6521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:36.752076 systemd-logind[1766]: New session 23 of user core.
Feb 13 20:51:36.757657 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:51:37.274679 sshd[6521]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:37.279468 systemd[1]: sshd@20-10.200.8.38:22-10.200.16.10:43286.service: Deactivated successfully.
Feb 13 20:51:37.284041 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:51:37.285243 systemd-logind[1766]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:51:37.286249 systemd-logind[1766]: Removed session 23.
Feb 13 20:51:42.383478 systemd[1]: Started sshd@21-10.200.8.38:22-10.200.16.10:50200.service - OpenSSH per-connection server daemon (10.200.16.10:50200).
Feb 13 20:51:43.002904 sshd[6535]: Accepted publickey for core from 10.200.16.10 port 50200 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:43.004447 sshd[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:43.008430 systemd-logind[1766]: New session 24 of user core.
Feb 13 20:51:43.011488 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 20:51:43.502046 sshd[6535]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:43.508640 systemd[1]: sshd@21-10.200.8.38:22-10.200.16.10:50200.service: Deactivated successfully.
Feb 13 20:51:43.512515 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 20:51:43.512924 systemd-logind[1766]: Session 24 logged out. Waiting for processes to exit.
Feb 13 20:51:43.514524 systemd-logind[1766]: Removed session 24.
Feb 13 20:51:48.608457 systemd[1]: Started sshd@22-10.200.8.38:22-10.200.16.10:50216.service - OpenSSH per-connection server daemon (10.200.16.10:50216).
Feb 13 20:51:49.229727 sshd[6570]: Accepted publickey for core from 10.200.16.10 port 50216 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:49.231298 sshd[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:49.235366 systemd-logind[1766]: New session 25 of user core.
Feb 13 20:51:49.242434 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 20:51:49.723803 sshd[6570]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:49.726799 systemd[1]: sshd@22-10.200.8.38:22-10.200.16.10:50216.service: Deactivated successfully.
Feb 13 20:51:49.731127 systemd-logind[1766]: Session 25 logged out. Waiting for processes to exit.
Feb 13 20:51:49.732863 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 20:51:49.734896 systemd-logind[1766]: Removed session 25.
Feb 13 20:51:54.833530 systemd[1]: Started sshd@23-10.200.8.38:22-10.200.16.10:59698.service - OpenSSH per-connection server daemon (10.200.16.10:59698).
Feb 13 20:51:55.453728 sshd[6583]: Accepted publickey for core from 10.200.16.10 port 59698 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:51:55.455525 sshd[6583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:55.461079 systemd-logind[1766]: New session 26 of user core.
Feb 13 20:51:55.467416 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 20:51:55.952770 sshd[6583]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:55.956266 systemd[1]: sshd@23-10.200.8.38:22-10.200.16.10:59698.service: Deactivated successfully.
Feb 13 20:51:55.962703 systemd-logind[1766]: Session 26 logged out. Waiting for processes to exit.
Feb 13 20:51:55.963531 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 20:51:55.964566 systemd-logind[1766]: Removed session 26.
Feb 13 20:52:01.059485 systemd[1]: Started sshd@24-10.200.8.38:22-10.200.16.10:48670.service - OpenSSH per-connection server daemon (10.200.16.10:48670).
Feb 13 20:52:01.679772 sshd[6604]: Accepted publickey for core from 10.200.16.10 port 48670 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:52:01.681588 sshd[6604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:01.687194 systemd-logind[1766]: New session 27 of user core.
Feb 13 20:52:01.693524 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 20:52:02.174427 sshd[6604]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:02.177459 systemd[1]: sshd@24-10.200.8.38:22-10.200.16.10:48670.service: Deactivated successfully.
Feb 13 20:52:02.182331 systemd-logind[1766]: Session 27 logged out. Waiting for processes to exit.
Feb 13 20:52:02.183161 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 20:52:02.185637 systemd-logind[1766]: Removed session 27.
Feb 13 20:52:07.282478 systemd[1]: Started sshd@25-10.200.8.38:22-10.200.16.10:48686.service - OpenSSH per-connection server daemon (10.200.16.10:48686).
Feb 13 20:52:07.903592 sshd[6661]: Accepted publickey for core from 10.200.16.10 port 48686 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:52:07.905107 sshd[6661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:07.909502 systemd-logind[1766]: New session 28 of user core.
Feb 13 20:52:07.912504 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 20:52:08.397120 sshd[6661]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:08.401215 systemd[1]: sshd@25-10.200.8.38:22-10.200.16.10:48686.service: Deactivated successfully.
Feb 13 20:52:08.405980 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 20:52:08.406876 systemd-logind[1766]: Session 28 logged out. Waiting for processes to exit.
Feb 13 20:52:08.407872 systemd-logind[1766]: Removed session 28.
Feb 13 20:52:13.502970 systemd[1]: Started sshd@26-10.200.8.38:22-10.200.16.10:51092.service - OpenSSH per-connection server daemon (10.200.16.10:51092).
Feb 13 20:52:14.132986 sshd[6675]: Accepted publickey for core from 10.200.16.10 port 51092 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:52:14.134713 sshd[6675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:14.139473 systemd-logind[1766]: New session 29 of user core.
Feb 13 20:52:14.142537 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 20:52:14.629273 sshd[6675]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:14.632553 systemd[1]: sshd@26-10.200.8.38:22-10.200.16.10:51092.service: Deactivated successfully.
Feb 13 20:52:14.638536 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 20:52:14.639489 systemd-logind[1766]: Session 29 logged out. Waiting for processes to exit.
Feb 13 20:52:14.640571 systemd-logind[1766]: Removed session 29.
Feb 13 20:52:19.737702 systemd[1]: Started sshd@27-10.200.8.38:22-10.200.16.10:41728.service - OpenSSH per-connection server daemon (10.200.16.10:41728).
Feb 13 20:52:20.356744 sshd[6720]: Accepted publickey for core from 10.200.16.10 port 41728 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:52:20.358319 sshd[6720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:20.362389 systemd-logind[1766]: New session 30 of user core.
Feb 13 20:52:20.366484 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 20:52:20.852660 sshd[6720]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:20.855816 systemd[1]: sshd@27-10.200.8.38:22-10.200.16.10:41728.service: Deactivated successfully.
Feb 13 20:52:20.861370 systemd-logind[1766]: Session 30 logged out. Waiting for processes to exit.
Feb 13 20:52:20.861683 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 20:52:20.863129 systemd-logind[1766]: Removed session 30.
Feb 13 20:52:25.963561 systemd[1]: Started sshd@28-10.200.8.38:22-10.200.16.10:41744.service - OpenSSH per-connection server daemon (10.200.16.10:41744).
Feb 13 20:52:26.583721 sshd[6734]: Accepted publickey for core from 10.200.16.10 port 41744 ssh2: RSA SHA256:h0QKYRpaJ3YpuLtK0dnejrIa1CFLKJSOg8yF75uXhP0
Feb 13 20:52:26.585358 sshd[6734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:26.590106 systemd-logind[1766]: New session 31 of user core.
Feb 13 20:52:26.595522 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 20:52:27.080817 sshd[6734]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:27.084417 systemd[1]: sshd@28-10.200.8.38:22-10.200.16.10:41744.service: Deactivated successfully.
Feb 13 20:52:27.090023 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 20:52:27.091106 systemd-logind[1766]: Session 31 logged out. Waiting for processes to exit.
Feb 13 20:52:27.092035 systemd-logind[1766]: Removed session 31.